diff --git "a/cncf_stackoverflow_qas.csv" "b/cncf_stackoverflow_qas.csv"
deleted file mode 100644
--- "a/cncf_stackoverflow_qas.csv"
+++ /dev/null
@@ -1,42629 +0,0 @@
-question,answer,tag
-"With ansible I am using j2 templates to replace files, and pass through environment vars unique to each environment and some shared. Neither the variables for global or local work. I have the following set-up and structure.
-The directory structure
-/project
-/project/playbooks
-/project/playbooks/do-something.yaml
-/project/playbooks/templates
-/project/playbooks/templates/myfile.json.j2
-/project/vars
-/project/vars/config_vars.yaml
-/project/inventory.ini
-
-Inventory file
-inventory.ini
-
-[myhosts]
-host1 ansible_host=192.168.1.10
-
-Config Vars file
-# vars/config_vars.yaml
-
-all:
-  vars:
-    my_value_one: ""value1""
-  children:
-    myhosts:
-      hosts:
-        host1:
-          my_value_two: ""value2""
-
-Playbook file
-# playbooks/do-something.yaml
-
----
-- name: Configure Nodes
-  hosts: myhosts
-  become: yes
-  vars_files:
-    - ../vars/config_vars.yml
-  tasks:
-    - name: Replace file with j2 template
-      template:
-        src: templates/myfile.json.j2
-        dest: /home/{{ ansible_user }}/my-folder/myfile.json
-
-
-j2 template file
-# templates/myfile.json.j2
-
-{
-  ""value_one"": ""{{ my_value_one }}"",
-  ""value_two"": ""{{ my_value_two }}"",
-}
-
-Ansible Version: 2.16.7
-Jinja: 3.1.4
-I have tried the following:-
-
-Changing directory structure
-Playing with inventory, changes structure, using yaml file
-Using include vars task
-I am aware I can do host1 ansible_host=192.168.1.10 my_value_one:""value"" in the inventory file. However this for some reason outputs random numbers.
-
-","1. 
-Changing directory structure
-Playing with inventory, changes structure, using yaml file
-
-You didn't specify what exactly you tried to change but the core reason is that the role structure is not the same as the project structure - the latter does not use the vars folder.
-You are expecting your variables from ../vars/config_vars.yml to be loaded as the inventory variables but they are not. Moreover, your ../vars/config_vars.yml is itself the inventory file written in YAML format but you're loading it as the variable file. So, for example, to use that my_value_two you would need to refer to all.children.myhosts.hosts.host1.my_value_two - which is anything but what should be done in Ansible.
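-For illustration only, with that file loaded via vars_files the template would have to dig into the nested structure, something like:
-  ""value_two"": ""{{ all.children.myhosts.hosts.host1.my_value_two }}"",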
-To benefit from the built-in features and pick up the variables automatically instead of loading them via vars_files, you can follow the recommended project structures and do some of the following things:
-
-move the variables to playbook group_vars
-move the variables to inventory group_vars
-just simplify the things and configure a proper YAML inventory without the need to use another INI one.
-
-Consider an example for the last option:
-# inventories/my_hosts.yaml
----
-all:
-  vars:
-    my_value_one: ""value1""
-  children:
-    myhosts:
-      hosts:
-        host1:
-          ansible_host: 192.168.1.10
-          my_value_two: ""value2""
-
-Now, once you remove the vars_files from your playbook this is enough to get what you want with the below command (split to multiple lines to make it readable without scrolling):
-ansible-playbook playbooks/do-something.yaml \
-  --inventory inventories/my_hosts.yaml
-
-If you want to store the variables in the folders instead, you have multiple options covered by the documentation. Let's consider just one of them:
-# inventories/my_hosts/group_vars/all.yaml
----
-my_value_one: ""value1""
-
-# inventories/my_hosts/host_vars/host1.yaml
----
-my_value_two: ""value2""          
-
-# inventories/my_hosts/inventory.yaml
----
-all:
-  children:
-    myhosts:
-      hosts:
-        host1:
-          ansible_host: 192.168.1.10
-
-The equivalent command could look like this:
-ansible-playbook playbooks/do-something.yaml \
-  --inventory inventories/my_hosts
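-The first option (playbook group_vars) is similar; as a minimal sketch (assuming the playbook stays in the playbooks/ folder), Ansible also picks up a group_vars directory placed next to the playbook:
-# playbooks/group_vars/all.yaml
----
-my_value_one: ""value1""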
-
-",Ansible
-"I want to make some playbooks for checkpoint; My question is: for checkpoint is there a specific connection string from ansible?
-`Procedure to generate database backup in Security Management Server:
-$MDS_FWDIR/scripts/migrate_server import/export -v R81.10 -skip_upgrade_tools_check /path_to_file/export.tgz`
-Regards;
-I would like to be able to do this without modules, since I use an offline installation.
-","1. You can use match,search or regex to match strings against a substring.
-Read more about this in official docs testing strings
-Or if you need specific package(Nginx example) then
-when: nginxVersion.stdout != 'nginx version: nginx/1.2.6'
-
-will check if Nginx is not present on your server and install 1.2.6.
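-As a minimal sketch (the registered variable name here is illustrative), the search test can be used in a condition like this:
-- shell: nginx -v 2>&1
-  register: nginx_version
-  ignore_errors: true
-
-- debug:
-    msg: ""nginx 1.2.6 detected""
-  when: nginx_version.stdout is search('1.2.6')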
-",Ansible
-"I cannot seem to get an Ansible debug statement in a loop to display the individual item values when running the debug statement in a role. For comparison, given this playbook named ./test.yaml:
-- hosts: localhost
-  tasks:
-  - name: test
-    debug:
-      var: item
-    loop:
-      - 1
-      - 2
-
-This command:
-ansible-playbook test.yaml
-
-Produces this result:
-PLAY [localhost] *****...
-TASK [test] ****...
-ok: [localhost] => (item=1) => {
-    ""item"": 1
-}
-ok: [localhost] => (item=2) => {
-   ""item"": 2
-}
-
-But given this file: ./roles/TestRole/tasks/main.yaml:
-- name: test
-  debug:
-    var: item
-  loop:
-    - 1
-    - 2
-
-This command:
-ansible localhost -m include_role -a name=TestRole
-
-Produces this result:
-localhost | SUCCESS => {
-    ""changed"": false,
-    ""include_variables"": {
-        ""name"": ""FooRole""
-    }
-}
-localhost | SUCCESS => {
-    ""msg"" ""All items completed""
-}
-
-So - rather than displaying the item values, the debug statement in the role just says ""All items completed"". It looks like looped debug statements in roles behave differently than looped debug statements in playbooks. Am I doing something wrong? Running Ansible 2.7.9 on python 2.7.5.
-","1. This is effectively what you get from the adhoc command (and I have absolutely no clue why). Meanwhile this is a rather edge case of using it. You would rather include a role in a playbook. Both playbook examples below will give you the result you are expecting:
-Classic role execution
----
-- name: test1 for role
-  hosts: localhost
-  gather_facts: false
-  roles:
-    - role: TestRole
-
-Include role
----
-- name: test2 for roles
-  hosts: localhost
-  gather_facts: false
-  tasks:
-    - name: include role
-      include_role:
-        name: TestRole
-
-
-2. You can try using aiansible to debug a playbook or role: https://github.com/sunnycloudy/aiansible
-DEBUG INFO:
-/root/.nujnus/test_suite/K8s_v2_22_2/install_k8s_v2_22_2/install/kubespray/roles/kubespray-defaults/tasks/main.yaml:2
-
-
-    2|- name: Configure defaults
-    3|  debug:
-    4|    msg: ""Check roles/kubespray-defaults/defaults/main.yml""
-    5|  tags:
-    6|    - always
-    7|
-    8|# do not run gather facts when bootstrap-os in roles
-    9|- name: set fallback_ips
-   10|  import_tasks: fallback_ips.yml
-   11|  when:
-
-
-Saturday 25 May 2024  23:07:13 +0800 (0:00:00.101)       10:20:04.700 ********* 
-
-TASK [kubespray-defaults : Configure defaults] *****************************************************************************************************************************************************************
-ok: [test1] => {
-    ""msg"": ""Check roles/kubespray-defaults/defaults/main.yml""
-}
-Aiansible(CN) => result._result
-{'msg': 'Check roles/kubespray-defaults/defaults/main.yml', '_ansible_verbose_always': True, '_ansible_no_log': False, 'changed': False}
-Aiansible(CN) => bt
-0:/root/.nujnus/test_suite/K8s_v2_22_2/install_k8s_v2_22_2/install/kubespray/playbooks/ansible_version.yml:11=>Check 2.11.0 <= Ansible version < 2.13.0
-1:/root/.nujnus/test_suite/K8s_v2_22_2/install_k8s_v2_22_2/install/kubespray/playbooks/ansible_version.yml:20=>Check that python netaddr is installed
-2:/root/.nujnus/test_suite/K8s_v2_22_2/install_k8s_v2_22_2/install/kubespray/playbooks/ansible_version.yml:28=>Check that jinja is not too old (install via pip)
-3:/root/.nujnus/test_suite/K8s_v2_22_2/install_k8s_v2_22_2/install/kubespray/roles/download/tasks/prep_download.yml:2=>download : prep_download | Set a few facts
-4:/root/.nujnus/test_suite/K8s_v2_22_2/install_k8s_v2_22_2/install/kubespray/roles/download/tasks/prep_download.yml:8=>download : prep_download | On localhost, check if passwordless root is possible
-5:/root/.nujnus/test_suite/K8s_v2_22_2/install_k8s_v2_22_2/install/kubespray/roles/download/tasks/prep_download.yml:23=>download : prep_download | On localhost, check if user has access to the container runtime without using sudo
-6:/root/.nujnus/test_suite/K8s_v2_22_2/install_k8s_v2_22_2/install/kubespray/roles/download/tasks/prep_download.yml:38=>download : prep_download | Parse the outputs of the previous commands
-7:/root/.nujnus/test_suite/K8s_v2_22_2/install_k8s_v2_22_2/install/kubespray/roles/download/tasks/prep_download.yml:48=>download : prep_download | Check that local user is in group or can become root
-8:/root/.nujnus/test_suite/K8s_v2_22_2/install_k8s_v2_22_2/install/kubespray/roles/download/tasks/prep_download.yml:59=>download : prep_download | Register docker images info
-9:/root/.nujnus/test_suite/K8s_v2_22_2/install_k8s_v2_22_2/install/kubespray/roles/download/tasks/prep_download.yml:68=>download : prep_download | Create staging directory on remote node
-10:/root/.nujnus/test_suite/K8s_v2_22_2/install_k8s_v2_22_2/install/kubespray/roles/download/tasks/prep_download.yml:78=>download : prep_download | Create local cache for files and images on control node
-11:/root/.nujnus/test_suite/K8s_v2_22_2/install_k8s_v2_22_2/install/kubespray/roles/download/tasks/main.yml:10=>download : download | Get kubeadm binary and list of required images
-12:/root/.nujnus/test_suite/K8s_v2_22_2/install_k8s_v2_22_2/install/kubespray/roles/download/tasks/main.yml:19=>download : download | Download files / images
-13:/root/.nujnus/test_suite/K8s_v2_22_2/install_k8s_v2_22_2/install/kubespray/roles/kubespray-defaults/tasks/main.yaml:2=>kubespray-defaults : Configure defaults
-Aiansible(CN) => a
-msg: Check roles/kubespray-defaults/defaults/main.yml
-
-",Ansible
-" This question is NOT answered. Someone mentioned environment variables. Can you elaborate on this?
-5/28/2024 - Simplified the question (below):
-This is an oracle problem. I have 4 PCs. I need program 1 run on the one machine that has Drive E. Out of the remaining 3 that don't have drive E, I need program 2 run on ONLY one of the 3. For the other 2, don't run anything.
-This seems like a simple problem, but not in ansible. It keeps coming up. Especially in error conditions.  I need a global variable. One that I can set when processing one host play, then check at a later time with another host. In a nutshell, so I can branch later in the playbook, depending on the variable.
-We have no control over custom software installation, but if it is installed, we have to put different software on other machines. To top it off, the installations vary, depending on the VM folder. My kingdom for a global var.
-The scope of variables relates ONLY to the current ansible_hostname. Yes, we have group_vars/all.yml as globals, but we can't set them in a play. If I set a variable, no other host's play/task can see it. I understand the scope of variables, but I want to SET a global variable that can be read throughout all playbook plays.
-The actual implementation is unimportant but variable access is (important).
-My Question: Is there a way to set a variable that can be checked when running a different task on another host? Something like setGlobalSpaceVar(myvar, true)? I know there isn't any such method, but I'm looking for a work-around. Rephrasing: set a variable in one task for one host, then later in another task for another host, read that variable.
-The only way I can think of is to change a file on the controller, but that seems bogus.
-An example
-The following relates to oracle backups and our local executable, but I'm keeping it generic. For below - Yes, I can do a run_once, but that won't answer my question. This variable access problem keeps coming up in different contexts.
-I have 4 xyz servers. I have 2 programs that need to be executed, but only on 2 different machines. I don't know which. The settings may change for different VM environments.
-Our programOne is run on the server that has a drive E. I can find which server has drive E using Ansible and set a variable (driveE_machine) accordingly. It only applies to that host, so the other 3 machines won't have driveE_machine set.
-In a later play, I need to execute another program on ONLY one of the other 3 machines. That means I need to set a variable that can be read by the other 2 hosts that didn't run the 2nd program.
-I'm not sure how to do it.
-Inventory file:
-[xyz]
-serverxyz[1:4].private.mystuff
-
-Playbook example:
----
-- name: stackoverflow variable question
-  hosts: xyz
-  gather_facts: no
-  serial: 1
-  tasks:
-    - name: find out who has drive E
-      win_shell: dir e:\
-      register: adminPage
-      ignore_errors: true
-
-    # This sets a variable that can only be read for that host
-    - name: set fact driveE_machine when rc is 0
-      set_fact:
-        driveE_machine: ""{{inventory_hostname}}""
-      when: adminPage.rc == 0
-
-    - name: run program 1
-      include: tasks/program1.yml
-      when: driveE_machine is defined
-
-    # program2.yml executes program2 and needs to set some kind of variable
-    # so this include can only be executed once for the other 3 machines
-    # (not one that has driveE_machine defined and ???
-    - name: run program 2
-      include: tasks/program2.yml
-      when: driveE_machine is undefined and ???
-      # please don't say run_once: true - that won't solve my variable access question
-
-Is there a way to set a variable that can be checked when running a task on another host?
-","1. No sure what you actually want, but you can set a fact for every host in a play with a single looped task (some simulation of global variable):
-playbook.yml
----
-- hosts: mytest
-  gather_facts: no
-  vars:
-  tasks:
-    # Set myvar fact for every host in a play
-    - set_fact:
-        myvar: ""{{ inventory_hostname }}""
-      delegate_to: ""{{ item }}""
-      with_items: ""{{ play_hosts }}""
-      run_once: yes
-    # Ensure that myvar is a name of the first host
-    - debug:
-        msg: ""{{ myvar }}""
-
-hosts
-[mytest]
-aaa ansible_connection=local
-bbb ansible_connection=local
-ccc ansible_connection=local
-
-result
-PLAY [mytest] ******************
-META: ran handlers
-
-TASK [set_fact] ******************
-ok: [aaa -> aaa] => (item=aaa) => {""ansible_facts"": {""myvar"": ""aaa""}, ""ansible_facts_cacheable"": false, ""changed"": false, ""failed"": false, ""item"": ""aaa""}
-ok: [aaa -> bbb] => (item=bbb) => {""ansible_facts"": {""myvar"": ""aaa""}, ""ansible_facts_cacheable"": false, ""changed"": false, ""failed"": false, ""item"": ""bbb""}
-ok: [aaa -> ccc] => (item=ccc) => {""ansible_facts"": {""myvar"": ""aaa""}, ""ansible_facts_cacheable"": false, ""changed"": false, ""failed"": false, ""item"": ""ccc""}
-
-TASK [debug] ******************
-ok: [aaa] => {
-    ""msg"": ""aaa""
-}
-ok: [bbb] => {
-    ""msg"": ""aaa""
-}
-ok: [ccc] => {
-    ""msg"": ""aaa""
-}
-
-
-2. https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#fact-caching
-
-As shown elsewhere in the docs, it is possible for one server to reference variables about another, like so:
-  {{ hostvars['asdf.example.com']['ansible_os_family'] }}
-
-This even applies to variables set dynamically in playbooks.
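-For instance, a minimal sketch of setting a fact on one host and reading it from another in a later play (host names here are illustrative):
----
-- hosts: host1
-  gather_facts: no
-  tasks:
-    - set_fact:
-        driveE_machine: ""{{ inventory_hostname }}""
-
-- hosts: host2
-  gather_facts: no
-  tasks:
-    - debug:
-        msg: ""{{ hostvars['host1']['driveE_machine'] | default('not set') }}""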
-
-3. This answer doesn't pre-suppose your hostnames, nor how many hosts have a ""drive E:"". It will select the first one that is reachable that also has a ""drive E:"". I have no windows boxes, so I fake it with a random coin toss for whether a host does or doesn't; you can of course use your original win_shell task, which I've commented out.
----
-
-- hosts: all
-  gather_facts: no
-  # serial: 1
-  tasks:
-    # - name: find out who has drive E
-    #   win_shell: dir e:\
-    #   register: adminPage
-    #   ignore_errors: true
-
-    - name: ""Fake finding hosts with drive E:.""
-      # I don't have hosts with ""drive E:"", so fake it.
-      shell: |
-        if [ $RANDOM -gt 10000 ] ; then
-            exit 1
-        else
-            exit 0
-        fi
-      args:
-        executable: /bin/bash
-      register: adminPage
-      failed_when: false
-      ignore_errors: true
-      
-    - name: ""Dict of hosts with E: drives.""
-      run_once: yes
-      set_fact:
-        driveE_status: ""{{ dict(ansible_play_hosts_all |
-                            zip(ansible_play_hosts_all |
-                                map('extract', hostvars, ['adminPage', 'rc'] ) | list
-                               ))
-                        }}""
-
-    - name: ""List of hosts with E: drives.""
-      run_once: yes
-      set_fact:
-        driveE_havers: ""{%- set foo=[] -%}
-                        {%- for dE_s in driveE_status -%}
-                           {%- if driveE_status[dE_s] == 0 -%}
-                             {%- set _ = foo.append( dE_s ) -%}
-                           {%- endif -%}
-                        {%- endfor -%}{{ foo|list }}""                                     
-
-    - name: ""First host with an E: drive.""
-      run_once: yes
-      set_fact:
-        driveE_first: ""{%- set foo=[] -%}
-                        {%- for dE_s in driveE_status -%}
-                           {%- if driveE_status[dE_s] == 0 -%}
-                             {%- set _ = foo.append( dE_s ) -%}
-                           {%- endif -%}
-                        {%- endfor -%}{{ foo|list|first }}""                                     
-
-    - name: Show me.
-      run_once: yes
-      debug:
-        msg:
-          - ""driveE_status: {{ driveE_status }}""
-          - ""driveE_havers: {{ driveE_havers }}""
-          - ""driveE_first: {{ driveE_first }}""
-
-",Ansible
-"Let's say I have the following federated graph:
-type Query {
-    products: [Product]!
-}
-
-// resolved by `discounts-service`
-type Discount {
-    priceWithDiscount: Float
-    expired: Boolean
-}
-
-// resolved by `pricing-service`
-type Price @key(fields: ""sku"") @extends {
-    sku: String!
-    amount: Int!
-    discount: Discount! @requires(fields: ""amount"")
-}
-
-// resolved by `products-service`
-type Product @key(fields: ""sku"") {
-    sku: String!
-    title: String!
-    price: Price!
-}
-
-So, in summary, I have a query returning n products. Each Product has a Price, which has a Discount node.
-However, now let's say that the Discount is now tied to individual users. So now the resolver must know at least two properties about the user who is querying: its id and segmentation.
-So we have an User node like:
-// resolved by `users-service`
-type User @key(fields: ""id"") {
-    id: String!
-    name: String!
-    segmentation: String!
-    // ....
-}
-
-Now, how can I make the discounts-service receive those properties (User data) in an efficient* manner in a Federated Apollo Schema?
-*I'm saying efficient because a naive solution, such as just adding User as @external in the Product/Price node, would make Apollo Router call the users-service for the same user for each product in the array, unnecessarily.
---
-A possible solution would be adding parameters to the discount node:
-type Price @key(fields: ""sku"") @extends {
-    // ...
-    discount(userId: String!, segmentation: String!): Discount! @requires(fields: ""amount"")
-}
-
-However, such a solution requires the caller to first fetch the user data, and the input values are untrustworthy since they're given by the client and not fetched by the Apollo Router (i.e., a user can lie about its segmentation).
-","1. For something like this, assuming your User is somehow authenticated on the platform, I'd pass an optional Authorization header to the request in the form of a Bearer: <token> scheme. Then, in your resolver, call a getUserDiscount helper function that does the following:
-
-if no token scheme is present, return null or some other specific value, and compute the basic discount as you already did with no specific User discount attached,
-if the token scheme is present, validate the token, and throw on invalid scheme/token,
-using the userId found inside the validated token, get the id and segmentation for that user and return these values to your resolver for further computation and/or forwarding as data in your response.
-
-This way you don't have an extra GraphQL request, everything is handled on the server side, and it's relatively safe too since you'll be validating the (JWT) token against a secret. But again this assumes the User is authenticated, which I believe is a sensible assumption in this context.
-You can then re-use this helper function whenever necessary, although in your particular use case it'd make sense to have it on the resolver for the discount field only.
-",Apollo
-"i am new to Bosh and trying to create my first release.However,when deploying the created release i am getting this error message.I also tried to download a release published by the community and get it running but i am getting the same error message.
- L Error: Action Failed get_task: Task 91c8b925-fb75-49cb-4f28-8d783013b255 result: Compiling package nginx: Fetching package nginx: Fetching package blob 22a70ff2-5c50-4623-bf5b-47f4f7bf8ed8: Getting blob from inner blobstore: Getting blob from inner blobstore: Shelling out to bosh-blobstore-dav cli: Running command: 'bosh-blobstore-dav -c /var/vcap/bosh/etc/blobstore-dav.json get 22a70ff2-5c50-4623-bf5b-47f4f7bf8ed8 /var/vcap/data/tmp/bosh-blobstore-externalBlobstore-Get484019718', stdout: 'Error running app - Getting dav blob 22a70ff2-5c50-4623-bf5b-47f4f7bf8ed8: Get /d8/22a70ff2-5c50-4623-bf5b-47f4f7bf8ed8: unsupported protocol scheme """"', stderr: '': exit status 1
-Task 43 | 20:39:58 | Error: Action Failed get_task: Task 91c8b925-fb75-49cb-4f28-8d783013b255 result: Compiling package nginx: Fetching package nginx: Fetching package blob 22a70ff2-5c50-4623-bf5b-47f4f7bf8ed8: Getting blob from inner blobstore: Getting blob from inner blobstore: Shelling out to bosh-blobstore-dav cli: Running command: 'bosh-blobstore-dav -c /var/vcap/bosh/etc/blobstore-dav.json get 22a70ff2-5c50-4623-bf5b-47f4f7bf8ed8 /var/vcap/data/tmp/bosh-blobstore-externalBlobstore-Get484019718', stdout: 'Error running app - Getting dav blob 22a70ff2-5c50-4623-bf5b-47f4f7bf8ed8: Get /d8/22a70ff2-5c50-4623-bf5b-47f4f7bf8ed8: unsupported protocol scheme """"', stderr: '': exit status 1
-
-","1. It was due to using ubuntu-trusty stemcell.Updated to ubuntu-jammy and it works as my VM is also running ubuntu-jammy.
-",BOSH
-"We have build Ejabberd in AWS EC2 instance and have enabled the clustering in the 6 Ejabberd servers in Tokyo, Frankfurt, and Singapore regions.
-The OS, middleware, applications and settings for each EC2 instance are exactly the same.
-But currently, the Ejabberd CPUs in the Frankfurt and Singapore regions are overloaded.
-The CPU of Ejabberd in the Japan region is normal.
-Could you please tell me the suspicious part?
-","1. You can take a look at the ejabberd log files of the problematic (and the good) nodes, maybe you find some clue.
-You can use the undocumented ""ejabberdctl etop"" shell command in the problematic nodes. It's similar to ""top"", but runs inside the erlang virtual machine that runs ejabberd
-ejabberdctl etop
-
-========================================================================================
- ejabberd@localhost                                                        16:00:12
- Load:  cpu         0               Memory:  total       44174    binary       1320
-        procs     277                        processes    5667    code        20489
-        runq        1                        atom          984    ets          5467
-
-Pid            Name or Initial Func    Time    Reds  Memory    MsgQ Current Function
-----------------------------------------------------------------------------------------
-<9135.1252.0>  caps_requests_cache     2393       1    2816       0 gen_server:loop/7   
-<9135.932.0>   mnesia_recover           480      39    2816       0 gen_server:loop/7   
-<9135.1118.0>  dets:init/2               71       2    5944       0 dets:open_file_loop2
-<9135.6.0>     prim_file:start/0         63       1    2608       0 prim_file:helper_loo
-<9135.1164.0>  dets:init/2               56       2    4072       0 dets:open_file_loop2
-<9135.818.0>   disk_log:init/2           49       2    5984       0 disk_log:loop/1     
-<9135.1038.0>  ejabberd_listener:in      31       2    2840       0 prim_inet:accept0/3 
-<9135.1213.0>  dets:init/2               31       2    5944       0 dets:open_file_loop2
-<9135.1255.0>  dets:init/2               30       2    5944       0 dets:open_file_loop2
-<9135.0.0>     init                      28       1    3912       0 init:loop/1         
-========================================================================================
-
-",BOSH
-"I would like for a job J from a release R in a bosh deployment to start with a certain environmental variable E set, which is not available in the job's properties for configuration
-Can this be specified in the deployment file or when calling the bosh cli?
-","1. Unfortunately, I am pretty sure this is not possible. BOSH does not understand environment variables. Instead, it executes an ERB template with the properties configured in the manifest. For example in this job template from log-cache is executed with the properties from a manifest along with defaults from the job spec.
-If you need to have a particular environment variable set for testing/development, you can bosh ssh on to an instance where you are going to run the job and then mutate the generated file. Given the CF deployment example, bosh ssh doppler/0 and then modify the generated bpm.yml in /var/vcap/jobs/log-cache/config/bpm.yml. This is a workaround for debugging and development, if you need to set a field in a manifest reach out to the release author and open an issue or PR the ability to set environment variable as a property by adding it to the job spec.
-(note the versions used in the example are just from HEAD and may not actually work)
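-For reference, a hand-edited bpm.yml along those lines could look roughly like the sketch below (the process name, executable path and variable name are illustrative, not taken from the real log-cache job):
-# /var/vcap/jobs/log-cache/config/bpm.yml (hand-edited for debugging only)
-processes:
-- name: log-cache
-  executable: /var/vcap/packages/log-cache/log-cache
-  env:
-    MY_DEBUG_FLAG: ""true""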
-",BOSH
-"I have a concourse environment deployed using bosh. It is configured with AWS Secrets Manager.
-The pipeline secret template is of the form /concourse/{{.Team}}/{{.Secret}}
-I have a secret /concourse/team1/general created in AWS Secrets Manager (Other type of secrets) with the below value.
-{
-  ""gitbranch"": ""master"",
-  ""hello"": ""2"",
-  ""general"": ""hi""
-}
-
-I have a concourse pipeline hello-world.yml set in team1 team.
----
-jobs:
-- name: job
-  public: true
-  plan:
-  - task: check-secret
-    config:
-      platform: linux
-      image_resource:
-        type: registry-image
-        source: { repository: busybox }
-      run:
-        path: echo
-        args: [""((general))""]
-
-This pipeline outputs the value as
-{""gitbranch"":""master"",""hello"":""2"",""general"":""hi""}
-
-But, if I change the args (last line) in pipeline to args: [""((general.gitbranch))""], then, I get the below error
-failed to interpolate task config: cannot access field 'gitbranch' of non-map value ('string') from var: general.gitbranch
-
-Is it possible to access any of the key value pairs in the secret from AWS Secrets Manager, in the concourse pipeline? If yes, how to do so?
-","1. Answering my own question.
-By creating the secret using the CLI with the parameter --secret-binary, I was able to fetch the key-value pairs.
-(Previously, I was creating the secret from the AWS console, which created it as a secret string.)
-I used the below command to update my secret to create the secret as a binary.
-b64key=$(base64 secrets.json)
-aws secretsmanager update-secret \
-    --secret-id  /concourse/team1/general \
-    --secret-binary ""$b64key""
-
-I found this using-aws-secrets-manager-with-concourse-ci and it was helpful in solving the issue.
-If anyone knows a way to do this in console, kindly let me know.
-",BOSH
-"I'm trying to make a case against automated checkins to version control. My group at work has written some system build tools around CFEngine, and now they think these tools should automatically do checkins of things like SSH host keys.
-Now, as a programmer, my initial gut reaction is that nothing should be calling ""svn up"" and ""svn ci"" aside from a human. In a recent case, the .rNNNN merged versions of a bunch of files broke the tools, which is what started this discussion.
-Now, the guy designing the tools has basically admitted he's using SVN in order to sync files around, and that he could basically replace all this with an NFS mount. He even said he would wrap ""svn diff"" into ""make diff"" because it seemed better than all of us knowing how SVN works.
-So... I'm asking how I can make a good argument for NOT having Makefiles, shell scripts, etc, wrap Subversion commands, when Subversion is basically being used to synchronize files on different machines.
-Here's my list, so far:
-
-We aren't really versioning this data, so it shouldn't go in svn.
-We've said it could be replaced by an NFS mount, so why don't we do that.
-Homegrown tools are now wrapping SVN, and software is always going to have bugs, therefore our SVN revisions are now going to have messes made of a revision when we encounter bugs.
-
-How can I make this case?
-","1. SVN isn't a bad tool to use to synchronise files on machines! If I want a bunch of machines to have the exact same version of file then having them in subversion and being able to check them out is a godsend. Yeah, you could use tools such as rsync or have NFS mounts to keep them up-to-date but at least subversion allows you to store all revisions and roll-back/forward when you want.
-One thing I will say though, is having machines automatically update from the trunk is probably a bad idea when those files could break your system, they should update from a tag. That way, you can check things in and maintain revision history TEST them and then apply a tag that will sync the files on other machines when they update.
-I understand your concerns for having these tools auto-commit because you perhaps feel there should be some sort of human validation required but for me, removing human interaction removes human error from the process which is what I want from this type of system.
-The human aspect should come into things when you are confirming all is working before setting a production tag on the svn tree.
-In summary, your process is fine, blindly allowing an automated process to push files to an environment where they could break things is not.
-
-2. Actually SVN is better than NFS. At least it provides an atomically consistent global view (i.e. you won't sync a half-committed view of the files). I would argue against automated development commits because they do not allow for a peer review process, but for administration jobs SVN is quite useful. My 2c.
-
-3. It's another example of the Old shoe v's the glass bottle debate.
-In this instance the NFS mount may be the way to go; our nightly build commits versioning changes, and that's it.
-Your SVN repository is what you use to help version and build your code. If what you're doing jeopardises this in any way, THEN DON'T DO IT.
-If SVN is absolutely, positively the best way to do this, then create a separate repository and use that, leave the critical repository alone.
-",CFEngine
-"CFEngine is great but I can't figure out how to copy the templates defined on the policy servers to the related hosts.
-For example, I'm looking to deploy an nginx.conf, I made a policy on my main server:
-bundle agent loadbalancers{
-
- files:
-  ubuntu::
-   ""/etc/nginx/nginx.conf""
-    create => ""true"",
-    edit_template => ""/tmp/nginx.conf.template"",
-    template_method => ""mustache"",
-    template_data => parsejson('
-       {
-          ""worker_processes"": ""auto"",
-          ""worker_rlimit_nofile"": 32768,
-          ""worker_connections"": 16384,
-        }
-    ');
-}
-
-But obviously, CFEngine can't find /tmp/nginx.conf.template on all the other clients...
-It looks like templates are not copied from the server to the clients; what did I miss? I guess I misunderstood something...
-The documentation doesn't explain how to propagate template files, so I hope you can help me, thanks!
-","1. I'm glad you're enjoying CFEngine. If you want one file to be a copy of another file, you use a copy_from body to
-specify it's source.
-For example:
-bundle agent loadbalancers{
-
-  files:
-    ubuntu::
-
-      ""/tmp/nginx.conf.template""
-        comment => ""We want to be sure and have an up to date template"",
-        copy_from => remote_dcp( ""/var/cfengine/masterfiles/templates/nginx.conf.mustache"",
-                                 $(sys.policy_hub));
-
-      ""/etc/nginx/nginx.conf""
-        create => ""true"",
-        edit_template => ""/tmp/nginx.conf.template"",
-        template_method => ""mustache"",
-        template_data => parsejson('
-       {
-          ""worker_processes"": ""auto"",
-          ""worker_rlimit_nofile"": 32768,
-          ""worker_connections"": 16384,
-       }
-    ');
-
-}
-
-Some people arrange for their templates to be copied as part of their normal
-policy updates; then it's very convenient to just reference a template relative
-to your policy file.
-For example, lets say your policy is in
-services/my_nginx_app/policy/loadbalancers.cf, and your template is
-services/my_nginx_app/templates/nginx.conf.mustache. Then, if that tempalte is
-updated as part of the normal policy update you don't have to promise a seperate
-file copy, instead just reference the path to the template relateve to the
-policy file.
-bundle agent loadbalancers{
-
-  files:
-    ubuntu::
-
-      ""/etc/nginx/nginx.conf""
-        create => ""true"",
-        edit_template => ""$(this.promise_dirname)/../templates/nginx.conf.mustache"",
-        template_method => ""mustache"",
-        template_data => parsejson('
-       {
-          ""worker_processes"": ""auto"",
-          ""worker_rlimit_nofile"": 32768,
-          ""worker_connections"": 16384,
-       }
-    ');
-
-}
-
-It's not always appropriate to send your templates to all hosts as part of your
-main policy set; it really depends on the needs of your environment.
-",CFEngine
-"CFEngine 3 newbie here. 
-I am trying to get the Oracle JDK installed on an Ubuntu system; how should I script it in CFEngine?
-I can do something like this in shell by using PPA provided by webupd8team:
-add-apt-repository ppa:webupd8team/java
-apt-get update
-
-echo ""Installing JDK 7...""
-echo debconf shared/accepted-oracle-license-v1-1 select true | sudo debconf-set-selections
-echo debconf shared/accepted-oracle-license-v1-1 seen true | sudo debconf-set-selections
-apt-get install -y oracle-java7-installer
-
-I am totally lost doing this in CFEngine. So far I have:
-body common control {
-    inputs => { ""$(sys.libdir)/stdlib.cf"" };
-    bundlesequence => { ""manage_properties"", 
-                        ""manage_jdk""};
-}
-
-bundle agent manage_properties {
-    vars:
-        ""prop_pkgs"" slist => {""python-software-properties"", ""software-properties-common""};
-        ""cmds""      slist => {  ""/usr/bin/add-apt-repository ppa:webupd8team/java"",
-                                ""/bin/echo debconf shared/accepted-oracle-license-v1-1 select true | sudo debconf-set-selections"", 
-                                ""/bin/echo debconf shared/accepted-oracle-license-v1-1 seen true | sudo debconf-set-selections"",
-                                ""/usr/bin/apt-get update"" };
-
-    methods:
-        ""$(prop_pkgs)"" 
-            handle => ""manage_properties"",  
-            comment => ""Make sure required properties packages are installed"",
-            usebundle => package_latest(""$(prop_pkgs)"");
-
-    commands:
-        ""$(cmds)""
-            comment => ""Firing preinstall commands for JDK"";
-}
-
-bundle agent manage_jdk {
-    methods:
-        ""JDK"" 
-            handle => ""manage_jdk"",
-            comment => ""Make sure Java is installed"",
-            usebundle => package_latest(""oracle-java7-installer"");
-}
-
-But the promise fails with following error:
-2014-06-30T14:11:18+0000    error: /default/manage_jdk/methods/'JDK'/default/package_latest/packages/'oracle-java7-installer'[0]: Finished command related to promiser 'oracle-java7-installer' -- an error occurred, returned 100
-2014-06-30T14:11:18+0000    error: /default/manage_jdk/methods/'JDK'/default/package_latest/packages/'oracle-java7-installer'[0]: Bulk package schedule execution failed somewhere - unknown outcome for 'oracle-java7-installer'
-
-Would appreciate any pointer. Thanks 
-","1. One thing that I see in your policy is that you are running some commands that require a shell (your piped commands) and your commands promise is not being contained within any shell.
-
-commands:
-  ""/bin/echo 'Hello World' | grep Hello""
-    contain => in_shell;
-
-Also, it seems that you are taking a very imperative view with your pre-commands. CFEngine typically runs policy once every 5 minutes. I would focus more on performing the operations necessary when necessary and try to focus on the state instead of action.
-For example, you're running apt-add-repository unconditionally. Consider under what conditions you actually need to execute the command.
-",CFEngine
-"I wrote the following ExecShellResult fragment in cfengine v2.2.1a:
-control:
-    active_interface_mac = ( ExecShellResult(/sbin/ifconfig ${active_interface} | /usr/bin/grep 'ether ' | /usr/bin/cut -f2 -d' ') )
-    ...
-    time_drift = ( ExecShellResult(/usr/sbin/ntpdate -d pool.ntp.org 2>/dev/null | /usr/bin/grep -o 'offset.*sec' | /usr/bin/cut -f2 -d' ') )
-    ...
-shellcommands:
-    ""/bin/echo ${time_drift}"" inform=true syslog=true
-
-When running the above from the command line, it obviously works fine:
-$ ntpdate ...
-0.183693
-
-However, if run inside cfengine, I get a syntax error:
-$ cfagent -qIK
-Executing script /bin/echo /usr/bin/cut...(timeout=0,uid=-1,gid=-1)
-cfengine:/bin/echo /usr/: /usr/bin/cut
-cfengine: Finished script /bin/echo /usr/bin/cut
-cfengine: 
-Executing script /bin/echo  option requires an argument -- d usage...(timeout=0,uid=-1,gid=-1)
-cfengine:/bin/echo  opti: option requires an argument -- d usage
-cfengine: Finished script /bin/echo  option requires an argument -- d usage
-cfengine: 
-Executing script /bin/echo  cut -b list [-n] [file ...]        cut -c list [file ...]        cut -f list [-s] [-d delim] [file ...]...(timeout=0,uid=-1,gid=-1)
-cfengine:/bin/echo  cut : cut -b list [-n] [file ...] cut -c list [file ...] cut -f list [-s] [-d delim] [file ...]
-cfengine: Finished script /bin/echo  cut -b list [-n] [file ...]        cut -c list [file ...]        cut -f list [-s] [-d delim] [file ...]
-
-Note the error is being displayed when we run the echo command under shellcommands:. By then, the ${time_drift} variable has been already evaluated, and its result shows we invoke cut's -d option incorrectly, complaining that we didn't pass anything to -d which is obviously not true.
-This is baffling, because ${active_interface_mac} uses the same syntax and works perfectly.
-I tried replacing the second grep with | tail -1 | sed 's///', another grep -o [0-9]*.[0-9] or anything else I could think of, including /usr/bin/cut -f1 -d'${spc}'. I apparently can't use awk because cfengine interprets $(NF) as parentheses which are part of ExecShellResult, even when escaped.
-What other options do I have to get my actual seconds value extracted from ntpdate's output?
-","1. I'm unsure about cfengine 2, I don't see what would trip it up in your example
-but for cfengine 3:
-bundle agent main
-{
-  vars:
-
-    ""ntpdate_s""
-      string => execresult( ""/usr/sbin/ntpdate -d pool.ntp.org 2> /dev/null | /usr/bin/awk '/offset .* sec/ {print $10}'"", useshell ),
-      if => not( isvariable( offset ) );
-
-  reports:
-    ""Mac address of wlan0 $(sys.hardware_mac[wlan0])"";
-    ""Offset is $(ntpdate_s)"";
-}
-
-Output:
-R: Mac address of wlan0 5c:e0:c5:9f:f3:8f
-R: Offset is 0.027672
-
-",CFEngine
-"how can i set a class if a package is installed ?
-Background : i want to trigger a file modification only if a package is installed (optional in a specific version).
-My (example) code unfortunately doesn't work :
-vars:
-    ""cfengine_rpm"" data => packagesmatching(""cfengine-nova"", "".*"", "".*"", "".*"");
-    ""cfengine_rpm_installed"" slist => getindices(cfengine_rpm);
-
-classes:
-    ""cfengine_installed"" expression => some(""cfengine"", cfengine_rpm_installed);
-
-reports:
-    cfengine_installed::
-        ""cfEngine is installed "";
-        # Bonus :-)
-        ""cfEngine Version : $(cfengine_rpm[$(cfengine_rpm_installed)][version])"";
-
-Addendum : this question is similar to CFEngine - set variable if a specific package version is installed but I would like to ask for coded hints or solutions :-)
-","1. I tweaked your policy a bit and provided comments in line. I think your main issue was that you were expecting the index of the returned packagesmatching() data to be indexed by package name, instead of a numeric id. 
-bundle agent main
-{
-vars:
-
-    # Return data from cfengines internal cache about any packages matching the
-    # name cfengine-nova
-    ""p""
-      data => packagesmatching(""cfengine.*"", "".*"", "".*"", "".*"");
-
-    # Get the index (list of keys) from this data structure for iteration
-
-    # Each value in the list is a number which is the position of the JSON
-    # object in the data returned from packagesmatching(). For example, if
-    # cfengine-nova-hub is the only package found to be installed that matches
-    # then the data structure returned to p will look like the following
-    # snippet. Note it's the 0th element inside the array ([]).
-    #
-    # [
-    #   {
-    #      ""arch"":""x86_64"",
-    #      ""method"":""dpkg"",
-    #      ""name"":""cfengine-nova-hub"",
-    #      ""version"":""3.10.1-1""
-    #   }
-    # ]
-
-    ""i"" slist => getindices(p);
-    ""c"" slist => classesmatching( "".*"", ""defined_from=$(this.bundle)"");
-
-classes:
-
-    # Iterate over the packages found, if one of their names matches
-    # cfengine-nova.* then define the class cfengine_installed.
-
-    ""cfengine_installed""
-      expression => regcmp( ""cfengine.*"", ""$(p[$(i)][name])"" ),
-      meta => { ""defined_from=$(this.bundle)"" };
-
-reports:
-
-    # Emit the version of cfengine from the internal sys var
-    ""CFEngine $(sys.cf_version)"";
-
-    # Iterate over the index (i) of the data returned from packagesmatching
-    # cfengine-nova (p) and print the name of each package.
-
-    ""CFEngine cached knowledge of $(p[$(i)][name]) $(p[$(i)][version])"";
-
-    ""Found the class '$(c)' defined from $(this.bundle)"";
-
-    cfengine_installed::
-
-        ""CFEngine is installed "";
-
-        # Bonus :-)
-
-        # In case you had multiple packages returned, you might want to make
-        # this more strict, or it will emit the version of each package found
-        # by packagesmatching.
-
-        ""CFEngine Package Version : $(p[$(i)][version])""
-          if => strcmp( ""cfengine-nova-hub"", ""$(p[$(i)][name])"" );
-}
-
-Results in this output:
-R: CFEngine 3.10.1
-R: CFEngine cached knowledge of cfengine-nova-hub 3.10.1-1
-R: Found the class 'cfengine_installed' defined from main
-R: CFEngine is installed 
-R: CFEngine Package Version : 3.10.1-1
-
-Does this answer your question?
-",CFEngine
-"I have been setting up a cloud custodian policy for automatically terminating the ec2 instances after a certain amount of time. But unfortunately it is not working fine.
-Filters and mode are working fine in the policy, but the action is not getting executed. Kindly let us know if you have any solution.
-Policy:
-policies:
-  - name: ec2-terminate-instance
-    resource: ec2
-    description: |
-      Mark any stopped ec2 instance for deletion in 60 days
-      If an instance has not been started for 60 days or over
-      then they will be deleted similar to internal policies as it wont be patched.
-    filters:
-      - ""tag:expiration"": present
-      - ""State.Name"": stopped
-    mode:
-      schedule: ""rate(15 minutes)""
-      type: periodic
-      role: arn:aws:iam::xxxxxxxxxxxx:role/cloud-custodian-role
-    actions:
-      - type: mark-for-op
-        tag: c7n_stopped_instance
-        op: terminate
-        hours: 0.5
-
-","1. Your policy looks right, despite what has been mentioned about custom tags for the delayed operation mark-for-op.
-The details are important here: if you are not seeing the instance terminated with this policy, that is because you need a second follow-up policy that filters on the marked resources and has a corresponding action of terminating those discovered instances, e.g.:
-  - name: ec2-terminate-instance
-    resource: aws.ec2
-    description: |
-      Delete any marked instances in the previous policy based on the tag c7n_stopped_instance
-    filters:
-      - type: marked-for-op
-        tag: c7n_stopped_instance 
-        op: terminate
-    actions:
-      - type: terminate
-
-So you:
-
-mark-for-op as a future delayed action by tagging resources
-filter on these resources using the marked-for-op filter type, and in your actions perform the terminate action.
-
-ref: https://www.cloudcustodian.io/docs/azure/examples/resourcegroupsdelayedoperation-general.html#azure-example-delayedoperation
-",Cloud Custodian
-"I am trying to configure an custodian policy which will do some basic filtering and send the details of the match to specific user via mail(SNS).
-I am able to send the email to the user, but I couldn't edit the mail subject or mail body; instead I am getting the default mail subject and some random text in the mail body.
-My custodian policy:
-policies:
-  - name: iam-user-permission-check
-    resource: aws.iam-user
-    description: |
-      Finding IAM users with specific tags.
-    filters:
-      - and:
-        - type: check-permissions
-          match: allowed
-          actions:
-            - '*:*'
-        - ""tag:c7n"": ""absent""
-    actions:
-      - type: notify
-        subject: ""IAM Users Without Proper Tags""
-        template: |
-          The following IAM users match the filter criteria:
-          {% for user in resources %}
-          - IAM User: {{ user.UserName }}
-            Tags: {{ user.Tags }}
-          {% endfor %}
-        transport:
-         type: sns
-         topic: <sns-topic-arn>
-         region: us-east-1
-
-
-I did some research; all I found was to use c7n-mailer with SES by passing an SQS queue in mailer.html. Can't it be done by using SNS?
-What am I missing here ?
-",,Cloud Custodian
-"I'm having trouble getting an IP address for a UI. Here is the error message I got when trying to set the IP address:
-
-Attempting to connect...
-  HTTPConnectionPool(host='10.227.124.50', port=80): Max retries exceeded with url: /api/v3.1/status (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No route to host',))
-
-There is also the other UI which was set up by OpenStack that I can't access either. I used FoxyProxy and still can't access it. Do I need to create some flow rules to make it accessible?
-","1. It looks like port 80 egress is not open somewhere in the route between your local machine and the world.
-I suggest checking that all security groups, firewalls, iptables and of course your proxy allow connections to port 80.
-
-2. I had the same problem with the urllib3 library. The IP address was incorrect, so check whether your IP address is correct.
-",Cloudify
-"My directory structure is --> test => [blueprint.yaml, scripts [python_script.py, sample.conf]]
-python_script.py would basically read default configurations from sample.conf and parse/do some string operations and generates a new conf file.
-But I am not able to get the path of sample.conf, as it keeps changing with every deployment.
-Example:
-./tmp/cloudifyLookupYamls/archive1658295820027/extracted/script_plugin/scripts/sample.conf
-./tmp/cloudifyLookupYamls/archive1658294160172/extracted/script_plugin/scripts/sample.conf
-./tmp/cloudifyBrowseSources/script1658295889590/extracted/script_plugin/scripts/sample.conf
-Below is the python script:
-import configparser
-from cloudify import ctx
-from cloudify.state import ctx_parameters as inputs
-import os
-print(""Path at terminal when executing this file"") # this is / 
-print(os.getcwd() + ""\n"")
-
-print(""This file path, relative to os.getcwd()"") # this is /tmp/ZMX3U/python_script.py
-print(__file__ + ""\n"")
-
-print(""The directory is "")
-print(os.path.dirname( __file__ )+ ""\n"") # this is /tmp/ZMX3U
-
-parser = configparser.ConfigParser()
-parser.read(""sample.conf"") # this is the problem, this file is not present at the place the script runs
-
-configer = configparser.ConfigParser()
-#parsing logic
-with open(r""/tmp/new.conf"", 'w+') as configfile:
-    configer.write(configfile, True)
-
-I see that the script file is executed in a temporary directory /tmp/ZMX3U/.
-Please suggest on how I can access the sample.conf from my python_script.py
-","1. @Kasibak if you want to have access to any file associated inside the blueprint, you can check the documentation
-https://docs.cloudify.co/latest/bestpractices/plugin-development/#downloading-resources-using-ctx-download-resource
-so you would do path = ctx.download_resource('sample.conf')
-then you can do whatever you want with the file.
-",Cloudify
-"I recently installed OSX and Ubuntu on different computers. I then attempted to install redis and foreman for both OS's. Both errors threw no flags, and seemed to execute successfully. However, whenever I go to start foreman with foreman start, I run into the below issue on both computers:
-23:48:35 web.1    | started with pid 1316
-23:48:35 redis.1  | started with pid 1317
-23:48:35 worker.1 | started with pid 1318
-23:48:35 redis.1  | [1317] 11 Jun 23:48:35.180 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
-23:48:35 redis.1  | [1317] 11 Jun 23:48:35.181 * Increased maximum number of open files to 10032 (it was originally set to 256).
-23:48:35 redis.1  | [1317] 11 Jun 23:48:35.181 # Creating Server TCP listening socket *:6379: bind: Address already in use
-23:48:35 redis.1  | exited with code 1
-23:48:35 system   | sending SIGTERM to all processes
-23:48:35 worker.1 | terminated by SIGTERM
-23:48:35 web.1    | terminated by SIGTERM
-
-For some reason, it seems like a path issue to me, because Redis or Foreman cannot find the files they need to execute successfully, but I'm not exactly sure.
-On OSX I used gem install foreman and Brew install Redis .
-On Ubuntu I used the following:
-Redis:
-$ cd ~
-$ wget http://download.redis.io/redis-stable.tar.gz
-$ tar xvzf redis-stable.tar.gz
-$ cd redis-stable
-$ make
-$ make test 
-
-Foreman: 
-$ gem install foreman
-My PATH on OSX is as follows:
-
-/Users/c/.rvm/gems/ruby-2.1.0/bin:/Users/c/.rvm/gems/ruby-2.1.0@global/bin:/Users/c/.rvm/rubies/ruby-2.1.0/bin:/Users/c/.rvm/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin
-
-On Ubuntu, my PATH is:
-
-/usr/local/bin:/usr/lib/postgresql:/usr/lib/postgresql/9.3:/usr/lib/  postgresql/9.3/lib:/usr/lib/postgresql/9.3/bin:/usr/share/doc:/usr/share/doc/postgresql-9.3:/usr/share/postgresql:/usr/share/postgresql/9.3:/usr/share/postgresql/9.3/man:$PATH
-
-Redis-server does seem to execute successfully once, and then it fails with the message:
-[1457] 12 Jun 00:02:48.481 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
-[1457] 12 Jun 00:02:48.482 * Increased maximum number of open files to 10032 (it was originally set to 256).
-[1457] 12 Jun 00:02:48.483 # Creating Server TCP listening socket *:6379: bind: Address already in use
-
-Trying $ redis-server stop returns:
-[1504] 12 Jun 00:05:56.173 # Fatal error, can't open config file 'stop'
-I need help figuring out how to get Foreman and Redis working together so that I can view my local files in the browser at 127.0.0.1
-EDIT
-Redis does start, but nothing happens when I navigate to localhost:6379. I also tried the suggestion of finding processes. It found 
-c                751   0.0  0.0  2432768    596 s005  R+    2:03PM   0:00.00 grep redis
-c                616   0.0  0.0  2469952   1652 s004  S+    2:01PM   0:00.05 redis-server *:6379
-
-Trying to kill the process results in 
-
-kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec
-  ... or kill -l [sigspec]
-
-","1. Try starting Redis server with the following command :
-redis-server <path to your config file>
-
-Also, check whether an instance of the Redis server is already running with
-ps aux | grep redis
-
-and then, if a process is found:
-kill <process id>
-
-Restart your redis server.
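-As an aside (not part of the original answer), a quick way to confirm the port conflict before hunting for processes is to probe port 6379 directly; a minimal sketch in Python using only the standard library:
-import socket
-
-# Returns True if something is already listening on the given port,
-# which is the condition behind ""bind: Address already in use"".
-def port_in_use(host='127.0.0.1', port=6379):
-    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
-        return s.connect_ex((host, port)) == 0
-
-print(port_in_use())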
-
-2. This one-liner kills any existing redis-server processes and then starts a new one. It sends SIGINT rather than SIGTERM because a SIGTERM would cause Foreman to quit, while SIGINT lets Foreman continue.
-(ps aux | grep 6379 | grep redis | awk '{ print $2 }' | xargs kill -s SIGINT) && redis-server
-In Procfile.dev:
-redis: (ps aux | grep 6379 | grep redis | awk '{ print $2 }' | xargs kill -s SIGINT) && redis-server
-
-3. 
-List the running Redis servers from the terminal:
-ps aux | grep redis
-From the list, note the PID of the server you want to terminate (for example 5379), then run:
-kill 5379
-
-",Foreman
-"Environment:
-
-Windows 10 x64
-Ruby: ruby 2.7.2p137 (2020-10-01 revision 5445e04352) [x64-mingw32]
-Node: v16.13.1
-npm: 8.1.2
-
-
-Problem Statement:
-Running webpack script inside Rails project in Windows 10 x64 doesn't work properly.
-It seems that the environment must be set before the exec for Windows
-
-Exception:
-foreman start is throwing the below error:
-$ foreman start
-
-Traceback (most recent call last):
-        17: from C:/Ruby27/bin/foreman:23:in `<main>'
-        16: from C:/Ruby27/bin/foreman:23:in `load'
-        15: from C:/Ruby27/lib/ruby/gems/2.7.0/gems/foreman-0.87.2/bin/foreman:7:in `<top (required)>'
-        14: from C:/Ruby27/lib/ruby/gems/2.7.0/gems/foreman-0.87.2/lib/foreman/vendor/thor/lib/thor/base.rb:444:in `start'
-        13: from C:/Ruby27/lib/ruby/gems/2.7.0/gems/foreman-0.87.2/lib/foreman/vendor/thor/lib/thor.rb:369:in `dispatch'
-        12: from C:/Ruby27/lib/ruby/gems/2.7.0/gems/foreman-0.87.2/lib/foreman/vendor/thor/lib/thor/invocation.rb:126:in `invoke_command'
-        11: from C:/Ruby27/lib/ruby/gems/2.7.0/gems/foreman-0.87.2/lib/foreman/vendor/thor/lib/thor/command.rb:27:in `run'
-        10: from C:/Ruby27/lib/ruby/gems/2.7.0/gems/foreman-0.87.2/lib/foreman/cli.rb:42:in `start'
-         9: from C:/Ruby27/lib/ruby/gems/2.7.0/gems/foreman-0.87.2/lib/foreman/engine.rb:57:in `start'
-         8: from C:/Ruby27/lib/ruby/gems/2.7.0/gems/foreman-0.87.2/lib/foreman/engine.rb:363:in `spawn_processes'
-         7: from C:/Ruby27/lib/ruby/gems/2.7.0/gems/foreman-0.87.2/lib/foreman/engine.rb:363:in `each'
-         6: from C:/Ruby27/lib/ruby/gems/2.7.0/gems/foreman-0.87.2/lib/foreman/engine.rb:364:in `block in spawn_processes'
-         5: from C:/Ruby27/lib/ruby/gems/2.7.0/gems/foreman-0.87.2/lib/foreman/engine.rb:364:in `upto'
-         4: from C:/Ruby27/lib/ruby/gems/2.7.0/gems/foreman-0.87.2/lib/foreman/engine.rb:367:in `block (2 levels) in spawn_processes'
-         3: from C:/Ruby27/lib/ruby/gems/2.7.0/gems/foreman-0.87.2/lib/foreman/process.rb:53:in `run'
-         2: from C:/Ruby27/lib/ruby/gems/2.7.0/gems/foreman-0.87.2/lib/foreman/process.rb:53:in `chdir'
-         1: from C:/Ruby27/lib/ruby/gems/2.7.0/gems/foreman-0.87.2/lib/foreman/process.rb:54:in `block in run'
-C:/Ruby27/lib/ruby/gems/2.7.0/gems/foreman-0.87.2/lib/foreman/process.rb:54:in `spawn': Exec format error - bin/webpack-dev-server (Errno::ENOEXEC)
-
-
-Configuration:
-GemFile
-ruby ""2.7.2""
-gem ""rails"", ""~> 6.0""
-gem ""webpacker"", ""< 6""
-
-group :development do
-  gem ""foreman"", require: false
-end
-
-package.json
-{
-  ""name"": ""demo"",
-  ""private"": true,
-  ""engines"": {
-    ""node"": "">=10.15.3"",
-    ""npm"": "">=6"",
-    ""yarn"": "">=1.15.2""
-  },
-  ""dependencies"": {
-    ""@rails/webpacker"": ""5.4.3"",
-    ""webpack"": ""4.46.0"",
-    ""webpack-cli"": ""3.3.12""
-  },
-  ""version"": ""0.1.0"",
-  ""devDependencies"": {
-    ""webpack-dev-server"": ""3""
-  },
-}
-
-
-
-EDIT 1:
-However, executing bin/webpack-dev-server in a separate terminal compiles properly.
-$ bin/webpack-dev-server
-
-i 「wds」: Project is running at http://localhost:3035/
-i 「wds」: webpack output is served from /packs/
-i 「wds」: Content not from webpack is served from C:\Users\DELL\path\project\plate\public\packs
-i 「wds」: 404s will fallback to /index.html
-i 「wdm」: Hash: 3f9f73c5cf41d5c81ee8
-Version: webpack 4.46.0
-Time: 4740ms
-Built at: 12/01/2022 14:01:39
-                                     Asset       Size       Chunks                         Chunk Names
-    js/application-f0238c055e7bba05cd93.js    515 KiB  application  [emitted] [immutable]  application
-js/application-f0238c055e7bba05cd93.js.map    581 KiB  application  [emitted] [dev]        application
-                             manifest.json  364 bytes               [emitted]
-i 「wdm」: Compiled successfully.
-
-
-EDIT 2:
-I tried removing foreman from Gemfile and re-installing it using gem command by following answer
-Gemfile
-group :development do
-  # gem ""foreman"", require: false
-end
-
-$ gem uninstall foreman
-$ gem install foreman
-
-
-gem uninstall foreman removes foreman from the main Ruby directory itself.
-
-
-EDIT 3:
-I tried bundle exec rails webpacker:install, but foreman start is again throwing the same error, while bin/webpack-dev-server compiles properly on the port set in the config/webpack file.
-$ bundle exec rails webpacker:install
-
-  Please add the following to your Gemfile to avoid polling for changes:
-    gem 'wdm', '>= 0.1.0' if Gem.win_platform?
-  Please add the following to your Gemfile to avoid polling for changes:
-    gem 'wdm', '>= 0.1.0' if Gem.win_platform?
-  Please add the following to your Gemfile to avoid polling for changes:
-    gem 'wdm', '>= 0.1.0' if Gem.win_platform?
-   identical  config/webpacker.yml
-Copying webpack core config
-       exist  config/webpack
-   identical  config/webpack/development.js
-   identical  config/webpack/environment.js
-   identical  config/webpack/production.js
-   identical  config/webpack/test.js
-Copying postcss.config.js to app root directory
-   identical  postcss.config.js
-Copying babel.config.js to app root directory
-   identical  babel.config.js
-Copying .browserslistrc to app root directory
-   identical  .browserslistrc
-The JavaScript app source directory already exists
-       apply  C:/Users/DELL/Documents/project/plate/vendor/cache/ruby/2.7.0/gems/webpacker-5.4.3/lib/install/binstubs.rb
-  Copying binstubs
-       exist    bin
-   identical    bin/webpack
-   identical    bin/webpack-dev-server
-File unchanged! The supplied flag value not found!  .gitignore
-Installing all JavaScript dependencies [5.4.3]
-         run  yarn add @rails/webpacker@5.4.3 from "".""
-yarn add v1.22.15
-[1/5] Validating package.json...
-[2/5] Resolving packages...
-[3/5] Fetching packages...
-info fsevents@2.3.2: The platform ""win32"" is incompatible with this module.
-info ""fsevents@2.3.2"" is an optional dependency and failed compatibility check. Excluding it from installation.
-info fsevents@1.2.13: The platform ""win32"" is incompatible with this module.
-info ""fsevents@1.2.13"" is an optional dependency and failed compatibility check. Excluding it from installation.
-[4/5] Linking dependencies...
-[5/5] Building fresh packages...
-warning Your current version of Yarn is out of date. The latest version is ""1.22.17"", while you're on ""1.22.15"".
-info To upgrade, download the latest installer at ""https://yarnpkg.com/latest.msi"".
-success Saved 0 new dependencies.
-Done in 8.92s.
-Installing webpack and webpack-cli as direct dependencies
-         run  yarn add webpack@^4.46.0 webpack-cli@^3.3.12 from "".""
-yarn add v1.22.15
-[1/5] Validating package.json...
-[2/5] Resolving packages...
-[3/5] Fetching packages...
-info fsevents@2.3.2: The platform ""win32"" is incompatible with this module.
-info ""fsevents@2.3.2"" is an optional dependency and failed compatibility check. Excluding it from installation.
-info fsevents@1.2.13: The platform ""win32"" is incompatible with this module.
-info ""fsevents@1.2.13"" is an optional dependency and failed compatibility check. Excluding it from installation.
-[4/5] Linking dependencies...
-[5/5] Building fresh packages...
-success Saved 0 new dependencies.
-Done in 6.82s.
-Installing dev server for live reloading
-         run  yarn add --dev webpack-dev-server@^3 from "".""
-yarn add v1.22.15
-[1/5] Validating package.json...
-[2/5] Resolving packages...
-[3/5] Fetching packages...
-info fsevents@2.3.2: The platform ""win32"" is incompatible with this module.
-info ""fsevents@2.3.2"" is an optional dependency and failed compatibility check. Excluding it from installation.
-info fsevents@1.2.13: The platform ""win32"" is incompatible with this module.
-info ""fsevents@1.2.13"" is an optional dependency and failed compatibility check. Excluding it from installation.
-[4/5] Linking dependencies...
-[5/5] Building fresh packages...
-success Saved 0 new dependencies.
-Done in 7.33s.
-Webpacker successfully installed 🎉 🍰
-
-","1. Add a new file named ""Procfile"" on root of your project.
-Insert the following in the Procfile
-web: bundle exec rails server -p $PORT
-
-Now you will be able to run your server using command
-heroku local
-
-on http://localhost:5000.
-
-2. I simply ran gem uninstall foreman and gem install foreman, then bundle install.
-This resolved the issue for me.
-
-3. 
-Use ps to list running processes and kill the ruby process with kill -9 <pid>
-Change bin/rails to rails inside Procfile.dev (e.g. web: rails server and css: rails tailwindcss:watch)
-Run bin/dev
-
-",Foreman
-"I am working on an inherited system and trying to diagnose an issue with ssh access to particular a VM. The machine is managed by puppet/foreman, is receiving updates, and producing reports on Foreman. However, our usual SSH is blocked by the host, the information for the login accounts is from a separate LDAP account.
-Many of our VM's are able to configure and login fine, but I believe an issue somewhere within our puppet/foreman application is treating this machine differently.
-I have tried to inspect the system to find differences and then evaluate the GitLab Puppet and Foreman Puppet ENC, OS installation and networking. There are many subtle differences between some of the machines, but rectifying these has not fixed the login issue. The one method that did alleviate the restrictions was to remove the firewall and PAM access controls; however, this is not ideal.
-Investigating on the machine, I can see the output of puppet resource group shell is different on this VM compared to others. We get a group that is present but only contains RKE, whereas on every other VM we get our list of members. Why would this resource be different for this VM, how can I fix this to be consistent (puppet agent -t does not fix it), and what else could be causing the login restrictions?
-","1. The final solution for this ended up being to remove the groups already existing on the VMs. This allowed puppet to apply the groups it was managing and align with the expected SSH behaviour.
-",Foreman
-"I am running a RedHat Satellite 6.12 Server. I have a VM that I am hosting in VMware. After provisioning, I used hostnamectl set-hostname to change the hostname of the machine and in Satellite changed the name to the updated hostname. Satellite can no longer establish a connection to the host. Error is: Error initializing command: RuntimeError - Failed to establish connection to remote host, exit code: 255. More than likely this is something to do with foreman.
-I have updated /etc/hosts, and I can ping and DNS-resolve the VM from the CLI of the Satellite server. What services do I need to restart or refresh in order to pick up this change in the UI?
-","1. I found an answer on the RedHat website shortly after this. My foreman-proxy public key was somehow corrupted on the end host. I had to execute an ssh-copy-id for the foreman-proxy according to:
-[root@sat]# ssh-copy-id -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy.pub root@host.example.com
-Answer comes from: https://access.redhat.com/solutions/7012983
-
-2. On the Satellite WebUI try this:
-Administer -> Settings -> Remote Execution -> Connect by IP -> set to ""yes""
-",Foreman
-"I have several projects that use different versions of Postgres. We're using the asdf plugin to configure them. It's annoying to have to start and stop Postgres each time I move to different project. Is there some way to automate this? I was looking at adding it to our development Procfile, but since the pg_ctl process exits immediately after starting the server, foreman also exits.
-","1. My current solution is to add an alias that starts the database before running foreman and shuts it down afterwards. It's not perfect, since it means I need to boot up the server in order to run the tests:
-alias fsd=""pg_ctl start && foreman start -f Procfile.dev; pg_ctl stop""
-
-",Foreman
-"I am quite new to juju and started to install charmed kubeflow as shown in below link
-https://charmed-kubeflow.io/docs/get-started-with-charmed-kubeflow
-As part of "" Configure the Charmed Kubeflow components"" I have configured below commands
-juju config dex-auth public-url=http://10.64.140.43.nip.io
-juju config oidc-gatekeeper public-url=http://10.64.140.43.nip.io
-
-It looks like the 10.64.140.43 IP address is the Kubeflow dashboard, but I don't see it configured anywhere on my system. When I try to ping it from another machine it fails. How can I access the Kubeflow page?
-Also, I am using a single VM running 22.04 with 20GB of RAM and 8 vCPUs. Once I installed Kubeflow I can see that CPU usage is around 90% and RAM usage is almost 100%. What is the minimum CPU requirement for running Charmed Kubeflow?
-    Tasks: 568 total,   3 running, 565 sleeping,   0 stopped,   0 zombie
-%Cpu(s): 46.3 us, 20.5 sy,  0.0 ni, 31.1 id,  0.5 wa,  0.0 hi,  1.6 si,  0.0 st
-MiB Mem :  20007.9 total,    412.6 free,   5405.4 used,  14189.9 buff/cache
-MiB Swap:   2048.0 total,   2045.5 free,      2.5 used.  14227.7 avail Mem 
-
-    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                       
-    672 root      20   0 2305992 134716  42264 S 100.0   0.7  10:12.49 containerd                                                    
-   2546 ipi       20   0  325580  10032   7308 R 100.0   0.0  18:59.08 gvfs-udisks2-vo                                               
- 236892 root      20   0   18172  14208   6516 R  85.0   0.1   0:00.17 python3                                                       
-    675 root      20   0 2393496 429028  18152 S  65.0   2.1   9:34.72 k8s-dqlite                                                    
-   1543 root      20   0 2219152   1.3g 101568 S  60.0   6.4  14:03.99 kubelite                                                      
-  20814 root      20   0 1005656 280556  91924 S  20.0   1.4   9:01.81 jujud                                                         
- 236890 ipi       20   0   22640   4804   3524 R  20.0   0.0   0:00.08 top                                                           
-  20470 root      20   0 2240592 455236  53752 S  15.0   2.2   5:26.83 mongod  
-
-","1. The minimum requirements are: at least 4 cores, 32GB RAM and 50GB of disk space available.
-If you are looking for something more lightweight, there is also Charmed MLflow available.
-",Juju
-"
-How many Juju charm hook files are there, and why are they important?
-Can anyone please explain how the Juju charm hook files work?
-In what order are they executed?
-Please explain the lifecycle of Juju charm hooks.
-
-","1. Hooks are simple scripts called during the lifecycle of a service. They are triggered either by commands run (like ""deploy"") or events (like a relation with another service going down).
-All hooks are optional, but they are the points where you take control of what the charm actually needs to do.
-To me it seems strange that a complete lifecycle graph is not available, so I created one below from what I understand:
-
-Note that this is gathered by just reading the docs, it might not correspond 100% with how the actual code is run.
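-As an illustration of the kind of hook script described above (a hedged sketch, not from the original answer): a hook is just an executable placed under hooks/ in the charm, so it can be written in Python and call Juju's hook tools such as juju-log and status-set; the log messages here are made up.
-#!/usr/bin/env python3
-# Example contents of hooks/install (must be marked executable).
-import subprocess
-
-def log(message):
-    # juju-log is one of the hook tools Juju makes available while a hook runs
-    subprocess.run(['juju-log', message], check=False)
-
-log('install hook starting')
-# ...install packages, write configuration, etc. here...
-subprocess.run(['status-set', 'active', 'install complete'], check=False)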
-
-2. They have some good documentation now.
-Lifecycle of the events: https://juju.is/docs/sdk/a-charms-life
-Events that can trigger relational hooks: https://juju.is/docs/sdk/integration
-",Juju
-"I am using a pi 4b to display a website. This site allows the user to sort/filter data in a preferred format. When I specify the sort/filter preferences in the UI, they are only saved for a short time. I have to reboot to get them back. They do not change and require reboot on the website if viewed on a Windows or Mac machine or tablet.
-I have tried various versions of raspian, trying different browsers (chromium, Firefox) and cannot seem to solve it. I also tried kiosk mode or just showing the page full screen.
-","1. Maybe you can first check how these parameters are persisted.
-Are those filters part of the URL, and maybe you refresh the page by clicking a URL which removes those?
-You can also open Chrome's Developer Tools by pressing F12, go to the Application tab at the top, and check session storage and cookies.
-Once you know where they are stored, you can take further steps to investigate when they get lost (and why).
-",kiosk
-"I am using below script and it giving me an error #/bin/sh: 1: kpt: not found
-FROM nginx    
-RUN apt update
-RUN apt -y install git
-RUN apt -y install curl
-
-# install kpt package
-RUN mkdir -p ~/bin
-RUN curl -L https://github.com/GoogleContainerTools/kpt/releases/download/v1.0.0-beta.1/kpt_linux_amd64 --output ~/bin/kpt && chmod u+x ~/bin/kpt
-RUN export PATH=${HOME}/bin:${PATH}
-RUN SRC_REPO=https://github.com/kubeflow/manifests
-RUN kpt pkg get $SRC_REPO/tf-training@v1.1.0 tf-training
-
-But if I create the image using
-FROM nginx
-RUN apt update
-RUN apt -y install git
-RUN apt -y install curl
-
-and perform
-docker exec -it container_name bash
-
-and manually run the same steps, then I am able to install the kpt package. Sharing below the screenshot of the process.
-
-The error changes if I provide the full path to /bin/kpt
-Error: ambiguous repo/dir@version specify '.git' in argument
-FROM nginx
-RUN apt update
-RUN apt -y install git
-RUN apt -y install curl
-RUN mkdir -p ~/bin
-RUN curl -L https://github.com/GoogleContainerTools/kpt/releases/download/v1.0.0-beta.1/kpt_linux_amd64 --output ~/bin/kpt && chmod u+x ~/bin/kpt
-RUN export PATH=${HOME}/bin:${PATH}
-# Below line of code is to ensure that kpt is installed and working fine
-RUN ~/bin/kpt pkg get https://github.com/ajinkya101/kpt-demo-repo.git/Packages/Nginx
-RUN SRC_REPO=https://github.com/kubeflow/manifests
-RUN ~/bin/kpt pkg get $SRC_REPO/tf-training@v1.1.0 tf-training
-
-What is happening during the Docker build that prevents kpt from being found?
-","1. First, make sure SRC_REPO is declared as a Dockerfile environment variable
-ENV SRC_REPO=https://github.com/kubeflow/manifests.git
-^^^                                               ^^^^
-
-And make sure the URL ends with .git.
-As mentioned in kpt get:
-
-In most cases the .git suffix should be specified to delimit the REPO_URI from the PKG_PATH, but this is not required for widely recognized repo prefixes.
-
-Second, call kpt by its full path, without ~ or ${HOME}. Each RUN instruction runs in its own shell, so the earlier RUN export PATH=${HOME}/bin:${PATH} does not persist to later instructions (use ENV PATH instead if you want it on the PATH), which is why the bare kpt command is not found.
-/root/bin/kpt
-
-For testing, add a RUN id -a && pwd to be sure who and where you are when using the nginx image.
-",kpt
-"I am following the steps for adding Anthos Service mesh provided here
-However, When I try running...
-kpt cfg list-setters asm
-I get...
-error: The input value doesn't validate against provided OpenAPI schema: validation failure list:
-gcloud.container.cluster.clusterSecondaryRange in body must be of type string: ""null""
-
-What is wrong? How would I debug?
-","1. Not really an answer but a workaround was to add these lines...
-kpt cfg set asm gcloud.container.cluster.clusterSecondaryRange "" ""
-kpt cfg set asm gcloud.container.cluster.servicesSecondaryRange "" ""
-
-",kpt
-"I've noticed that we can crate a setter contains list of string based on kpt [documentation][1]. Then I found out that complex setter contains list of object is not supported based on [this github issue][1]. Since the issue itself mentioned that this should be supported in kpt function can we use it with the current kpt function version?
-[1]: Kpt Apply Setters. https://catalog.kpt.dev/apply-setters/v0.1/
-[1]: Setters for list of objects. https://github.com/GoogleContainerTools/kpt/issues/1533
-","1. I've discussed a bit with my coworkers, and turned out this is possible by doing the following setup:
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: my-nginx
-spec:
-  replicas: 4 # kpt-set: ${nginx-replicas}
-  selector:
-    matchLabels:
-      app: nginx
-  template:
-    metadata:
-      labels:
-        app: nginx
-    spec:
-      containers:
-      - name: nginx
-        image: ""nginx:1.16.1"" # kpt-set: nginx:${tag}
-        ports:
-        - protocol: TCP
-          containerPort: 80
----
-apiVersion: v1
-kind: MyKind
-metadata:
-  name: foo
-environments: # kpt-set: ${env}
-- dev
-- stage
----
-apiVersion: v1
-kind: MyKind
-metadata:
-  name: bar
-environments: # kpt-set: ${nested-env}
-- key: some-key
-  value: some-value
-
-After that we can define the following setters:
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: setters
-data:
-  env: |-
-    - prod
-    - dev
-  nested-env: |-
-    - key: some-other-key
-      value: some-other-value
-  nginx-replicas: ""3""
-  tag: 1.16.2
-
-And then we can call the following command:
-$ kpt fn render apply-setters-simple
-
-I've sent a Pull Request to the repository to add documentation about this.
-",kpt
-"I've been searching for a couple of hours already, can't still find the solution, feeling very frustrated.
-I've installed make tool with chocolatey and docker, and am trying to build linuxkit tool
-https://github.com/linuxkit/linuxkit
-and then using it build linux VM image for Docker
-From the README:
-""LinuxKit uses the linuxkit tool for building, pushing and running VM images.
-Simple build instructions: use make to build. This will build the tool in bin/.""
-I run make install
-but again and again, whatever I do it keeps failing
-PS C:\Users\Tim\Desktop\linuxkit-master\linuxkit-master> make install
-cp -R bin/* /usr/local/bin
-process_begin: CreateProcess(NULL, cp -R bin/* /usr/local/bin, ...) failed.
-make (e=2): The system cannot find the file specified.
-make: *** [Makefile:78: install] Error 2
-
-In Makefile: 77,78:
-install:
-    cp -R bin/* $(PREFIX)/bin
-
-I've tried changing the Makefile because there is no such path as /usr/local/bin on Windows, but whatever I change it to, the build never succeeds.
-I've even tried running it on wsl:
-root@DESKTOP-GF982I3:/mnt/c/users# cd /mnt/c/Users/Tim/Desktop/linuxkit-master/linuxkit-master
-root@DESKTOP-GF982I3:/mnt/c/Users/Tim/Desktop/linuxkit-master/linuxkit-master# make install
-cp -R bin/* /usr/local/bin
-cp: cannot stat 'bin/*': No such file or directory
-make: *** [Makefile:78: install] Error 1
-root@DESKTOP-GF982I3:/mnt/c/Users/Tim/Desktop/linuxkit-master/linuxkit-master#
-
-But yet again the error is on the 78th line.
-Please, help.
-EDIT:
-I've encountered an error on linux as well
-With docker engine installed and daemon running:
-tim@tim-vm:~/Desktop/linuxkit/linuxkit-1.0.1$ sudo make
-make -C ./src/cmd/linuxkit
-make[1]: Entering directory '/home/tim/Desktop/linuxkit/linuxkit-1.0.1/src/cmd/linuxkit'
-fatal: not a git repository (or any of the parent directories): .git
-tar cf - -C . . | docker run --rm --net=none --log-driver=none -i -e GOARCH= linuxkit/go-compile:7b1f5a37d2a93cd4a9aa2a87db264d8145944006 --package github.com/linuxkit/linuxkit/src/cmd/linuxkit --ldflags ""-X github.com/linuxkit/linuxkit/src/cmd/linuxkit/version.GitCommit= -X github.com/linuxkit/linuxkit/src/cmd/linuxkit/version.Version=""v0.8+"""" -o linuxkit > tmp_linuxkit_bin.tar
-gofmt...
-vendor/github.com/Code-Hex/vz/v3/internal/objc/finalizer_118.go:8:18: expected '(', found '['
-vendor/github.com/moby/buildkit/frontend/attest/sbom.go:75:13: expected '(', found '['
-vendor/github.com/moby/buildkit/frontend/frontend.go:15:28: expected ';', found '['
-vendor/github.com/moby/buildkit/frontend/gateway/client/client.go:17:28: expected ';', found '['
-vendor/github.com/moby/buildkit/solver/result/result.go:16:15: expected ']', found any
-vendor/github.com/moby/buildkit/solver/result/result.go:26:2: expected declaration, found 'if'
-vendor/github.com/moby/buildkit/solver/result/result.go:68:3: expected declaration, found 'return'
-vendor/github.com/moby/buildkit/solver/result/result.go:91:2: expected declaration, found 'if'
-govet...
-golint...
-./cache/write.go:357:1: exported method Provider.ImageInCache should have comment or be unexported
-sh: exported: unknown operand
-make[1]: *** [Makefile:40: tmp_linuxkit_bin.tar] Error 2
-make[1]: *** Deleting file 'tmp_linuxkit_bin.tar'
-make[1]: Leaving directory '/home/tim/Desktop/linuxkit/linuxkit-1.0.1/src/cmd/linuxkit'
-make: *** [Makefile:61: linuxkit] Error 2
-
-While tweaking the Makefile on Windows I encountered a similar problem.
-As you can see, the script creates a .tar file but instantly deletes it.
-I will reiterate that the main goal is to run Linux Docker containers on Windows, and from what I've read LinuxKit can build specific .iso images for use with Hyper-V that provide more efficiency, such as faster startup and less CPU and memory overhead, compared to a regular Hyper-V machine.
-But since I'm having trouble with linuxkit I will have to resort to using regular Hyper-V machine.
-","1. You are feeling frustrated because you're trying to use a project that was created to work on GNU/Linux, on a Windows system.  That simply will not work.  Windows and Linux are completely different in just about every way imaginable and it takes an enormous amount of effort for a project to be able to work on both of them.  Most projects don't have the time, energy, or interest to do that.
-This error:
-process_begin: CreateProcess(NULL, cp -R bin/* /usr/local/bin, ...) failed.
-
-is because you're trying to run the Linux program cp, on Windows.  And that program doesn't exist on Windows.
-Then you switched to WSL.  I don't know much about WSL, but you're moving in the right direction: WSL provides a Linux-like environment that you can run (some) Linux-style programs in.
-This error:
-cp: cannot stat 'bin/*': No such file or directory
-
-now is running Linux cp, but it's saying that it's trying to copy the files in the bin directory and there are no such files.  I can't explain why exactly but just to be clear: the install target in a Makefile usually will install files that you already built.  In your example text above, you didn't run a make command that actually builds anything (usually that's just make with no targets).
-So, maybe you can't run make install because there is nothing to install, because you didn't build the code yet.
-It seems to me that a project like linuxkit (just going from the name and description, I know nothing about it) which is used to build Linux distributions, will almost certainly NOT be something you can run on Windows.  Possibly not even in WSL.  You should check with the project to see what their requirements are.
-You may need to go back to the drawing board here: either get a separate system and install GNU/Linux on it, or create a real virtual machine (not just WSL) and run this there, or find another tool that is designed to run on Windows.
-
-2. The second error that you have encountered on Linux is because the go-compiler container image used in the build is old, and apparently no longer compatible with the actual code. The linuxkit/go-compile:7b1f5a37d2a93cd4a9aa2a87db264d8145944006 container uses go 1.16.3. You can update the Makefiles to use a newer version, just get an appropriate one from here: https://hub.docker.com/r/linuxkit/go-compile/tags -- At least at the moment of this writing, linuxkit/go-compile:c97703655e8510b7257ffc57f25e40337b0f0813 (which provides go 1.19.4) seems to work well.
-",LinuxKit
-"The official docker image for node is: https://hub.docker.com/_/node. This comes with yarn pre-installed at v1.x. I want to upgrade yarn to v2. However, I can't tell how yarn was installed on this image. It's presumably not via npm because if I do npm list, yarn does not show up in the list. I don't know of another way to install yarn. I thought maybe it was via the package manager for linuxkit, which I believe is the distribution used by the node docker image. However I looked at the package-manager for linuxkit – as I understand it they just use git clone and there are a list of packages available in /pkg in the github repository. However, yarn isn't one of those.
-Some steps towards an answer, maybe:
-
-How was the version of yarn on the node:latest docker image installed? [Maybe that will inform me as to how I can upgrade it.]
-How can I upgrade yarn on a LinuxKit docker image?
-How can I see the Dockerfile for the base image? [I.e. node:latest – is there a Dockerfile for that which tells us how the image was generated? If so that might tell me how yarn was installed.]
-
-","1. The Best Practices Guide recommends for (simple) local installs
-FROM node:6
-
-ENV YARN_VERSION 1.16.0
-
-RUN yarn policies set-version $YARN_VERSION
-
-in your Dockerfile. This guide is worth reading anyway ;-)
-
-2. If you can do something at runtime,
-simply use the below command to install the Yarn version you want:
-yarn set version stable
-
-On 2022-11-23, stable will install Yarn 3.3.0.
-
-3. According to the Dockerfile it is installed via tarball in both the alpine and debian versions:
-  && curl -fsSLO --compressed ""https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz"" \
-  && curl -fsSLO --compressed ""https://yarnpkg.com/downloads/$YARN_VERSION/yarn-v$YARN_VERSION.tar.gz.asc"" \
-  && gpg --batch --verify yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz \
-  && mkdir -p /opt \
-  && tar -xzf yarn-v$YARN_VERSION.tar.gz -C /opt/ \
-  && ln -s /opt/yarn-v$YARN_VERSION/bin/yarn /usr/local/bin/yarn \
-  && ln -s /opt/yarn-v$YARN_VERSION/bin/yarnpkg /usr/local/bin/yarnpkg \
-  && rm yarn-v$YARN_VERSION.tar.gz.asc yarn-v$YARN_VERSION.tar.gz \
-
-You can use similar commands to download your version and use ln to create the symlinks as above.
-",LinuxKit
-"Linuxkit is very interesting project so started playing with it. I have created image using redis-os.yml example https://raw.githubusercontent.com/linuxkit/linuxkit/master/examples/redis-os.yml
-When i boot redis-os it works but i am not seeing any redis server container, i found redis is running but not able to find where.
-(ns: getty) linuxkit-f6b2836a15cb:~# pstree
-init-+-containerd---7*[{containerd}]
-     |-containerd-shim-+-tini---rungetty.sh-+-rungetty.sh---login---sh
-     |                 |                    `-rungetty.sh---login---sh---bash--+
-     |                 `-11*[{containerd-shim}]
-     `-containerd-shim-+-redis-server---3*[{redis-server}]
-                       `-11*[{containerd-shim}]
-
-When I run runc list I am not seeing any redis container:
-  (ns: getty) linuxkit-f6b2836a15cb:~# runc list
-    ID           PID         STATUS      BUNDLE                          CREATED                         OWNER
-    000-dhcpcd   0           stopped     /containers/onboot/000-dhcpcd   2022-08-12T21:38:05.40297821Z   root
-
-I can see redis listening on the port:
-(ns: getty) linuxkit-f6b2836a15cb:~# netstat -natp
-Active Internet connections (servers and established)
-Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
-tcp        0      0 0.0.0.0:6379            0.0.0.0:*               LISTEN      421/redis-server
-tcp        0      0 :::6379                 :::*                    LISTEN      421/redis-server
-
-The question is: where is the redis container, and how do I get to its configuration file or exec into the container filesystem?
-","1. I figured out, yes its in namespace but syntax are little complex compare to docker command.
-(ns: getty) linuxkit-fa163e26c0e8:~# ctr -n services.linuxkit t exec -t --exec-id bash_1 redis sh
-/data # redis-cli
-127.0.0.1:6379> PING
-PONG
-127.0.0.1:6379>
-
-",LinuxKit
-"Unable to run linux containers from testcontainer in Windows Server 2019 (LinuxKit installed). Getting errors as mentioned below.
-2020-07-01 20:12:59.342 ERROR 4936 --- [           main] o.t.d.DockerClientProviderStrategy       : Could not find a valid Docker environment. Please check configuration. Attempted configurations were:
-2020-07-01 20:12:59.342 ERROR 4936 --- [           main] o.t.d.DockerClientProviderStrategy       :     NpipeSocketClientProviderStrategy: failed with exception InvalidConfigurationException (ping failed). Root cause TimeoutException (null)
-2020-07-01 20:12:59.342 ERROR 4936 --- [           main] o.t.d.DockerClientProviderStrategy       :     WindowsClientProviderStrategy: failed with exception TimeoutException (org.rnorth.ducttape.TimeoutException: java.util.concurrent.TimeoutException). Root cause TimeoutException (null)
-2020-07-01 20:12:59.342 ERROR 4936 --- [           main] o.t.d.DockerClientProviderStrategy       : As no valid configuration was found, execution cannot continue
-
-Docker Info as below:
-Client:
- Debug Mode: false
- Plugins:
-  cluster: Manage Docker clusters (Docker Inc., v1.2.0)
-
-Server:
- Containers: 4
-  Running: 1
-  Paused: 0
-  Stopped: 3
- Images: 6
- Server Version: 19.03.5
- Storage Driver: windowsfilter (windows) lcow (linux)
-  Windows:
-  LCOW:
- Logging Driver: json-file
- Plugins:
-  Volume: local
-  Network: ics internal l2bridge l2tunnel nat null overlay private transparent
-  Log: awslogs etwlogs fluentd gcplogs gelf json-file local logentries splunk syslog
- Swarm: inactive
- Default Isolation: process
- Kernel Version: 10.0 17763 (17763.1.amd64fre.rs5_release.180914-1434)
- Operating System: Windows Server 2019 Standard Version 1809 (OS Build 17763.1294)
- OSType: windows
- Architecture: x86_64
- CPUs: 4
- Total Memory: 16GiB
- Name: DHUBAS204
- ID: ZWHK:7HAM:2IKC:YTDS:RD7L:P2V6:6ECL:A2I3:X6T2:2N33:3KSQ:ZX24
- Docker Root Dir: C:\ProgramData\docker
- Debug Mode: true
-  File Descriptors: -1
-  Goroutines: 31
-  System Time: 2020-07-01T14:00:30.2657542+10:00
-  EventsListeners: 0
- Registry: https://index.docker.io/v1/
- Labels:
- Experimental: true
- Insecure Registries:
-  registry-1.docker.io
-  127.0.0.0/8
- Registry Mirrors:
-  https://hub-proxy.upm.asx.com.au/
- Live Restore Enabled: false
-
-PS:- Able to run linux containers without any issues from docker cli commands
-","1. No support for windows servers.
-https://github.com/testcontainers/testcontainers-java/issues/2960
-",LinuxKit
-"I'm unable to run bash scripts in ""runcmd:"" that aren't inline.
-runcmd:
-    - [ bash, -c, echo ""=========hello world========="" >>foo1.bar ]
-    - [ bash, -c, echo ""=========hello world========="" >>foo2.bar ]
-    - [ bash, -c, /usr/local/bin/foo.sh ]
-
-The first two lines are successfully run on the deployed Ubuntu instance. However, the foo.sh doesn't seem to run.
-Here is /usr/local/bin/foo.sh:
-#!/bin/bash
-echo ""=========hello world========="" >>foosh.bar
-
-foo.sh has executable permissions for root and resides on the MAAS server.
-I've looked at the following but they don't seem to sort out my issue:
-
-Cannot make bash script work from cloud-init
-run GO111MODULE=on go install . ./cmd/... in cloud init
-https://gist.github.com/aw/40623531057636dd858a9bf0f67234e8
-
-","1. Anything you run using runcmd must already exist on the filesystem. There is no provision for automatically fetching something from a remote host.
-You have several options for getting files there. Two that come to mind immediately are:
-
-You could embed the script in your cloud-init configuration using the write-files directive:
-write_files:
-  - path: /usr/local/bin/foo.sh
-    permissions: '0755'
-    content: |
-      #!/bin/bash
-      echo ""=========hello world========="" >>foosh.bar
-
-runcmd:
-  - [bash, /usr/local/bin/foo.sh]
-
-
-You could fetch the script from a remote location using curl (or similar tool):
-runcmd:
-  - [curl, -o, /usr/local/bin/foo.sh, http://somewhere.example.com/foo.sh]
-  - [bash, /usr/local/bin/foo.sh]
-
-
-
-",MAAS
-"Info + objective:
-I'm using MAAS to deploy workstations with Ubuntu.
-MAAS just deploys the machine with stock Ubuntu, and I then run a bash script I wrote to set up everything needed.
-So far, I've run that bash script manually on the newly deployed machines. Now, I'm trying to have MAAS run that script automatically.
- 
-
- 
-What I did + error:
-On the MAAS machine, I create the following curtin file called /var/snap/maas/current/preseeds/curtin_userdata_ubuntu which contains the following:
-write_files:
-  bash_script:
-    path: /root/script.sh
-    content: |
-      #!/bin/bash
-      echo blabla
-      ... very long bash script
-    permissions: '0755'
-
-late_commands:
-  run_script: [""/bin/bash /root/script.sh""]
-
-However, in the log, I see the following:
-known-caiman cloud-init[1372]: Command: ['/bin/bash /root/script.sh']
-known-caiman cloud-init[1372]: Exit code: -
-known-caiman cloud-init[1372]: Reason: [Errno 2] No such file or directory: '/bin/bash /root/script.sh': '/bin/bash /root/script.sh'
-
- 
-
- 
-Question
-I'm not sure putting such a large bash script in the curtin file is a good idea. Is there a way to store the bash script on the MAAS machine, have curtin upload it to the server, and then execute it? If not, is it possible to fix the error I'm having?
-Thanks ahead!
-","1. This worked executing the command:
-[""curtin"", ""in-target"", ""--"", ""/bin/bash"", ""/root/script.sh""]
-
-Though this method still means I have to write to a file and then execute it. I'm still hoping there's a way to upload a file and then execute it.
-
-2. I do not add my script to the curtin file.
-Instead, I run the command below to deploy servers:
-maas admin machine deploy $system_id user_data=$(base64 -w0 /root/script.sh)
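-The same thing can also be done through the MAAS API instead of the CLI. A hedged sketch (not from the original answer): the host URL, API key and system_id are placeholders, and it assumes the op=deploy call accepting a base64-encoded user_data parameter, mirroring what the CLI command above does:
-import base64
-from oauthlib.oauth1 import SIGNATURE_PLAINTEXT
-from requests_oauthlib import OAuth1Session
-
-MAAS_HOST = 'http://maas.example.com:5240/MAAS'
-CONSUMER_KEY, CONSUMER_TOKEN, SECRET = 'API_KEY'.split(':')  # key:token:secret
-
-maas = OAuth1Session(CONSUMER_KEY, resource_owner_key=CONSUMER_TOKEN,
-                     resource_owner_secret=SECRET, signature_method=SIGNATURE_PLAINTEXT)
-
-with open('/root/script.sh', 'rb') as f:
-    user_data = base64.b64encode(f.read()).decode()
-
-# Deploy the machine, passing the script as cloud-init user data
-resp = maas.post(f'{MAAS_HOST}/api/2.0/machines/abc123/?op=deploy',
-                 data={'user_data': user_data})
-resp.raise_for_status()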
-
-3. I would try
-runcmd:
-   - [/bin/scp, user@host:/somewhere/script.sh, /root/]
-
-
-late_commands:
-  run_script: ['/bin/bash', '/root/script.sh']
-
-This obviously implies that you inject the proper credentials on the machine being deployed.
-",MAAS
-"I provision machine by terraform and maas but I can't get ip address of provided machine in
-output of terraform.
-I'm using of suchpuppet as maas provider
-for IaC but just returned machine_id and doesn't return IP address of it.
-
-
-The output returns the machine_id instead of the machine's IP address.
-","1. Thanks for your comment.
-I resolve my problem by calling MAAS API and sending machine_id to MAAS API
-and getting the IP address of the machine for use in the configuration manager
-tools.
-from oauthlib.oauth1 import SIGNATURE_PLAINTEXT # fades
-from requests_oauthlib import OAuth1Session # fades
-
-MAAS_HOST = ""URL_OF_MAAS""
-CONSUMER_KEY, CONSUMER_TOKEN, SECRET = ""API_KEY_MAAS"".split("":"")
-
-maas = OAuth1Session(CONSUMER_KEY, resource_owner_key=CONSUMER_TOKEN, 
-resource_owner_secret=SECRET, signature_method=SIGNATURE_PLAINTEXT)
-
-nodes = maas.get(f""{MAAS_HOST}/api/2.0/machines/gfppbc/"")
-nodes.raise_for_status()
-
-print(nodes.json()['ip_addresses'][0])
-
-",MAAS
-"I am trying to setup MaaS360 device compliance through Azure AD Conditional Access and having an issue with Azure Integration menu in MaaS360.
-Basically one of the steps requires to setup ""Device compliance status sync for Android and iOS"" which requires the Azure tenant ID and Client ID established.
-I am not able to see this checkbox when I go to the Setup->Azure Integration menu in MaaS360.
-I only have 2 checkboxes that I am allowed to configure:
-
-User Authentication
-User Visibility
-
-I have been provided full admin roles on my account and I am not sure why else I cannot see this menu.
-Here is the IBM article that I am following and if you see step 7 it shows the menu option.
-https://www.ibm.com/docs/en/maas360?topic=iaam-integrating-maas360-microsoft-enforce-device-compliance-through-azure-ad-conditional-access
-Any help is appreciated.
-Thanks
-","1. I was able to solve this, needed to enable this by opening a case with IBM to enable Azure conditional access.
-",MAAS
-"I cloned the repo https://github.com/ManageIQ/manageiq/ and used the Dockerfile to build the docker image. But when I start the container none of the files are served.
-It seems the required files are under the public/ directory, but I'm not sure where they should be copied manually. I tried copying all files to app/assets/ but I still get the same error.
-Any idea where the public/* files should be copied to?
-This is how the default login page looks like
-
-And there's a lot of errors on the console.
-
-config/application.rb says the following and I've tried that already as stated above.
-# TODO: Move to asset pipeline enabled by moving assets from public to app/assets
-config.asset_path = ""%s""
-
-","1. The monolithic docker image is not built directly from source as is normally expected from a Dockerfile. Instead, it is built on top of the podified (kubernetes) image for the UI worker, with some bootstrapping on top (ref). The podified images use RPMs that are built nightly from the various source repositories, which includes packing the UI code that is found in /public/packs.
-If the image build is failing then either there is something wrong with the UI worker image or some services are not starting. Your best bet is to open an issue at https://github.com/ManageIQ/manageiq/issues .
-",ManageIQ
-"I'm calling AWX template from ManageIQ.  I'm passing 9 variables to the playbook (with prompt on launch active).  The playbook is successfully called, and all of the vars come through.  However two of the vars are supposed to be arrays.  Instead they come through to AWX as strings: e.g., '[""chefclient""]' instead of [""chefclient""].
-I have confirmed that these vars are indeed of type array in ManageIQ before I pass them to the AWX template.
-Any clue why this is happening?  Do all vars get irresistibly converted to strings?  How do I fix this?
-Thank you!
-","1. According to the RedHat developers on Gitter.im, this is a shortcoming in the launch_ansible_method in ManageIQ.  I.e., it always converts arrays to strings.  We have opened an issue on GitHub to address this.
-
-2. I have had a variable in Ansible Tower/AWX that takes input as text with server names as an array/list, for example [""node1"",""node2"",""node3""], and once the job is launched I can see it in the extra variables as '[""node1"",""node2"",""node3""]'. I'm not sure why it does that, but it doesn't affect your subsequent Ansible operations on that variable. Not all variables get single quotations; it only happens when you use an array/list.
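-As an aside (not from the original answers): if the value does arrive as a JSON-encoded string and you need a real list inside the playbook, it can be decoded, which is what Ansible's from_json filter does. A minimal illustration of that conversion in plain Python:
-import json
-
-raw = '[""chefclient""]'      # the stringified value as it shows up in extra vars
-packages = json.loads(raw)   # back to a real list: ['chefclient']
-print(packages[0])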
-
-3. I tried to replicate this on my end with AWX installed locally. I passed the v_packages variable data as [""apache2"",""nginx""] and I don't see that issue now.
-
-
-",ManageIQ
-"I feel like I'm missing something really simple. I've got the simplest possible CIKernel, it looks like this:
-extern ""C"" float4 Simple(coreimage::sampler s) {
-    float2 current = s.coord();
-    float2 anotherCoord = float2(current.x + 1.0, current.y);
-    float4 sample = s.sample(anotherCoord);  // s.sample(current) works fine
-    return sample;
-}
-
-It's (in my mind) incrementing the x position of the sampler by 1 and sampling the neighboring pixel. What I get in practice is a bunch of banded garbage (pictured below). The sampler seems to be pretty much undocumented, so I have no idea whether I'm incrementing by the right amount to advance one pixel. The weird banding is still present if I clamp anotherCoord to s.extent(). Am I missing something really simple?
-
-","1. The coordinates of coreimage:sampler are relative, between 0 and 1, where [0,0] is the lower left corner of the image and [1,1] is the upper right. So when you add 1.0 to that, you are effectively sampling outside the defined image space.
-Core Image provides access to the coreimage:destination to get absolute coordinates (pixels). Simply add a destination as the last parameter to your kernel function (no need to pass anything when you invoke the kernel with apply):
-extern ""C"" float4 Simple(coreimage::sampler s, coreimage::destination dest) {
-    float2 current = dest.coord();
-    float2 anotherCoord = current + float2(1.0, 0.0);
-    float4 sample = s.sample(s.transform(anotherCoord));
-    return sample;
-}
-
-dest.coord() gives you the coordinates in absolute (pixel) space, and s.transform translates it back into (relative) sampler space.
-",Metal³
-"I'm trying to recreate the sparkle effect from Apple's activity ring animation using SwiftUI. I've found Paul Hudson's Vortex library, which includes a sparkle effect, but as a beginner in SwiftUI animations, I'm struggling to modify it to match my vision. Can anyone offer guidance on how to achieve this effect?
-Here's the Vortex project I'm referring to: Vortex project
-This is what I envision it should look like: YouTube Link
-This YouTube video shows the effect I'm aiming for: YouTube Link
-I have attempted to implement it, but the result isn't what I expected. Here's my current code:
-import SwiftUI
-import Foundation
-import Vortex
-
-struct ContentView: View {
-    
-    @State private var isAnimatingFast = false
-    var foreverAnimationFast: Animation {
-        Animation.linear(duration: 1.0)
-            .repeatForever(autoreverses: false)
-    }
-    
-    @State private var isAnimatingSlow = false
-    var foreverAnimationSlow: Animation {
-        Animation.linear(duration: 1.5)
-            .repeatForever(autoreverses: false)
-    }
-    
-    var body: some View {
-        ZStack {
-            VortexView(customMagic) {
-                Circle()
-                    .fill(.blue)
-                    .frame(width: 10, height: 10)
-                    .tag(""sparkle"")
-            }
-            .frame(width: 250, height: 250)
-            .rotationEffect(Angle(degrees: isAnimatingFast ? 360 : 0.0))
-            .onAppear {
-                withAnimation(foreverAnimationFast) {
-                    isAnimatingFast = true
-                }
-            }
-            .onDisappear { isAnimatingFast = false }
-            
-            VortexView(customSpark) {
-                Circle()
-                    .fill(.white)
-                    .frame(width: 20, height: 20)
-                    .tag(""circle"")
-            }
-            .rotationEffect(Angle(degrees: isAnimatingSlow ? 360 : 0.0))
-            .onAppear {
-                withAnimation(foreverAnimationSlow) {
-                    isAnimatingSlow = true
-                }
-            }
-            .onDisappear { isAnimatingSlow = false }
-            
-            VortexView(customSpark) {
-                Circle()
-                    .fill(.white)
-                    .frame(width: 20, height: 20)
-                    .tag(""circle"")
-            }
-            .rotationEffect(Angle(degrees: isAnimatingSlow ? 180 : -180))
-            
-            VortexView(customSpark) {
-                Circle()
-                    .fill(.white)
-                    .frame(width: 20, height: 20)
-                    .tag(""circle"")
-            }
-            .rotationEffect(Angle(degrees: isAnimatingSlow ? 90 : -370))
-            
-            VortexView(customSpark) {
-                Circle()
-                    .fill(.white)
-                    .frame(width: 20, height: 20)
-                    .tag(""circle"")
-            }
-            .rotationEffect(Angle(degrees: isAnimatingSlow ? 370 : -90))
-        }
-    }
-}
-
-let customMagic =
-    VortexSystem(
-        tags: [""sparkle""],
-        shape: .ring(radius: 0.5),
-        lifespan: 1.5,
-        speed: 0,
-        angleRange: .degrees(360),
-        colors: .random(.red, .pink, .orange, .blue, .green, .white),
-        size: 0.5
-    )
-
-let customSpark = VortexSystem(
-    tags: [""circle""],
-    birthRate: 150,
-    emissionDuration: 0.2,
-    idleDuration: 0,
-    lifespan: 0.75,
-    speed: 1,
-    speedVariation: 0.2,
-    angle: .degrees(330),
-    angleRange: .degrees(20),
-    acceleration: [0, 3],
-    dampingFactor: 4,
-    colors: .ramp(.white, .yellow, .yellow.opacity(0)),
-    size: 0.1,
-    sizeVariation: 0.1,
-    stretchFactor: 8
-)
-
-#Preview {
-    ContentView()
-}
-
-Any insights or suggestions on how to better match the desired animation effect would be greatly appreciated!
-","1. i'm guessing you will have a hard time reproducing that video using Vortex inside SwiftUI (as opposed to doing the whole thing using particles in spritekit). but here's a rough approximation
-import SwiftUI
-import Foundation
-import Vortex
-
-struct ContentView: View {
-    @State private var isAnimating = false
-    
-    var body: some View {
-        ZStack {
-            
-            Color.black
-             
-            Group {
-                ForEach(0..<18) { index in
-                    //a single pinwheel sparkler
-                    VortexView(customSpark) {
-                        Circle()
-                            .fill(.white)
-                            .blendMode(.plusLighter)
-                            .frame(width: 32)
-                            .tag(""circle"")
-                    }
-                    .frame(width:200, height:200)
-                    .offset(y:-100)
-                    .rotationEffect(Angle(degrees: Double(index) * 20))
-                    .opacity(isAnimating ? 1 : 0)
-                    .animation(
-                         Animation.easeInOut(duration: 0.2)
-                             .delay(Double(index) * 0.075),
-                         value: isAnimating
-                     )
-                }
-                .onAppear {
-                    withAnimation {
-                        isAnimating = true
-                    }
-                    
-                    //disappear in a ring
-                    Timer.scheduledTimer(withTimeInterval: 2, repeats: false) { _ in
-                        withAnimation {
-                            isAnimating = false
-                        }
-                    }
-                }
-            }
-            .onAppear {
-                withAnimation {
-                    isAnimating = true
-                }
-            }
-        }
-    }
-}
-
-//pinwheel sparkler
-let customSpark = VortexSystem(
-    tags: [""circle""],
-    birthRate: 20,
-    emissionDuration: 5,
-    lifespan: 2,
-    speed: 0.75,
-    speedVariation: 0.5,
-    angle: .degrees(90),
-    angleRange: .degrees(8),
-    colors: .ramp(.white, .red, .red.opacity(0)),
-    size: 0.06
-)
-
-#Preview {
-    ContentView()
-}
-
-",Metal³
-"I'm transferring my program from opengl to metal. In the original I use tightly packed vertex data, which there are no problems with describing in opengle, but in metal it doesn’t work.
-Tightly packed data in my opinion is: [x,y,z, ... x,y,z, nx,ny,nz, ... nx,ny,nz, r,g,b, ... r,g,b], normal packed data is [x,y,z,nx,ny,nz,r,g,b ... x,y,z,nx,ny,nz,r,g,b], where x,y,z - coordinates, nx,ny,nz - normals and r,g,b - colors.
-In OpenGL I set the stride to zero (or to the step in bytes between vertices within the same data type) and an offset to the beginning of each data block. But here I get an error.
-Code
-        let vertices: [Float32] = [...]
-        vbuffer = device.makeBuffer(bytes: vertices, length: vertices.count * MemoryLayout<Float32>.size, options: [])!
-
-        vertexDescriptor = MTLVertexDescriptor()
-        vertexDescriptor.attributes[0].format = .float3
-        vertexDescriptor.attributes[0].offset = 0
-        vertexDescriptor.attributes[0].bufferIndex = 0
-        vertexDescriptor.attributes[1].format = .float3
-        vertexDescriptor.attributes[1].offset = MemoryLayout<Float>.size * vertices.count / 2 //half of vertex buffer, for this buffer contains only positions and colors
-        vertexDescriptor.attributes[1].bufferIndex = 0
-        vertexDescriptor.layouts[0].stride = MemoryLayout<Float>.size * 3
-        vertexDescriptor.layouts[0].stepRate = 1
-        vertexDescriptor.layouts[0].stepFunction = .perVertex
-        vertexDescriptor.layouts[1].stride = MemoryLayout<Float>.size * 3
-        vertexDescriptor.layouts[1].stepRate = 1
-        vertexDescriptor.layouts[1].stepFunction = .perVertex
-
-error
-validateVertexAttribute, line 724: error 'Attribute at index 1: the attribute offset (48) + attribute size (12) must be <= the stride of the buffer (12) at buffer index 0.'
-
-
-I don't understand how to solve my problem. How to describe this data? Mixing data will be inconvenient, time-consuming and resource-intensive. Using different buffers is also not very advisable
-","1. As always, it’s worth asking a question, so I find the answer myself
-        vertexDescriptor = MTLVertexDescriptor()
-        vertexDescriptor.attributes[0].format = .float3
-        vertexDescriptor.attributes[0].offset = 0
-        vertexDescriptor.attributes[0].bufferIndex = 0
-        vertexDescriptor.attributes[1].format = .float3
-        vertexDescriptor.attributes[1].offset = 0
-        vertexDescriptor.attributes[1].bufferIndex = 1
-        vertexDescriptor.layouts[0].stride = MemoryLayout<Float32>.size * 3
-        vertexDescriptor.layouts[1].stride = MemoryLayout<Float32>.size * 3
-
-//where draw
-            commandEncoder.setVertexBuffer(vbuffer, offset: 0, index: 0)
-            commandEncoder.setVertexBuffer(vbuffer, offset: MemoryLayout<Float>.size * vertices.count / 2, index: 1)
-
-
-",Metal³
-"I'm trying to load a large image into a MTLTexture and it works with 4000x6000 images. But when I try with 6000x8000 it can't.
-func setTexture(device: MTLDevice, imageName: String) -> MTLTexture? {
-        let textureLoader = MTKTextureLoader(device: device)
-        
-        var texture: MTLTexture? = nil
-        
-        //  In iOS 10 the origin was changed.
-        let textureLoaderOptions: [MTKTextureLoader.Option: Any]
-        if #available(iOS 10.0, *) {
-            let origin = MTKTextureLoader.Origin.bottomLeft.rawValue
-            textureLoaderOptions = [MTKTextureLoader.Option.origin : origin]
-        } else {
-            textureLoaderOptions = [:]
-        }
-        
-        if let textureURL = Bundle.main.url(forResource: imageName, withExtension: nil, subdirectory: ""Images"") {
-            do {
-                texture = try textureLoader.newTexture(URL: textureURL, options: textureLoaderOptions)
-            } catch {
-                print(""Texture not created."")
-            }
-        }
-        return texture
-    }
-
-Pretty basic code. I'm running it on an iPad Pro with an A9 chip, GPU family 3. It should handle textures this large. Should I manually tile it somehow if it doesn't accept this size? In that case, what's the best approach: using MTLRegionMake to copy bytes, slicing in Core Image or a Core Graphics context...
-I appreciate any help
-","1. Following your helpful comments I decided to load it manually drawing to a CGContext and copying to a MTLTexture. I'm adding the solution code below. The context shouldn't be created each time a texture is created, it's better to put it outside the function and keep reusing it.
-// Grab the CGImage, w = width, h = height...
-    
-let context = CGContext(data: nil, width: w, height: h, bitsPerComponent: bpc, bytesPerRow: (bpp / 8) * w, space: colorSpace!, bitmapInfo: bitmapInfo.rawValue)
-        
-let flip = CGAffineTransform(a: 1, b: 0, c: 0, d: -1, tx: 0, ty: CGFloat(h))
-context?.concatenate(flip)
-context?.draw(cgImage, in: CGRect(x: 0, y: 0, width: CGFloat(w), height: CGFloat(h)))
-        
-let textureDescriptor = MTLTextureDescriptor()
-textureDescriptor.pixelFormat = .rgba8Unorm
-textureDescriptor.width = w
-textureDescriptor.height = h
-        
-guard let data = context?.data else {print(""No data in context.""); return nil}
-        
-let texture = device.makeTexture(descriptor: textureDescriptor)
-texture?.replace(region: MTLRegionMake2D(0, 0, w, h), mipmapLevel: 0, withBytes: data, bytesPerRow: 4 * w)
-        
-return texture
-
-
-2. I had this issue before: a texture would load on one device and not on another. I think it is a bug in the texture loader.
-You can load a texture manually using a CGImage and a CGContext: draw the image into the context, create an MTLTexture, then copy the bytes from the CGContext into the texture using an MTLRegion.
-It's not foolproof; you have to make sure to use the correct pixel format for the Metal texture or you'll get strange results, so either you code for one specific image format you're importing, or do a lot of checking. Apple's Basic Texturing example shows how you can change the color order before writing the bytes to the texture using MTLRegion.
-",Metal³
-"I looking for a scenario where i can reach a instance created in a OPENSTACK Rocky version with an IP directly on the network created (inst-1 launched on 172.6.0.0/24 network) got an ip address of 172.6.0.5  So i want to ping 172.6.0.5 directly from controller machine without using the floating ip.
-I know the provider network concept by associating a floating ip for the instance to reach the VM externally. But i am checking for the other approach to directly get access the VM IP from controller. Can someone help me out if you have any suggestion on this.
-Thanks in advance.
-","1. You need a route to the tenant network to which the instance is attached. In case the external bridge, often named br-ex, is located on that controller, just create a suitable routing table entry. Assuming the subnet is 172.6.0.0/24, this command takes care of it:
-ip route add 172.6.0.0/24 dev br-ex
-
-How to make this route persistent depends on the network management tool used on that server.
-Note that this only gives you access to that instance from that controller, not from other devices.
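-If NetworkManager happens to manage that bridge (an assumption; your tooling may differ), a persistent equivalent of the route above could be added like this (note the connection name may differ from the device name):
-nmcli connection modify br-ex +ipv4.routes ""172.6.0.0/24""
-nmcli connection up br-ex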
-",OpenStack
-"I applied half of a terraform plan then lost my state file, so now terraform doesn't know about the AWS resources that it created previously. When I try to apply again, it complains that conflicting resources already exist.
-Manually deleting them and re-creating them will be faster than manually importing them, but I am hopeful there's an automated way to delete them that's even faster?
-","1. Unfortunately the state is the sole location where Terraform tracks which objects a particular configuration is managing, so if you have lost it then Terraform has no way to recover that information automatically.
-Unless you have some other record of what actions your previous Terraform runs performed -- for example, saved output produced by Terraform CLI reporting what objects it was creating -- you will need to rely only on information you can find from your target platform to determine which objects ought to be deleted.
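-If you do manage to identify which surviving objects correspond to resources in your configuration, one alternative to deleting them is to re-adopt them into a fresh state with import blocks (supported by OpenTofu and recent Terraform). A minimal sketch with hypothetical names:
-import {
-  to = aws_s3_bucket.assets            # resource address from your configuration
-  id = ""name-of-the-existing-bucket""   # identifier of the object that already exists
-}
-Running tofu plan afterwards shows the planned imports alongside any remaining creations, so nothing has to be destroyed and re-created.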
-",OpenTofu
-"I've successfully managed to create a Terraform file for configuring Rundeck to run an inline BASH script. However, I want to take things further by figuring out how to automatically configure a job that makes use of Ansible.
-From reading the provider documentation for a job, it looks like I need to configure the step_plugin within the command block that references the Ansible plugin. This takes a type and a config as shown below:
-
-Unfortunately, unlike the other areas of the documentation, it does not list possible values for the type so I have had to guess, with no success thus far. I always get an error message similar to:
-
-Workflow has one or more invalid steps: [1: [The step plugin type ""com.batix.rundeck.plugins.AnsiblePlaybookInlineWorkflowNodeStep"" is not valid: Plugin not found: com.batix.rundeck.plugins.AnsiblePlaybookInlineWorkflowNodeStep]]
-
-I did look up the list of plugins from the GET /plugins/list endpoint and tried these names but they didn't work:
-
-I also tried lots of variations of camelCase, snake_case, etc. on the words ""ansible"", ""playbook"" and ""inline"", with no combination seeming to work. I saw that the returned API output stated that builtin was set to false. However, if I go to artifact/index/configurations then I can see the Uninstall button, suggesting that the plugins are installed.
-
-Question
-Does anybody know how to configure an Ansible job in Rundeck through Terraform/Tofu and can provide a basic example?
-","1. You must add the node_step_plugin block (inside the command block) pointing to the plugin's name and the config. A good way to see which options elements you need is to create a mockup ansible job, export it in YAML format, and then see the job definition content to apply on the terraform config subblock.
-The terraform rundeck deployment file looks as follows (a very basic example tested on Terraform 1.8.0 and Rundeck 5.2.0):
-terraform {
-  required_providers {
-    rundeck = {
-      source  = ""rundeck/rundeck""
-      version = ""0.4.7""
-    }
-  }
-}
-
-provider ""rundeck"" {
-  url         = ""http://rundeck_url:4440/""
-  api_version = ""47""
-  auth_token  = ""rundeck_auth_token""
-}
-
-resource ""rundeck_project"" ""terraform"" {
-  name        = ""terraform""
-  description = ""Sample Created using Terraform Rundeck Provider""
-  resource_model_source {
-    type = ""file""
-    config = {
-      format = ""resourcexml""
-      file = ""/path/to/your/resources.xml""
-      writable = ""true""
-      generateFileAutomatically = ""true""
-    }
-  }
-  extra_config = {
-    ""project.label"" = ""Ansible Example""
-  }
-}
-
-resource ""rundeck_job"" ""ansiblejob"" { 
-  name              = ""Ansible Test""
-  project_name      = ""${rundeck_project.terraform.name}""
-  node_filter_query = ""tags: ansible""
-  description       = ""Ansible Playbook Test""
-
-  command {
-    node_step_plugin {
-      type = ""com.batix.rundeck.plugins.AnsiblePlaybookWorflowNodeStep""
-      config = {
-        ansible-base-dir-path = ""/path/to/ansible/config/""
-        ansible-become = ""false""
-        ansible-binaries-dir-path = ""/path/to/ansible/executable/""
-        ansible-playbook = ""/path/to/your/playbook/ping.yml""
-        ansible-ssh-passphrase-option = ""option.password""
-        ansible-ssh-use-agent = ""false""
-      }
-    }
-  }
-}
-
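-For reference, exporting a hand-made Ansible job in YAML should show the same type string and configuration keys used above; the relevant fragment of a typical export looks roughly like this (paths illustrative):
-sequence:
-  commands:
-  - nodeStep: true
-    type: com.batix.rundeck.plugins.AnsiblePlaybookWorkflowNodeStep
-    configuration:
-      ansible-playbook: /path/to/your/playbook/ping.yml
-      ansible-become: 'false'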
-",OpenTofu
-"I have an OpenTofu state that works fine.
-Here is the provider definition:
-terraform {
-  required_providers {
-    azurerm = {
-      source = ""hashicorp/azurerm""
-    }
-    databricks = {
-      source = ""databricks/databricks""
-      version = ""1.31.0""
-    }
-  }
-}
-
-It works fine to do tofu init/plan/apply.
-When I try to do terraform init I get the following errors:
-
-Error: Incompatible provider version │ │ Provider registry.terraform.io/databricks/databricks v1.31.0 does not have a  package available for your current platform, windows_386.
-Error: Incompatible provider version │ │ Provider registry.terraform.io/databricks/databricks v1.38.0 does not have a package available for your current platform, windows_386.
-
-Output of terraform and tofu version
-PS C:\code\terraform-infrastructure> terraform --version
-Terraform v1.7.2
-on windows_386
-+ provider registry.opentofu.org/databricks/databricks v1.31.0
-+ provider registry.opentofu.org/hashicorp/azurerm v3.83.0
-
-Your version of Terraform is out of date! The latest version
-is 1.7.5. You can update by downloading from https://www.terraform.io/downloads.html
-
-PS C:\code\terraform-infrastructure> tofu --version
-OpenTofu v1.6.1
-on windows_amd64
-+ provider registry.opentofu.org/databricks/databricks v1.31.0
-+ provider registry.opentofu.org/hashicorp/azurerm v3.83.0
-PS C:\code\terraform-infrastructure> 
-
-I have tried:
-
--upgrade, -migrate-state, -reconfigure flags
-Downgrade from 1.38.0 to 1.31.0 with OpenTofu
-
-","1. The issue was that I was using windows_386 Terraform, not the windows_amd64 version. It works fine after switching to the amd64 version.
-",OpenTofu
-"I'm trying to run and debug a lambda function locally using the AWS CLI and OpenTofu. Using sam build --hook-name terraform works great. However, now that Terraform is no longer open-source, I'd like to migrate to OpenTofu. Is there a way to use sam build with an OpenTofu hook?
-","1. Since OpenTofu is backwards-compatible with Terraform, create a symbolic link for Terraform that is actually OpenTofu.
-Note: you may need to use sudo.
-$ which tofu
-/usr/local/bin/tofu
-
-$ ln -s /usr/local/bin/tofu /usr/local/bin/terraform
-
-Another option is to install Terraform locally and use Terraform for sam build.
-If you are only looking to debug a TypeScript lambda locally with VSCode, follow this.
-",OpenTofu
-"I have set my GCP service account keys as instructed in this tutorial:
-pulumi --config-file stacks/Pulumi.dev-core.yaml \
-   -s dev-core config \
-   set gcp:credentials ./stacks/dec.sa-pulumi-dev-keys.json    
-
-This maps to a service account with GCP role Cloud KMS CryptoKey Encrypter/Decrypter,
-which should allow me to set secrets using KMS, example:
-pulumi config set --path stack:data.test-foo-bar --secret ""testvalue"" --config-file stacks/Pulumi.dev-core.yaml
-
-but I get error:
-error: secrets (code=PermissionDenied): rpc error: code = PermissionDenied 
-desc = Permission 'cloudkms.cryptoKeyVersions.useToDecrypt' 
-denied on resource 'projects/example/locations/global/keyRings/example/cryptoKeys/my-key' 
-(or it may not exist).
-
-I have double checked the resource path and it does exist in GCP.
-Also this is how my config file looks like:
-config:
-  gcp:credentials: ./stacks/dec.sa-dev-pulumi-keys.json # file is gitignored must be downloaded from lastpass
-  gcp:impersonateServiceAccount: my-sa@example.iam.gserviceaccount.com
-
-If I set service account keys via following command:
-export GOOGLE_CREDENTIALS=$(cat stacks/dec.sa-dev-pulumi-keys.json)  
-
-Then I can run set secret command without issues:
-# now it works
-pulumi config set --path stack:data.test-foo-bar --secret ""testvalue"" --config-file stacks/Pulumi.dev-core.yaml
-
-But doing this is not scalable for multiple stacks and environments. Why doesn't the initial command below work?
-pulumi --config-file stacks/Pulumi.dev-core.yaml \
-   -s dev-core config \
-   set gcp:credentials ./stacks/dec.sa-pulumi-dev-keys.json    
-
-","1. Unfortunately, as of May 2024 there's no fix for it the only workaround it to use GOOGLE_CREDENTIALS as per pulumi's own documentation
-export GOOGLE_CREDENTIALS=$(cat credentials.json)
-
-# or 
-
-FILE_PATH=""relative/path/my-keys.json""
-export GOOGLE_CREDENTIALS=$(cat $FILE_PATH)
-
-
-Here is the GitHub issue and the answer from a Pulumi maintainer recommending GOOGLE_CREDENTIALS while this remains unfixed: https://github.com/pulumi/pulumi-gcp/issues/989#issuecomment-2090906460
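-To make this a bit less painful across several stacks, one option is a small shell helper that exports the right key file before each Pulumi invocation. A sketch, assuming a hypothetical per-stack key-file naming convention:
-# Usage: pulumi_with_creds dev-core config set ...
-pulumi_with_creds() {
-  local stack=""$1""; shift
-  export GOOGLE_CREDENTIALS=""$(cat ""stacks/dec.sa-${stack}-keys.json"")""
-  pulumi -s ""$stack"" ""$@""
-}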
-",Pulumi
-"I have installed the openssh rpms
-In the default sshd_config file, I do not see the ""Include"" directive mentioned, and ""/etc/ssh/sshd_config.d"" is not created by the rpm. So what I did is create the /etc/ssh/sshd_config.d directory and add ""Include /etc/ssh/sshd_config.d/*.conf"" as the last line of /etc/ssh/sshd_config. I am using Puppet to override the default sshd_config file by setting the sshd_config_path parameter of the Puppet ssh module to ""/etc/sshd_config.d/01_sshd_config.conf"". The ssh module just takes a copy of the sshd_config file and replaces the lines as per the Puppet configuration. With this I face issues like conflicting and duplicate values for many sshd_config settings. It would be really helpful if someone could help me out with this issue. Thanks in advance!!
-Adding the Include directive at the top also doesn't solve my problem. I am aware of this note from the sshd_config man page:
-
-first obtained value for each parameter is used in sshd : Order matters only when conflicting parameters exist, as the first obtained value for each parameter is used
-
-","1. 
-In the default sshd_config file, I do not see ""Include"" directive mentioned in it. grep -nr ""Include"" /etc/ssh/sshd_config returns nothing. Also ""/etc/ssh/sshd_config.d"" is not created by rpm.
-
-I don't find that particularly surprising.  The logical contents of sshd_config are order- and context-sensitive, so although there is an Include directive available, using it to provide for generic drop-in sshd configuration files doesn't work very well.  I could see a more targeted approach involving drop-in files, perhaps, but not what you're actually trying to do.
-Nevertheless, ...
-
-what I did is created /etc/ssh/sshd_config.d directory and added this ""Include /etc/ssh/sshd_config.d/*.conf"" in last line of /etc/ssh/sshd_config.
-
-... sure, you can do that if you want.  But this ...
-
-I am using puppet to override the default sshd_config file by setting sshd_config_path parameter in puppet ssh module to ""/etc/sshd_config.d/custom_sshd_config.conf"".
-
-... seems both to misrepresent fact and to be unwise.  In the first place, no, you are not overriding the default config file.  That suggests that sshd would use the config file you specify instead of /etc/sshd/sshd_config, but clearly that's not happening.  What you are doing is simply telling Puppet to manage a different file instead.
-In the second place, doing that in the way you are doing it is downright begging for exactly the kind of problem you observe: duplicate / inconsistent configuration.  You're managing etc/sshd_config.d/custom_sshd_config.conf as if it were a complete sshd configuration file (because that's what the module does), yet the only way it gets used at all is by being included by the main config file.
-It's not clear how you even expect to gain anything from this, when you could simply manage the regular config file directly.  You say that you can't do that, but you already are doing it, in the sense that you are placing an Include directive in it that was not provided by the RPM.
-
-What I expect is ""Include directive file should behave like overrides of default sshd_config"". Is there any way to automate this in puppet like whenever an sshd configuration is overridden in custom_sshd_config file that needs to be commented in default sshd_config so that it will be overridden in real.
-
-The module you're using (see also below) does not do this, and I don't see why it would.  If you're going to modify the main config file anyway, then why would you not put the configuration directives you want there?  Or if indeed you must not modify that file, then why are you proposing an approach that involves modifying it (further)?
-One way to move forward would be to indeed change which file sshd uses for its main config file.  You could do that on EL8 by managing sshd's systemd unit file to add an appropriate -f option to the sshd command line it uses.
-Or if you're ok with modifying /etc/ssh/sshd_config after all, but you still want drop-in files, then you could consider removing everything but the Include directive from the main config file, and otherwise proceeding as you already are doing.
-But if you want exactly what you describe, then you'll need to write your own module to handle it.
-Speaking of modules, though, the one you linked to is a less-maintained and little used fork of the massively popular, Puppet-approved one by ghoneycutt.  At this point, the two are out of sync.  Unless you need something that's not in ghoneycutt's version, I would strongly suggest switching.
-
-2. You should be using the Include method as you've described. However, it needs to be near the top of your sshd_config file, not at the bottom.
-sshd uses the first defined value that it finds for a given directive and ignores any definitions after that.
-So you can create a file in /etc/ssh/sshd_config.d/* and it will override any settings in /etc/ssh/sshd_config, as long as the Include directive comes before any directives you intend to override. An example is sketched below.
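-A minimal sketch of that arrangement, managing only a drop-in file with Puppet instead of the whole sshd_config (file name and settings are illustrative, and a Service['sshd'] resource is assumed to exist elsewhere):
-file { '/etc/ssh/sshd_config.d/99_puppet.conf':
-  ensure  => file,
-  owner   => 'root',
-  group   => 'root',
-  mode    => '0600',
-  content => ""PasswordAuthentication no\nPermitRootLogin no\n"",
-  notify  => Service['sshd'],
-}
-The key point remains that the Include line in the main file has to sit above any directive you intend this drop-in to override.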
-",Puppet
-"I would like to use Puppet to manage a directory. I want the directory to be owned by user root and group admin, with 0770 permissions.
-I would like all files in the directory to be owned by user apache and group admin with 0600 permissions.
-I have yet to find a way to achieve this using the file resource in Puppet. I have tried using two resources like so:
-file { 'phpsessions_files':
-    path => '/var/phpsessions',
-    ensure => directory,
-    owner => 'apache',
-    group => 'admin',
-    mode => 0600,
-    recurse => true,
-    before => File['phpsessions_dir'],
-}
-
-file { 'phpsessions_dir':
-    path => '/var/phpsessions',
-    recurse => false,
-    owner => 'root',
-    group => 'admin',
-    mode => 0770,
-}
-
-But I am not allowed to create two file resources to the same path and I can't see how to achieve what I want with just one resource.
-Your help is much appreciated.
-","1. Create a define containing an exec to change the mode of the directory after it is recursed.
-http://projects.puppetlabs.com/projects/1/wiki/File_Permission_Check_Patterns (WayBackMachine memento)
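-A rough sketch of that linked pattern (check commands are illustrative, and note that the recursive file resource will still flip the directory mode on each run before the exec corrects it):
-file { '/var/phpsessions':
-  ensure  => directory,
-  owner   => 'apache',
-  group   => 'admin',
-  mode    => '0600',
-  recurse => true,
-}
-
-# Re-assert ownership and mode on the directory itself after the recursive pass.
-exec { 'fix_phpsessions_dir':
-  command  => '/bin/chown root:admin /var/phpsessions && /bin/chmod 0770 /var/phpsessions',
-  unless   => '/usr/bin/stat -c %a:%U:%G /var/phpsessions | /bin/grep -q 770:root:admin',
-  provider => shell,
-  require  => File['/var/phpsessions'],
-}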
-
-2. To the best of my knowledge this is not possible in Puppet.  I would manage only the following:
-  file { 'phpsessions_dir':
-      path => '/var/phpsessions',
-      recurse => false,
-      owner => 'root',
-      group => 'admin',
-      mode => 0770,
-  }
-
-PHP/Apache should create the files within this folder with the correct permissions; if they don't, fix that in php.ini.  If you are worried that something else is going to come along and change the permissions, then fall back to a cron job, or better yet a systemd timer, to periodically check and correct them.
-",Puppet
-"I'm new to puppet and attempting to use the chocolatey module for windows clients.  I've installed all of the module dependencies:
-root@puppet:/etc/puppetlabs/code/environments/production/modules/chocolatey/manifests# puppet module list --environment production
-/etc/puppetlabs/code/environments/production/site-modules
-├── adhoc (???)
-├── profile (???)
-└── role (???)
-/etc/puppetlabs/code/environments/production/modules
-├── puppetlabs-acl (v5.0.0)
-├── puppetlabs-apt (v9.4.0)
-├── puppetlabs-chocolatey (v8.0.0)
-├── puppetlabs-concat (v9.0.2)
-├── puppetlabs-facts (v1.4.0)
-├── puppetlabs-inifile (v6.1.1)
-├── puppetlabs-powershell (v6.0.0)
-├── puppetlabs-puppet_agent (v4.19.0)
-├── puppetlabs-pwshlib (v1.1.1)
-├── puppetlabs-reboot (v5.0.0)
-├── puppetlabs-registry (v5.0.1)
-├── puppetlabs-ruby_task_helper (v0.6.1)
-└── puppetlabs-stdlib (v9.6.0)
-/etc/puppetlabs/code/modules (no modules installed)
-/opt/puppetlabs/puppet/modules (no modules installed)
-
-For whatever reason when I try to run puppet agent --test --trace --debug --verbose
-I get the following error:
-Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Resource Statement, Unknown resource type: 'registry_value' (file: /etc/puppetlabs/code/environments/production/modules/chocolatey/manifests/install.pp, line: 38, column: 3) on node desktop-pmmjds3.wftigers.org
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet/indirector/catalog/rest.rb:48:in `rescue in find'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet/indirector/catalog/rest.rb:9:in `find'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet/indirector/indirection.rb:230:in `find'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet/configurer.rb:731:in `block in retrieve_new_catalog'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet/util.rb:518:in `block in thinmark'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/3.2.0/benchmark.rb:311:in `realtime'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet/util.rb:517:in `thinmark'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet/configurer.rb:730:in `retrieve_new_catalog'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet/configurer.rb:84:in `retrieve_catalog'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet/configurer.rb:269:in `prepare_and_retrieve_catalog'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet/configurer.rb:430:in `run_internal'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet/configurer.rb:341:in `run'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet/agent.rb:85:in `block (6 levels) in run'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet/context.rb:64:in `override'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet.rb:288:in `override'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet/agent.rb:84:in `block (5 levels) in run'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/3.2.0/timeout.rb:189:in `block in timeout'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/3.2.0/timeout.rb:196:in `timeout'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet/agent.rb:83:in `block (4 levels) in run'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet/agent/locker.rb:23:in `lock'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet/agent.rb:73:in `block (3 levels) in run'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet/agent.rb:164:in `with_client'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet/agent.rb:69:in `block (2 levels) in run'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet/agent.rb:129:in `run_in_fork'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet/agent.rb:68:in `block in run'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet/application.rb:174:in `controlled_run'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet/agent.rb:49:in `run'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet/application/agent.rb:437:in `onetime'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet/application/agent.rb:394:in `block in run_command'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet/context.rb:64:in `override'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet.rb:288:in `override'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet/application/agent.rb:391:in `run_command'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet/application.rb:423:in `block in run'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet/util.rb:706:in `exit_on_fail'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet/application.rb:423:in `run'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet/util/command_line.rb:145:in `run'
-C:/Program Files/Puppet Labs/Puppet/puppet/lib/ruby/vendor_ruby/puppet/util/command_line.rb:79:in `execute'
-C:/Program Files/Puppet Labs/Puppet/puppet/bin/puppet:5:in `<main>'
-Warning: Not using cache on failed catalog
-Error: Could not retrieve catalog; skipping run
-
-I'm sure I'm missing something obvious, but I thought that if the registry module was installed, the resource type should not be unknown.
-","1. It turns out that I needed to reboot the server after I installed the module.  I don't understand why, maybe it is some sort of cataloging that happens on some modules.  I didn't have to restart for chocolatey to be recognized and useable...
-John mentioned it might be environment caching, which I don't recall changing from the default, but that at least points in a direction.
-Thanks, John!
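-If environment caching was indeed the cause, two gentler options than a full reboot are restarting the Puppet Server service or flushing the environment cache through the admin API. A sketch (the curl variant assumes the client certificate is on the puppet-admin-api allowlist and that the default SSL paths are in use):
-sudo systemctl restart puppetserver
-
-# or, without a restart:
-curl -i -X DELETE \
-     --cert /etc/puppetlabs/puppet/ssl/certs/$(hostname -f).pem \
-     --key /etc/puppetlabs/puppet/ssl/private_keys/$(hostname -f).pem \
-     --cacert /etc/puppetlabs/puppet/ssl/certs/ca.pem \
-     https://$(hostname -f):8140/puppet-admin-api/v1/environment-cache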
-",Puppet
-"I'm hosting Rundeck in AWS ECS Fargate - docker image v5.2.0. It runs successfully but I see in the start-up logs a Warning message:
-May 28, 2024 at 17:16 (UTC+1:00)    [2024-05-28T16:16:15,238] WARN rundeckapp.BootStrap - Running in cluster mode without rundeck.primaryServerId set. Please set rundeck.primaryServerId to the UUID of the primary server in the cluster  rundeck
-May 28, 2024 at 17:16 (UTC+1:00)    [2024-05-28T16:16:15,237] WARN rundeckapp.BootStrap - Cluster mode enabled, this server's UUID: a14bc3e6-75e8-4fe4-a90d-a16dcc976bf6    rundeck
-May 28, 2024 at 17:16 (UTC+1:00)    [2024-05-28T16:16:15,235] INFO rundeckapp.BootStrap - loaded configuration: /home/rundeck/etc/framework.properties  rundeck
-
-(surprisingly that UUID always remains the same, even when I change the docker image or change the number of containers from 1 to 2)
-I see the property rundeck.clusterMode.enabled=true in the rundeck-config.properties file, but I can't find an environment variable (or there isn't one), unlike for many other configuration options in Rundeck, to set clusterMode to false.
-I also tried to look at whether ECS Fargate supports something like a postStart hook, like in k8s, but apparently, that feature has been requested (https://github.com/aws/containers-roadmap/issues/952) but is not yet available.
-How could I achieve this? Looking at it just from the ECS point of view, I need to modify a file in the ECS Fargate container after the ECS task runs successfully.
-TIA!
-","1. That doesn't affect the single instance mode (it's considered a ""single node cluster instance"").
-But if you need to set that property to false, rebuild the image using the Remco template like this answer (based on this doc entry):
-In this case the rundeck-config-extra.properties will include this:
-rundeck.clusterMode.enabled={{ getv(""/rundeck/clusterMode/enabled"", ""false"") }}
-
-Build the image: docker compose build
-And then, deploy it: docker compose up
-So, the initialization must omit the cluster mode message:
-[2024-05-28T20:57:46,408] INFO  liquibase.lockservice - Successfully released change log lock
-[2024-05-28T20:57:46,560] INFO  rundeckapp.BootStrap - Starting Rundeck 5.3.0-20240520 (2024-05-20) ...
-[2024-05-28T20:57:46,560] INFO  rundeckapp.BootStrap - using rdeck.base config property: /home/rundeck
-[2024-05-28T20:57:46,568] INFO  rundeckapp.BootStrap - loaded configuration: /home/rundeck/etc/framework.properties
-[2024-05-28T20:57:46,673] INFO  rundeckapp.BootStrap - RSS feeds disabled
-[2024-05-28T20:57:46,673] INFO  rundeckapp.BootStrap - Using jaas authentication
-[2024-05-28T20:57:46,675] INFO  rundeckapp.BootStrap - Preauthentication is disabled
-[2024-05-28T20:57:46,702] INFO  rundeckapp.BootStrap - Rundeck is ACTIVE: executions can be run.
-[2024-05-28T20:57:46,728] WARN  rundeckapp.BootStrap - [Development Mode] Usage of H2 database is recommended only for development and testing
-[2024-05-28T20:57:46,755] INFO  rundeckapp.BootStrap - workflowConfigFix973: applying... 
-[2024-05-28T20:57:46,760] INFO  rundeckapp.BootStrap - workflowConfigFix973: No fix was needed. Storing fix application state.
-[2024-05-28T20:57:47,015] INFO  rundeckapp.BootStrap - Rundeck startup finished in 502ms
-[2024-05-28T20:57:47,019] INFO  rundeckapp.Application - Started Application in 20.393 seconds (JVM running for 21.949)
-
-",Rundeck
-"I am trying to find a way to make rundeck interactive with a slack channel such that someone could send a note to the channel and it would go to Rundeck and run a job with a parameter supplied by the user.
-I have a plugin already which goes in the reverse direction giving status from the rundesk jobs to the slack channel, but I'd also like the reverse.
-Does anyone know of a feature/integration like the above?
-","1. A good way to do that is to create a Rundeck webhook and call it from Slack creating a slash command. Take a look at how Rundeck Webooks works and how to enable interactivity with Slack.
-Also, you have a legacy way to call custom curl commands (to call Rundeck API, here some examples).
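-For instance, a slash-command backend could trigger a job run through the API roughly like this (the token, job UUID and option name are placeholders):
-curl -X POST \
-  -H ""X-Rundeck-Auth-Token: <api-token>"" \
-  -H ""Content-Type: application/json"" \
-  -d '{""options"": {""message"": ""text supplied by the Slack user""}}' \
-  ""https://rundeck.example.com/api/41/job/<job-uuid>/run""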
-",Rundeck
-"I have read through posts which are a couple years old that has information about implementing SSO for Rundeck community version with a pre-auth mode.
-Is there a working example of this? or any new method that has surfaced recently?
-Thanks in advance.
-","1. Short answer: no new methods right now.
-About the examples: You can put a service in front of a web server that passes the headers to Rundeck via pre-auth config. Similar to this or this.
-",Rundeck
-"I'm using terraform to trying to deploy event-hubs to Azure, but I always get this error when I do terraform plan:
-│ Error: making Read request on Azure KeyVault Secret evnhs-d-test-01-tp-test-01-tp-seli: keyvault.BaseClient#GetSecret: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code=""Forbidden"" Message=""The user, group or application 'appid=xxx;oid=xxx;iss=https://sts.windows.net/6f/' does not have secrets get permission on key vault 'kv-d-test-01-tp;location=westeurope'. For help resolving this issue, please see https://go.microsoft.com/fwlink/?linkid=2125287"" InnerError={""code"":""AccessDenied""}
-│ 
-│   with module.eventhub_02.azurerm_key_vault_secret.event_hub_namespace_secrets_send_listen,
-│   on .terraform/modules/eventhub/event-hub/main.tf line 235, in resource ""azurerm_key_vault_secret"" ""event_hub_namespace_secrets_send_listen"":
-│  235: resource ""azurerm_key_vault_secret"" ""event_hub_namespace_secrets_send_listen"" {
-│ 
-
-I have 2 repositories, one with the modules and another where I deploy the Azure infrastructure.
-I have a module called events-hub with this main.tf:
-# Event Hub namespace to store Event Hubs
-resource ""azurerm_eventhub_namespace"" ""event_hub_namespace"" {
-  name                 = format(""evnhs-%s"", var.event_hub_namespace_name)
-  location             = var.location
-  resource_group_name  = var.resource_group_name
-  sku                  = var.event_hub_namespace_sku
-  capacity             = var.event_hub_namespace_capacity
-  auto_inflate_enabled = false
-
-  lifecycle {
-    ignore_changes = [auto_inflate_enabled]
-  }
-
-  tags = var.tags
-}
-
-...
-
-}
-
-# Defines a namespace-level authorization rule for send and listen operations.
-resource ""azurerm_eventhub_namespace_authorization_rule"" ""evhns_auth_rule"" {
-  name                = format(""send-list-%s"", var.event_hub_namespace_name)
-  namespace_name      = format(""evhns-%s"", var.event_hub_namespace_name)
-  resource_group_name = var.resource_group_name
-
-  listen     = true
-  send       = true
-  manage     = false
-  depends_on = [azurerm_eventhub_namespace.event_hub_namespace]
-}
-
-resource ""azurerm_key_vault_secret"" ""event_hub_namespace_secrets_send_listen"" {
-  name         = format(""%s-seli"", azurerm_eventhub_namespace.event_hub_namespace.name)
-  value        = azurerm_eventhub_namespace_authorization_rule.evhns_auth_rule.primary_connection_string
-  key_vault_id = var.keyvault_id
-
-  depends_on = [azurerm_eventhub_namespace_authorization_rule.evhns_auth_rule]
-}
-
-In my other repository I have a key-vault.tf that sets up the necessary permissions for deployment, and another file called event-hub.tf:
-
-module ""eventhub"" {
-  source = ""git::git@github.com:...""
-
-  keyvault_id = module.kv_data_02.vault_id
-
-  event_hub_namespace_name     = format(""%s-test-01-tp"", var.environment)
-  event_hub_namespace_sku      = ""Standard""
-  event_hub_namespace_capacity = ""5""
-  location                     = var.default_location
-  resource_group_name          = var.default_resource_group
-  tags                         = merge(var.tags, local.eventhub_tags)
-
-  event_hubs_with_capture = [
-    {
-      resource_group_name = var.default_resource_group
-      eventhub_name       = ""raw-logsin""
-      ...
-      archive_name_format = ""{Namespace}/{EventHub}/captured/{Year}_{Month}_{Day}_{Hour}_{Minute}_{Second}_{PartitionId}""
-    },
-  ]
-
-  event_hubs_without_capture = []
-
-  log_analytics_workspace_id = module.log_analytics.resource_id
-}
-
-
-
-
-I have owner permission on my key vault and I think it's fine... I also don't understand why the name comes out as evnhs-d-test-01-tp-test-01-tp-seli; shouldn't it be evnhs-d-test-01-tp-seli? How can I correct the error above?
-I know this is something simple and that it must be right in front of my eyes, but I'm not seeing it.
-I've already checked whether there was a problem with event_hub_namespace_name (whether it was being repeated somewhere), and I confirmed that my key vault looks fine with the necessary permissions...
-
-My key-vault.tf is something like this:
-module ""kv_data_01"" {
-
-  source               = ""git::git@github.com:...""
-  resource_group_name  = var.default_resource_group
-  kv_name              = format(""kv-%s-test-data-01-tp"", var.environment) 
-  location             = var.default_location
-  environment          = var.environment
-  key_vault_secrets    = []
-  kv_tenant_id         = var.sp_tenantid
-  kv_sku_name          = ""standard""
-  tags                 = merge(local.kv_test_data_tags, var.tags)
-
-   access_policies_list = [
-    {
-      object_id : var.kv_test_data_01_default_access[var.environment],
-      key_permissions         = [""Get"", ""List"", ""Create"", ""Encrypt"", ""Decrypt"", ""Update""],
-      secret_permissions      = [""Get"", ""List"", ""Purge"", ""Set"", ""Delete""],
-      storage_permissions     = [""Get"", ""List""],
-      certificate_permissions = [""Backup"", ""Create"", ""Delete"", ""Get"", ""List"", ""Import"", ""Purge"", ""Recover"", ""Restore"", ""Update""]
-    }
-    ,
-    {
-      object_id : var.adsg-iot-team, 
-      key_permissions         = [""Get"", ""List"", ""Create"", ""Encrypt"", ""Decrypt"", ""Update""],
-      secret_permissions      = [""Get"", ""List"", ""Purge"", ""Set"", ""Delete""],
-      storage_permissions     = [""Get"", ""List""],
-      certificate_permissions = [""Backup"", ""Create"", ""Delete"", ""Get"", ""List"", ""Import"", ""Purge"", ""Recover"", ""Restore"", ""Update""]
-    }
-    ,
-    {
-      object_id : var.sp_objid # service principal that deploys objects
-      key_permissions         = [""Get"", ""List"", ""Create"", ""Encrypt"", ""Decrypt"", ""Update""],
-      secret_permissions      = [""Get"", ""List"", ""Purge"", ""Set"", ""Delete"", ""Recover"", ""Restore""],
-      storage_permissions     = [""Get"", ""List""],
-      certificate_permissions = [""Backup"", ""Create"", ""Delete"", ""Get"", ""List"", ""Import"", ""Purge"", ""Recover"", ""Restore"", ""Update""]
-    }
-    ,
-    {
-      object_id : var.adsg-owners,
-      key_permissions         = [""Get"", ""List"", ""Create"", ""Encrypt"", ""Decrypt"", ""Update""],
-      secret_permissions      = [""Get"", ""List"", ""Purge"", ""Set"", ""Delete"", ""Recover"", ""Restore""],
-      storage_permissions     = [""Get"", ""List""],
-      certificate_permissions = [""Backup"", ""Create"", ""Delete"", ""Get"", ""List"", ""Import"", ""Purge"", ""Recover"", ""Restore"", ""Update""]
-    }
-    ,
-   ]
-
-   log_analytics_workspace_id = module.log_analytics.resource_id
-}
-
-
-","1. 
-Message=""The user, group or application 'appid=xxx;oid=xxx;iss=https://sts.windows.net/6f/' does not have secrets get permission:
-
-The above issue occurs when the identity you are logged in with in the current environment doesn't have the Get permission needed to read secrets from the key vault. You need to set that identity's tenant_id and object_id in an access policy on the key vault, as shown below.
-Refer: data ""azurerm_client_config"" ""current"" {}
-azurerm_key_vault
-tenant_id = data.azurerm_client_config.current.tenant_id
-object_id = data.azurerm_client_config.current.object_id
-
-I tried the same code as yours, and just adding the above lines made it work for me as expected.
-
-
-
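-For completeness, a standalone access policy granting the deploying identity secret permissions might look roughly like the sketch below (the module output name is taken from your configuration; the permission list is illustrative). If your key vault module already manages access policies itself, add the deployer entry to its access_policies_list instead, so the two don't fight over the same policy.
-data ""azurerm_client_config"" ""current"" {}
-
-resource ""azurerm_key_vault_access_policy"" ""deployer"" {
-  key_vault_id = module.kv_data_02.vault_id
-  tenant_id    = data.azurerm_client_config.current.tenant_id
-  object_id    = data.azurerm_client_config.current.object_id
-
-  secret_permissions = [""Get"", ""List"", ""Set"", ""Delete"", ""Purge"", ""Recover""]
-}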
-",Terraform
-"I am using this main.tf module in my Terraform script to create an Azure Container App. I need to pass in a complex object with all the values needed to create a Container App.
-In the main.tf file of this module, the complex object is iterated over and populates the secrets dynamic block. Notice that it will loop over the secrets list and populate each name/value pair.
-resource ""azurerm_container_app"" ""container_app"" {
-  for_each                     = {for app in var.container_apps: app.name => app}
-
-  name                         = each.key
-  ...
-  template {
-    dynamic ""container"" {
-      ...
-      }
-    }
-    ...
-  }
-
-  ...
-
-  dynamic ""secret"" {
-    for_each                     = each.value.secrets != null ? [each.value.secrets] : []
-    content {
-      name                       = secret.value.name
-      value                      = secret.value.value
-    }
-  }
-...
-
-In the variables.tf file for the container_app module, the format of these parameters is specified. Notice that it wants a list of objects (with attributes name and value).
-variable ""container_apps"" {
-  description = ""Specifies the container apps in the managed environment.""
-  type = list(object({
-    name                           = string
-    ...
-    secrets                        = optional(list(object({
-      name                         = string
-      value                        = string
-    })))
-    ...
-    template                       = object({
-      containers                   = list(object({
-        ...
-
-I want to specify the list of secrets to pass. Here is how I am calling the module.
-module ""container_app"" {
-  source         = ""./modules/container_app""
-  location       = var.location
-  ...
-  container_apps = [ 
-    {
-      name = ""api""
-      ...
-      configuration = {
-        ...
-      }
-      secrets = [
-        {
-          name = ""azure-openai-api-key"",
-          value = module.cognitive_services.azure_cognitive_services_key
-        },
-        {
-          name = ""container-registry-admin-secret"",
-          value = module.container_registry.container_registry_admin_password
-        }
-      ]
-      ...
-      template = {
-        containers = [
-          ...
-
-My complex object includes variables, references to other modules' outputs, etc. Notice that the secrets object is a list of objects (with attributes name and value).
-However, this results in an error when I try to apply the Terraform.
-  ╷
-  │ Error: Unsupported attribute
-  │
-  │   on modules\container_app\main.tf line 88, in resource ""azurerm_container_app"" 
-  ""container_app"":
-  │   88:       name                       = secret.value.name
-  │     ├────────────────
-  │     │ secret.value is list of object with 2 elements
-  │
-  │ Can't access attributes on a list of objects. Did you mean to access attribute ""name"" 
-  for a specific element of the list, or across all elements of the list?
-  ╵
-  ╷
-  │ Error: Unsupported attribute
-  │
-  │   on modules\container_app\main.tf line 89, in resource ""azurerm_container_app"" 
-  ""container_app"":
-  │   89:       value                      = secret.value.value
-  │     ├────────────────
-  │     │ secret.value is list of object with 2 elements
-  │
-  │ Can't access attributes on a list of objects. Did you mean to access attribute ""value"" 
-  for a specific element of the list, or across all elements of the list?
-  ╵ 
-
-I don't know how to pass in the correct list of objects such that this module can then add those values as secrets in the Container App specification.
-","1. You don't need square brackets around [each.value.secrets] as you create a list of lists. It should be:
-for_each                     = each.value.secrets != null ? each.value.secrets : []
-
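-To see why, with the two secrets from your example (values shortened), the two expressions evaluate to:
-# [each.value.secrets] -> [[{name = ""azure-openai-api-key"", ...}, {name = ""container-registry-admin-secret"", ...}]]
-#   a single element holding the whole list, so secret.value is a list and has no .name attribute
-# each.value.secrets   -> [{name = ""azure-openai-api-key"", ...}, {name = ""container-registry-admin-secret"", ...}]
-#   one element per secret, so secret.value.name and secret.value.value resolve as intended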
-",Terraform
-"I am trying to find out what is the distribution of values of purchases done by consumers. It is Zero Inflated, as most of the consumers do not make any purchase in a given time constrain. I use python. As it is the value of items bought, my data is not desecrated like in Poisson distribution, but always nonnegative and continuous, which might mean log-normal, exponential, gamma, inverse gamma, etc. distributions
-My question boils down to how to fit distributions to Zero Inflated data and check which fits better?
-I have found quite a lot of information on how to make Zero Inflated Poisson regression, but my aim is to find out what is the underlying distribution of the process, not to make predictions, as I want to know the variance.
-What are unknown:
-
-Probability of inflated zeros - not all zeros are inflated, as they might also be a result of the underlying distribution
-What is the family of distribution generating the purchase values
-What  are the parameters of distribution generating the purchase values
-
-I have created a sample code to generate example data and my attempt to fit two distributions.
-Unfortunately, SSE for the true one is higher than for the alternative.
-import numpy as np 
-import pandas as pd
-import scipy
-from scipy import stats 
-import matplotlib.pyplot as plt
-
-
-N = 1000 * 1000
-p_of_inflated_zeros = 0.20
-
-#generation of data
-Data = pd.DataFrame({""Prob_bought"" : np.random.uniform(0, 1, N) })
-Data[""If_bought""] = np.where(Data[""Prob_bought""] > p_of_inflated_zeros , 1 , 0)
-Data[""Hipotetical_purchase_value""] = scipy.stats.expon.rvs(scale = 50, loc = 10, size = N) 
-#Data[""Hipotetical_purchase_value""] = scipy.stats.lognorm.rvs(s = 1, scale = 50, loc = 10, size = N)
-Data[""Hipotetical_purchase_value""] = np.where(Data[""Hipotetical_purchase_value""] < 0 ,0 , Data[""Hipotetical_purchase_value""]) 
-Data[""Purchase_value""] = Data[""If_bought""]  * Data[""Hipotetical_purchase_value""] 
-
-# fit distribiution
-# based on https://stackoverflow.com/questions/6620471/fitting-empirical-distribution-to-theoretical-ones-with-scipy-python
-#create 
-#x = np.linspace(min(gr_df_trans_tmp), max(gr_df_trans_tmp), 200)
-y, x = np.histogram(Data[""Purchase_value""], bins = 1000, density = True)
-x = (x + np.roll(x, -1))[:-1] / 2.0
-
-#lognormal
-FIT_lognorm_sape, FIT_lognorm_loc, FIT_lognorm_scale = scipy.stats.lognorm.fit(Data[""Purchase_value""])  
-FIT_lognorm_pdf = scipy.stats.lognorm.pdf(x, s = FIT_lognorm_sape, loc = FIT_lognorm_loc, scale = FIT_lognorm_scale)
-SSE_lognorm = np.sum(np.power(y - FIT_lognorm_pdf, 2.0))
-print(SSE_lognorm)
-# 0.036408827144038584
-
-#exponental
-FIT_expo_loc, FIT_expo_scale = scipy.stats.expon.fit(Data[""Purchase_value""])  
-FIT_expo_pdf = scipy.stats.expon.pdf(x, FIT_expo_loc, FIT_expo_scale)
-SSE_expo = np.sum(np.power(y - FIT_expo_pdf, 2.0))
-print(SSE_expo)
-# 0.07564960702319487
-
-# chart
-# wykres histogram
-axes = plt.gca()
-axes.set_xlim([-2, 200])
-plt.hist(Data[""Purchase_value""], bins = 1000, alpha = 1, density = True)
-    
-# Plot the PDFs
-plt.plot(x, FIT_lognorm_pdf, 'k', linewidth = 1, alpha = 0.5, color = 'red', label = 'lognormal')  
-plt.plot(x, FIT_expo_pdf,    'k', linewidth = 1, alpha = 0.5, color = 'blue', label = 'exponental')   
-plt.legend(loc='upper right', title = """")
-
-plt.title(""Fitting distribiution to ilustrativ data"")
-plt.xlabel(""Hipotetical purchase value"")
-plt.ylabel('Density')
-
-
-","1. The distribution of data, with many zeros and a long right tail, suggests that the data may be best modeled using a mixture model that can accommodate both the point mass at zero and the continuous part of the distribution. Commonly used models for such data include:
-Zero-Inflated Models: Models like Zero-inflated Poisson (ZIP) or zero-inflated negative binomial (ZINB) are the best choices when dealing with count data. However, for continuous data, a zero-inflated Gaussian might be more recommended.
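-A minimal sketch of that two-part approach, reusing the Data frame from the question: estimate the zero probability empirically, fit each candidate distribution only to the positive values, and compare the fits by AIC rather than by SSE on histogram bins:
-import numpy as np
-from scipy import stats
-
-values = Data['Purchase_value'].to_numpy()
-positive = values[values > 0]
-p_zero = 1.0 - positive.size / values.size   # point-mass (zero) probability
-
-candidates = {'expon': stats.expon, 'gamma': stats.gamma, 'lognorm': stats.lognorm}
-for name, dist in candidates.items():
-    params = dist.fit(positive)              # fit the continuous part only
-    loglik = np.sum(dist.logpdf(positive, *params))
-    aic = 2 * len(params) - 2 * loglik
-    print(name, 'params:', params, 'AIC:', round(aic, 1))
-
-print('estimated zero probability:', round(p_zero, 3))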
-",Distribution
-"I'm trying to calculate the SPI from CHIRPS monthly mean precipitation data, because it's too large I cut it down to my area of interest and here it is: https://www.dropbox.com/s/jpwcg8j5bdc5gq6/chirps_mensual_v1.nc?dl=0 
-I did this to open it:
-require(utils)
-require(colorRamps)
-require(RNetCDF)
-require(rasterVis)
-require(rgdal)
-library(ncdf4)
-library(raster)
-
-
-datos2 <- nc_open(""Datos/chirps_mensual_v1.nc"")
-ppt_array <- ncvar_get(datos2, ""precip"")
-
-#I'm only taking complete years so I took out two months from 2018
-
-ppt_mes <- ppt_array[ , ,1:444]
-
-I know there is an SPI library, but I don't know how I should format the data in order to use it. So I tried to do it without the function by fitting the gamma distribution myself, but I don't know how to do that for this dataset.
-Does anyone know how to calculate the SPI, either with the function or by fitting the distribution?
-","1. I don't think the SPI package is doing what you (or anyone) thinks it is doing.  If you use debug(spi) and step through the code, you'll see that in one step it fits a empirical cumulative distribution function (with ecdf()) to the first two and last rows of data.  Why the first two and last rows?  I have no clue, but whoever wrote this package also used a for loop to do t() to a matrix.  Not to mention that I think it should use a Gamma distribution or Pearson III distribution not ecdf() (according to Guttman, N.B. (1999) Accepting the standardized precipitation index: a calculation algorithm. JAWRA Journal of the American Water Resources Association, 35, 311–322.).
-
-2. In the end I did it using the SPI library; the result is a value for each month at each grid point. If you want to calculate the value over a specific area, I did that too; see the EDIT at the end.
-Also, this one uses CRU data, but you can adjust it:
-#spei cru 1x1
-rm(list=ls(all=TRUE)); dev.off()
-
-require(utils)
-require(RNetCDF)
-require(rasterVis)
-require(rgdal)
-library(ncdf4)
-require(SPEI)
-
-########################################################################################################
-
-
-prec <- open.nc(""pre_mensual.nc"")
-
-lon <- length(var.get.nc(prec, ""lon""))
-lat <- length(var.get.nc(prec, ""lat""))
-lon1 <- var.get.nc(prec, ""lon"")
-lat1 <- var.get.nc(prec, ""lat"")
-ppt  <- var.get.nc(prec, ""pre"") 
-ppt  <- ppt[ , ,109:564] #31 18 456 (1980-2017)
-anio = 456/12
-
-###########################################################################################################
-#Reshape data 
-
-precip <- sapply(1:dim(ppt)[3], function(x)t(ppt[,,x]))
-
-############################################################################################################
-#This is for SPI-6, you can use either of them
-
-spi_6 <- array(list(),(lon*lat))
-
-for (i in 1:(lon*lat)) {
-  spi_6[[i]] <- spi(precip[i,], scale=6, na.rm=TRUE)
-}
-#############################################################################################################
-#Go back to an array form
-
-sapply(spi_6, '[[',2 )->matriz_ppt 
-ppt_6 <- array(aperm(matriz_ppt, c(2,1),c(37,63,456)));spi_c <- array(t(ppt_6), dim=c(37,63,456))
-#############################################################################################################
-    #Save to netcdf
-
-for(i in 1:456) { 
-  nam <- paste(""SPI"", i, sep = """")
-  assign(nam,raster((spi_c[ , ,i]), xmn=min(lon1), xmx=max(lon1), ymn=min(lat1), ymx=max(lat1), crs=CRS(""+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs+ towgs84=0,0,0"")) )
-}
-
-gpcc_spi <- stack(mget(paste0(""SPI"", 1:456)))
-
-outfile <- ""spi6_cru_1980_2017.nc""
-crs(gpcc_spi) <- ""+proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0"" 
-writeRaster(gpcc_spi, outfile, overwrite=TRUE, format=""CDF"", varname=""SPEI"", varunit=""units"",longname=""SPEI CRU"", xname=""lon"", yname=""lat"")
-
-It's not the most stylish way to calculate it but it does work. :)
-EDIT: If you want to calculate the SPI/SPEI over an area this is what I did:
-library(SPEI)
-library(ncdf4)
-library(raster)
-#
-
-pre_nc <- nc_open(""pre_1971_2017_Vts4.nc"")
-pre <- ncvar_get(pre_nc, ""pre"")
-pre <- pre[, , 109:564] #This is for the time I'm interested in
-lats <- ncvar_get(pre_nc, ""lat"")
-lons <- ncvar_get(pre_nc, ""lon"")
-times <- 0:467
-
-# Read mask
-
-#This is a mask you need to create that adjusts to your region of interest
-#It consist of a matrix of 0's and 1's, the 1's are placed in the area
-#you are interested in
-
-mask1 <- nc_open(""cuenca_IV_CDO_05_final.nc"")
-m1 <- ncvar_get(mask1, ""Band1"")
-m1[m1 == 0] <- NA
-#
-# Apply mask to data
-#
-pre1 <- array(NA, dim=dim(pre))
-
-#
-for(lon in 1:length(lons)){
-  for(lat in 1:length(lats)){
-    pre1[lon,lat,] <- pre[lon,lat,]*m1[lon,lat]
-  } 
-}
-
-#
-# Mean over the area of interest
-#
-mean_pre1 <- apply(pre1,c(3),mean, na.rm=TRUE)
-
-# Calculate SPI/SPEI
-
-spi1 <- matrix(data= NA, nrow = 456, ncol = 48)
-for (i in 1:48) {
-  spi1[,i] <- spi(data=ts(mean_pre1,freq=12),scale= i)$fitted
-}
-
-#This calculates SPI/SPEI-1 to SPI/SPEI-48, you can change it
-# Save
-#
-write.table(spi1,'spi_1980_2017.csv',sep=';',row.names=FALSE)
-
-",Distribution
-"I want an alternative to this Matlab function in Python 
-evrnd(mu,sigma,m,n)
-I think we can use something like this:
-numpy.random.gumbel
-or just 
-numpy.random.uniform
-Thanks in advance. 
-","1. Matlab's evrnd generates random variates from the Gumbel distribution, also known as the Type I extreme value distribution.  As explained in that link,
-
-The version used here is suitable for modeling minima; the mirror image of this distribution can be used to model maxima by negating R.
-
-You can use NumPy's implementation of the Gumbel distribution, but it uses the version of the distribution that models maxima, so you'll have to flip the values around the location (i.e. mu) parameter.
-Here's a script containing the Python function evrnd. The plot that it generates is below.
-import numpy as np
-
-
-def evrnd(mu, sigma, size=None, rng=None):
-    """"""
-    Generate random variates from the Gumbel distribution.
-
-    This function draws from the same distribution as the Matlab function
-
-        evrnd(mu, sigma, n)
-
-    `size` may be a tuple, e.g.
-
-    >>> evrnd(mu=3.5, sigma=0.2, size=(2, 5))
-    array([[3.1851337 , 3.68844487, 3.0418185 , 3.49705362, 3.57224276],
-           [3.32677795, 3.45116032, 3.22391284, 3.25287589, 3.32041355]])
-
-    """"""
-    if rng is None:
-        rng = np.random.default_rng()
-    x = -rng.gumbel(loc=-mu, scale=sigma, size=size)
-    return x
-
-
-if __name__ == '__main__':
-    import matplotlib.pyplot as plt
-
-    mu = 10
-    sigma = 2.5
-    n = 20000
-
-    x = evrnd(mu, sigma, n)
-
-    # Plot the normalized histogram of the sample.
-    plt.hist(x, bins=100, density=True, alpha=0.7)
-    plt.grid(alpha=0.25)
-    plt.show()
-
-
-
-If you are already using SciPy, an alternative is to use the rvs method of scipy.stats.gumbel_l.  The SciPy distribution scipy.stats.gumbel_l implements the Gumbel distribution for minima,
-so there is no need to flip the results returned by the rvs method.
-For example,
-from scipy.stats import gumbel_l                                      
-
-
-mu = 10
-sigma = 2.5
-n = 20000
-
-x = gumbel_l.rvs(loc=mu, scale=sigma, size=n)
-
-
-",Distribution
-"I am running a K8s Cluster on GKE on which I've deployed harbor and i've made it publically available on a specific domain: ""harb.pop.com"".
-I've created a kubernetes deployment that uses an image from this registry and configured it like below:
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: jupiter
-  labels:
-    app: jupiter
-spec:
-  replicas: 1
-  selector:
-    matchLabels:
-      app: jupiter
-  template:
-    metadata:
-      labels:
-        app: jupiter
-    spec:
-      #serviceAccountName: build-robot
-      containers:
-      - name: jupiter
-        image: harb.pop.com/sam-test-project/jupiter-app:latest
-        ports:
-        - containerPort: 80
-      imagePullSecrets:
-      - name: harborloginsecret
-This deployment uses imagePullSecrets with the secret below to authenticate with Harbor:
-apiVersion: v1
-kind: Secret
-metadata:
-  name: harborloginsecret
-type: Opaque
-data:
-  username: YWRtaW4= #admin
-  password: SGFyYm9yMTIzNDU= #Harbor12345
-This is an admin account I can log in to the publicly available Harbor registry with, and I can confirm that the project and image repository exist and contain images. However, when the pod tries to deploy, I keep getting this error:
-Failed to pull image ""harb.pop.com/sam-test-project/jupiter-app"": failed to pull and unpack image ""harb.pop.com/sam-test-project/jupiter-app:latest"": failed to resolve reference ""harb.pop.com/sam-test-project/jupiter-app:latest"": unexpected status from HEAD request to https://harb.pop.com/v2/sam-test-project/jupiter-app/manifests/latest: 401 Unauthorized
-I've also tried creating a robot account in Harbor with full permissions on everything, but still can't get it to authenticate properly. Do I need to add any GCP IAM permissions, given the Harbor backend is a GCS bucket? Or do I need to grant the deployment a GKE service account, as the pod is deployed in a different namespace to my Harbor pods? I have no idea why this isn't working, so any help would be appreciated.
-","1. Your secret type is wrong. You need to create secret which is type should be kubernetes.io/dockerconfigjson
-one way to do that is like this.
-kubectl create secret generic harborloginsecret \
-      --from-file=.dockerconfigjson=<.docker/config.json> \
-      --type=kubernetes.io/dockerconfigjson
-
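-Alternatively, kubectl can build the dockerconfigjson for you. Note that the secret must live in the same namespace as the Deployment that references it (the credentials below are the ones from your question; the namespace is a placeholder):
-kubectl create secret docker-registry harborloginsecret \
-  --docker-server=harb.pop.com \
-  --docker-username=admin \
-  --docker-password=Harbor12345 \
-  -n <namespace-of-the-deployment>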
-",Harbor
-"have a pipeline for installation/deploying db cluster Patroni(Postgress). Cheerfully had been designing my pipeline and stack with kinda type of error below.
-Most obvious remarks:
-
-And yes, I had logged in to my remote registry
-Harbor had been deployed with the official Helm chart
-3 Docker images have pulled successfully
-I keep my credentials in Jenkins:
-
-withCredentials([usernamePassword(credentialsId: 'HarborCredentials',
-                 passwordVariable: 'PASSWORD_VAR',
-                 usernameVariable: 'USERNAME')]) {
-                    sh ""helm registry login -u $USERNAME -p $PASSWORD_VAR mydomain""
-}
-
-QUESTION:
-How can I stay logged in to the registry to pull/push my Helm charts? Piping with ""|"" doesn't work.
-GOAL:
-A good solution for this issue; ideally I'd like to find several approaches.
-ERROR:
-Login Succeeded
-helm pull oci://mydomain/helm-charts/pxc-operator --version 1.14.0
-Error: pulling from host mydomain failed with status code [manifests 1.14.0]: 401 Unauthorized
-
-As described above.
-","1. Solved this issue by upgrading helm version and pedantically observing path  oci://[you_domain]/[your_repo]/[your_chart] - exclude last if you have to push
-Also you can use Robot Account for it simplification
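-With a recent Helm (3.8 or later, which has stable OCI support) the flow looks roughly like this, reusing the registry and project names from the question (the chart name is a placeholder):
-helm registry login -u <robot-account-name> mydomain
-helm push mychart-0.1.0.tgz oci://mydomain/helm-charts
-helm pull oci://mydomain/helm-charts/mychart --version 0.1.0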
-",Harbor
-"I have set up harbor behind an nginx reverse-proxy, and when I push docker images to it, I get this error
-5bfe9e7b484e: Retrying in 1 second
-b59ac5e5fd8f: Retrying in 1 second
-3a69b46a1ce6: Retrying in 1 second
-473959d9af57: Retrying in 1 second
-fe56a5801ec1: Retrying in 1 second
-1f34686be733: Waiting
-691621f13fd5: Waiting
-bf27900f6443: Waiting
-01fd502f0720: Waiting
-first path segment in URL cannot contain colon
-
-What could be causing this error? There is no error in the nginx or Harbor containers. Thanks!
-","1. I solved this issue by adding the following to my nginx config (the Host header is what made it work, the others are also needed).
-proxy_set_header  Host              $http_host;
-proxy_set_header  X-Real-IP         $remote_addr;
-proxy_set_header  X-Forwarded-For   $proxy_add_x_forwarded_for;
-proxy_set_header  X-Forwarded-Proto $scheme;
-
-",Harbor
-"I am attempting to make a secured repo for our internal docker registry. Github has a ready to go docker-compose however it is using MariaDB and Postgres as highlighted below. 
-What would be the best practice to utilize the same informix container to run 2 databases for the frontend and backend support of Portus & Docker Registry.
-I feel I have to post the entire docker-compose yaml for context. I am also not clear on if i really need Clair for anything. 
-I am running this on a Open SUSE Leap 15 system. Thank you!
-I have been messing around with this and as its written the registry and portus will not connect for some reason, but the underlining Databases seem to work fine and those are a bigger concern at this moment. 
-version: '2'
-
-services:
-  portus:
-    build: .
-    image: opensuse/portus:development
-    command: bundle exec rails runner /srv/Portus/examples/development/compose/init.rb
-    environment:
-      - PORTUS_MACHINE_FQDN_VALUE=${MACHINE_FQDN}
-      - PORTUS_PUMA_HOST=0.0.0.0:3000
-      - PORTUS_CHECK_SSL_USAGE_ENABLED=false
-      - PORTUS_SECURITY_CLAIR_SERVER=http://clair:6060
-
-      - CCONFIG_PREFIX=PORTUS
-
-      - PORTUS_DB_HOST=db
-      - PORTUS_DB_PASSWORD=portus
-      - PORTUS_DB_POOL=5
-
-      - RAILS_SERVE_STATIC_FILES=true
-    ports:
-      - 3000:3000
-    depends_on:
-      - db
-    links:
-      - db
-    volumes:
-      - .:/srv/Portus
-
-  background:
-    image: opensuse/portus:development
-    entrypoint: bundle exec rails runner /srv/Portus/bin/background.rb
-    depends_on:
-      - portus
-      - db
-    environment:
-      - PORTUS_MACHINE_FQDN_VALUE=${MACHINE_FQDN}
-      - PORTUS_SECURITY_CLAIR_SERVER=http://clair:6060
-
-      # Theoretically not needed, but cconfig's been buggy on this...
-      - CCONFIG_PREFIX=PORTUS
-
-      - PORTUS_DB_HOST=db
-      - PORTUS_DB_PASSWORD=portus
-      - PORTUS_DB_POOL=5
-    volumes:
-      - .:/srv/Portus
-    links:
-      - db
-
-  webpack:
-    image: kkarczmarczyk/node-yarn:latest
-    command: bash /srv/Portus/examples/development/compose/bootstrap-webpack
-    working_dir: /srv/Portus
-    volumes:
-      - .:/srv/Portus
-
-  clair:
-    image: quay.io/coreos/clair:v2.0.2
-    restart: unless-stopped
-    depends_on:
-      - postgres
-    links:
-      - postgres
-    ports:
-      - ""6060-6061:6060-6061""
-    volumes:
-      - /tmp:/tmp
-      - ./examples/compose/clair/clair.yml:/clair.yml
-    command: [-config, /clair.yml]
-
- **db:
-    image: library/mariadb:10.0.23
-    command: mysqld --character-set-server=utf8 --collation-server=utf8_unicode_ci --init-connect='SET NAMES UTF8;' --innodb-flush-log-at-trx-commit=0
-    environment:
-      MYSQL_ROOT_PASSWORD: portus**
-
- **postgres:
-    image: library/postgres:10-alpine
-    environment:
-      POSTGRES_PASSWORD: portus**
-
-  registry:
-    image: library/registry:2.6
-    environment:
-      REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /registry_data
-      REGISTRY_STORAGE_DELETE_ENABLED: ""true""
-
-      REGISTRY_HTTP_ADDR: 0.0.0.0:5000
-      REGISTRY_HTTP_DEBUG_ADDR: 0.0.0.0:5001
-
-      REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE: /etc/docker/registry/portus.crt
-
-      REGISTRY_AUTH_TOKEN_REALM: http://${MACHINE_FQDN}:3000/v2/token
-      REGISTRY_AUTH_TOKEN_SERVICE: ${MACHINE_FQDN}:${REGISTRY_PORT}
-      REGISTRY_AUTH_TOKEN_ISSUER: ${MACHINE_FQDN}
-
-      REGISTRY_NOTIFICATIONS_ENDPOINTS: >
-        - name: portus
-          url: http://${MACHINE_FQDN}:3000/v2/webhooks/events
-          timeout: 2000ms
-          threshold: 5
-          backoff: 1s
-    volumes:
-      - /registry_data
-      - ./examples/development/compose/portus.crt:/etc/docker/registry/portus.crt:ro
-    ports:
-      - ${REGISTRY_PORT}:5000
-      - 5001:5001
-    links:
-      - portus
-
-The databases seem to run fine, but I am still what I would consider a novice with docker-compose and Informix on the setup side. 
-Any pointers or documentation recommendations would be most helpful as well.
-","1. unfortunately, Portus does not support informix DB. see this link 
-",Portus
-"Deploying Portus in GCP with an Nginx Ingress load balancer implemented. Portus loads up just fine but when trying to use the application and fill out some of the forms I get the following error: 
-
-VM798:1 Mixed Content: The page at
-  'https://staging.foo.bar/admin/registries/new' was loaded over HTTPS,
-  but requested an insecure XMLHttpRequest endpoint
-  'http://staging.foo.bar//api/v1/registries/validate?name=devreg&hostname=staging-foo-barregistry%3A5000&external_hostname=&use_ssl=false&force=false&only%5B%5D=hostname'.
-  This request has been blocked; the content must be served over HTTPS.
-
-Nginx configuration: https://github.com/kubic-project/caasp-services/blob/master/contrib/helm-charts/portus/templates/nginx-configmap.yaml
-Environment:  
-
-Kubernetes in GCP
-all resources deployed through helm
-ssl is provided by kube-lego
-Rails app with Grape API gem
-Grape mounts the api as follows: mount API::RootAPI => ""/""
-
-So I've made sure to check the code for manual http calls and didn't see anything. I've also spent a day now digging through the Rails and nginx docs to figure out why parts of the app load properly over SSL while the API does not follow the same rules.
------ Update 1 ------
-Upon further investigation, it looks like it has something to do with Vue validator. Checking the developer tools revealed the following:
-
-curl
-  'http://staging.foo.bar//api/v1/registries/validate?name=devreg&hostname=st&external_hostname=&use_ssl=false&force=false&only%5B%5D=name'
-  -X OPTIONS -H 'Access-Control-Request-Method: GET' -H 'Origin: https://staging.foo.bar' -H 'Access-Control-Request-Headers:
-  x-csrf-token' --compressed
-
-And it looks like the root url is being called here: 
-javascript:
-      window.API_ROOT_URL = '#{root_url}';
-
-root_url is set to / as mentioned above. 
-However, analyzing the Vue code more closely reveals:  
-Vue.http.options.root = window.API_ROOT_URL;
-
-Vue.http.interceptors.push((_request, next) => {
-  window.$.active = window.$.active || 0;
-  window.$.active += 1;
-
-  next(() => {
-    window.$.active -= 1;
-  });
-});
-
-Vue.http.interceptors.push((request, next) => {
-  if ($.rails) {
-    // eslint-disable-next-line no-param-reassign
-    request.headers.set('X-CSRF-Token', $.rails.csrfToken());
-  }
-  next();
-});
-
-// we are not a SPA and when user clicks on back/forward
-// we want the page to be fully reloaded to take advantage of
-// the url query params state
-window.onpopstate = function (e) {
-  // phantomjs seems to trigger an oppopstate event
-  // when visiting pages, e.state is always null and
-  // in our component we set an empty string
-  if (e.state !== null) {
-    window.location.reload();
-  }
-};
-
-Vue.config.productionTip = process.env.NODE_ENV !== 'production';
-
-Params are set to use SSL in the query 
-params do
-          requires :name,
-                   using: API::Entities::Registries.documentation.slice(:name)
-          requires :hostname,
-                   using: API::Entities::Registries.documentation.slice(:hostname)
-          optional :external_hostname,
-                   using: API::Entities::Registries.documentation.slice(:external_hostname)
-          requires :use_ssl,
-                   using: API::Entities::Registries.documentation.slice(:use_ssl)
-          optional :only, type: Array[String]
-        end
-
-","1. I'm not sure about how your app works, and the mechanics of what data is being passed where, but I suspect you might need to be passing use_ssl=true in the querystring parameter to your /validate endpoint. 
-Currently, use_ssl=false is being passed, which is likely returning a non-SSL response.
-",Portus
-"Im was trying out the VAqua Look and Feel on MacOS Catalina, its included in classpath and I call it like this
-UIManager.setLookAndFeel(""org.violetlib.aqua.AquaLookAndFeel"");
-
-but then my code fails with
-4/11/2019 13.16.22:GMT:UncaughtExceptionHandler:uncaughtException:SEVERE: An unexpected error has occurred com.apple.laf.ScreenMenu.addMenuListeners(Lcom/apple/laf/ScreenMenu;J)J on thread main, please report to support @jthink.net
-java.lang.UnsatisfiedLinkError: com.apple.laf.ScreenMenu.addMenuListeners(Lcom/apple/laf/ScreenMenu;J)J
-    at com.apple.laf.ScreenMenu.addMenuListeners(Native Method)
-    at com.apple.laf.ScreenMenu.addNotify(ScreenMenu.java:254)
-    at java.awt.Menu.addNotify(Menu.java:183)
-    at com.apple.laf.ScreenMenu.addNotify(ScreenMenu.java:234)
-    at com.apple.laf.ScreenMenuBar.add(ScreenMenuBar.java:285)
-    at com.apple.laf.ScreenMenuBar.addSubmenu(ScreenMenuBar.java:223)
-    at com.apple.laf.ScreenMenuBar.addNotify(ScreenMenuBar.java:66)
-    at java.awt.Frame.addNotify(Frame.java:483)
-    at java.awt.Window.pack(Window.java:807)
-    at com.jthink.songkong.ui.MainWindow.setupScreen(MainWindow.java:322)
-    at com.jthink.songkong.cmdline.SongKong.guiStart(SongKong.java:1494)
-    at com.jthink.songkong.cmdline.SongKong.finish(SongKong.java:1602)
-    at com.jthink.songkong.cmdline.SongKong.main(SongKong.java:1627)
-
-Any ideas?
-Although the lib is open source it does not seem to be hosted on GitHub, even though the author has other libs on GitHub. 
-","1. EDIT:
-I checked the website and it seems that the author has now extended support to macOS Catalina. I suggest you try downloading and using the LaF now.
-
-I don't think the VAqua look and feel is supported on Catalina, but I may be wrong about the source of the error. Below are screenshots from the VAqua website.
-
-
-",Aqua
-"I would like to connect from Power BI to BlackDuck using Rest API.
-Initially, I would like to do everything using Power Query.
-I just started exploring the possibilities of how to retrieve data without using a PostgreSQL database of the BlackDuck.
-Maybe you had an experience of how to authenticate BlackDuck?
-Maybe you have such experience and can share it?
-","1. I tested using API, but the easiest way was connected to PostgreSQL ( synopsis support  helped with it)
-About API
-
-need to get bearer token
-
-create function (ex name GetToken):
-    () =>
-    let
-        // XXXXXXXXXXX is your token from Black Duck
-        prToken = ""XXXXXXXXXXX"",
-
-        url = ""https://XXX.blackduck.com/api/tokens/authenticate"",
-        headers = [
-            #""Accept"" = ""application/vnd.blackducksoftware.user-4+json"",
-            #""Authorization"" = ""token "" & prToken
-        ],
-
-        // supplying Content makes Web.Contents issue a POST request
-        PostData = Json.FromValue([Authorization = prToken]),
-
-        response = Web.Contents(url,
-            [ Headers = headers,
-              Content = PostData ]),
-
-        Data = Json.Document(response),
-
-        access_token = ""Bearer "" & Data[bearerToken]
-    in
-        access_token
-
-It helps you get the Bearer token.
-
-As an example, to get information about projects:
-let
- Source = Json.Document(Web.Contents(""https://xxxxx.app.blackduck.com/api/projects"", 
- [Headers=[Accept=""application/vnd.blackducksoftware.project-detail-4+json"",
- Authorization = GetToken()]]))
-
-in
-Source[totalCount]
-
-
-
-2. I have the same question; from what I could understand from the Synopsys community, you cannot authenticate like this and you need to use their Postgres DB.
-",Black Duck
-"My blackduck synopsis scan result shown esapi-java-legacy2.5.3.1 come with high risk license issue to BSD 3-clause ""New"" or ""Revised"" License and Creative Commons Attribution Share Alike 3.0.
-I had tried to put BSD 3-clause license at the Java Ear file root directory with named LICENSE.txt, however it is still shown up with same error. I am also tried to put LICENSE-esapi-java-legacy2.5.3.1.txt, LICENSE-esapi-java-legacy2.5.3.1 in /licenses but still not working. What is the correct way to place these licenses?
-","1. I'm not sure what exactly BlackDuck SCA is specifically looking for. I would try /LICENSE.md or maybe placing it under /META-INF under various file names and if none of those work, I'd recommend contacting Synopsis tech support.
-But given that you didn't show any detailed error message, the best I can do is to make an uneducated guess.
-",Black Duck
-"We have integrated Azure Pipeline with Black Duck Synopsys task and it's limits up to 10 versions. For every pipeline runs versions will be created and pipeline runs successfully up to 10 runs only. For 11th run pipeline will be failed because of version limitation in Black Duck. Here we can delete the older versions manually in Black Duck but instead of doing manually in black duck, is it possible to do automatically through ADO pipeline by adding any task ?
-In short, can we use any powershell or other tasks in the pipeline which automatically deletes the versions when count reach to 10 ?
-Thanks..
-","1. It is possible using the Blackduck API to remove a project programmatically. This is a REST API so could be called from Azure DevOps Pipelines.
-You first need the UI to generate an API token. This can then be used to generate a bearer token to be used with the API to remove the older project versions.
-Once you have logged into BlackDuck the full API guide is under the help menu.
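-
-To sketch what that could look like, below is a minimal Node/TypeScript example (Node 18+ for the built-in fetch) that could be invoked from a script step in the pipeline. The POST to /api/tokens/authenticate is the documented way to exchange an API token for a bearer token; the version-listing endpoint, the response field names (items, createdAt, _meta.href) and deleting a version via its _meta.href are assumptions on my part, so verify them against the REST API guide mentioned above before relying on this sketch:
-// cleanup-versions.ts -- hypothetical cleanup script; endpoint paths and field
-// names below are assumptions, check them against your Black Duck API guide.
-const BASE = process.env.BLACKDUCK_URL ?? 'https://your-hub.blackduck.com';
-const API_TOKEN = process.env.BLACKDUCK_API_TOKEN ?? '';
-const PROJECT_ID = process.env.BLACKDUCK_PROJECT_ID ?? ''; // hypothetical project UUID
-const KEEP = 9; // keep the newest 9 versions so the next run can create the 10th
-
-async function getBearerToken(): Promise<string> {
-  // Exchange the API token (generated in the UI) for a short-lived bearer token.
-  const res = await fetch(`${BASE}/api/tokens/authenticate`, {
-    method: 'POST',
-    headers: { Authorization: `token ${API_TOKEN}` },
-  });
-  if (!res.ok) throw new Error(`Authentication failed: ${res.status}`);
-  const body = (await res.json()) as { bearerToken: string };
-  return body.bearerToken;
-}
-
-interface ProjectVersion {
-  versionName: string;
-  createdAt: string;
-  _meta: { href: string };
-}
-
-async function deleteOldVersions(): Promise<void> {
-  const auth = { Authorization: `Bearer ${await getBearerToken()}` };
-
-  // Assumed endpoint: list the versions of the project.
-  const listRes = await fetch(`${BASE}/api/projects/${PROJECT_ID}/versions?limit=100`, { headers: auth });
-  if (!listRes.ok) throw new Error(`Listing versions failed: ${listRes.status}`);
-  const { items } = (await listRes.json()) as { items: ProjectVersion[] };
-
-  // Sort newest first and delete everything beyond the KEEP threshold.
-  items.sort((a, b) => b.createdAt.localeCompare(a.createdAt));
-  for (const version of items.slice(KEEP)) {
-    const delRes = await fetch(version._meta.href, { method: 'DELETE', headers: auth });
-    console.log(`${delRes.ok ? 'Deleted' : 'Failed to delete'} version ${version.versionName}`);
-  }
-}
-
-deleteOldVersions().catch((err) => {
-  console.error(err);
-  process.exit(1);
-});
-
-A Bash or PowerShell task calling the same endpoints would work just as well; the important parts are the token exchange, listing the project's versions, and issuing a DELETE for each surplus version.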
-
-2. I had the same issue and I spoke to sales and support about it.
-There are 3 solutions:
-
-raise the limit (paid)
-it seems that Black Duck could clean old versions automatically
-using the API, list the versions and remove the old ones
-
-",Black Duck
-"I'm signing a pdf document with PdfPadesSigner's SignWithBaselineLTAProfile method in IText 8.0.3. I use my certificate in the USB token as ExternalSignature (pkcs11library). Everything is very simple. The only problem is that there is no IssuerSerial match in the signature structure. I can't solve this problem. Thank you from now.
-
-var padesSigner = new PdfPadesSigner(new PdfReader(content.Document.Stream), signedStream);
-
-IList<IX509Certificate> trusted = new List<IX509Certificate>();
-
-        foreach (var crnCert in parameters.TrustedCertificates)
-            trusted.Add(new X509CertificateBC(crnCert));
-
-padesSigner.SetTrustedCertificates(trusted);
-
-    TSAClientBouncyCastle timeStampInfo = null;
-
-        if (parameters.TimeStampSettings != null)
-            timeStampInfo = new TSAClientBouncyCastle(parameters.TimeStampSettings.HostUrl, parameters.TimeStampSettings.LoginId, parameters.TimeStampSettings.Password);
-
-  var certificates = new IX509Certificate[]
-        {
-            new X509CertificateBC(parameters.SigningCertificate)
-        };
-
-padesSigner.SignWithBaselineLTAProfile(properties, certificates, this, timeStampInfo);
-
-
-Must be:
-<RelatedCertificate Certificate=""CERTIFICATE_FATİH-POLAT_20231017-0920"">
-                     <Origin>SIGNED_DATA</Origin>
-                     <Origin>DSS_DICTIONARY</Origin>
-                     <Origin>VRI_DICTIONARY</Origin>
-                     <CertificateRef>
-                         <Origin>SIGNING_CERTIFICATE</Origin>
-                         **<IssuerSerial match=""true"">MIIBAzCB86SB8DCB7TELMAKGI....</IssuerSerial>**
-                         <DigestAlgoAndValue match=""true"">
-                             <DigestMethod>SHA256</DigestMethod>
-                             <DigestValue>Fuxst8SamhghugBDp/6FD+kzENHiYzqEyKOXaFZL2jc=</DigestValue>
-                         </DigestAlgoAndValue>
-                     </CertificateRef>
-                 </RelatedCertificate>
-
-What I have:
-<RelatedCertificate Certificate=""CERTIFICATE_FATİH-POLAT_20231017-0920"">
-                    <Origin>SIGNED_DATA</Origin>
-                    <Origin>DSS_DICTIONARY</Origin>
-                    <Origin>VRI_DICTIONARY</Origin>
-                    <CertificateRef>
-                        <Origin>SIGNING_CERTIFICATE</Origin>
-                        <DigestAlgoAndValue match=""true"">
-                            <DigestMethod>SHA256</DigestMethod>
-                            <DigestValue>Fuxst8SamhghugBDp/6FD+kzENHiYzqEyKOXaFZL2jc=</DigestValue>
-                        </DigestAlgoAndValue>
-                    </CertificateRef>
-                </RelatedCertificate>
-
-","1. The XML excerpts you show appear to be diagnostic data generated by eSig DSS. Thus, I'll explain here what the missing entry means and why it is ok that it is missing.
-What does this missing element indicate?
-That CertificateRef element inside the RelatedCertificate here indicates that the certificate is referenced as signer certificate from a signingCertificateV2 signed attribute of the signature.
-The value of that attribute is specified in RFC 5035 as
-SigningCertificateV2 ::=  SEQUENCE {
-    certs        SEQUENCE OF ESSCertIDv2,
-    policies     SEQUENCE OF PolicyInformation OPTIONAL
-}
-
-ESSCertIDv2 ::=  SEQUENCE {
-    hashAlgorithm           AlgorithmIdentifier
-           DEFAULT {algorithm id-sha256},
-    certHash                 Hash,
-    issuerSerial             IssuerSerial OPTIONAL
-}
-
-Hash ::= OCTET STRING
-
-IssuerSerial ::= SEQUENCE {
-    issuer                   GeneralNames,
-    serialNumber             CertificateSerialNumber
-}
-
-The IssuerSerial element inside the CertificateRef element you are missing in your iText created signatures refers to the issuerSerial member of the ESSCertIDv2 in the attribute value.
-As you can already see in the definition, this issuerSerial member is marked OPTIONAL.
-Thus, the observation that the IssuerSerial element is missing shows that iText makes use of that OPTIONALity and does not include a issuerSerial member in the signingCertificateV2 attribute.
-Is it ok that iText does not include a issuerSerial here?
-First of all, as remarked above, according to RFC 5035 the element is optional, so any signature processor (e.g. a validator) must expect signatures to not include a issuerSerial member in their signingCertificateV2 attribute.
-But do the specifications in question go into more detail on this?
-RFC 5035 says
-
-The first certificate identified in the
-sequence of certificate identifiers MUST be the certificate used
-to verify the signature.  The encoding of the ESSCertIDv2 for this
-certificate SHOULD include the issuerSerial field.
-
-I.e. it recommends inclusion of the field which at first glance would make iText's decision not to do so look inappropriate.
-But RFC 5035 here is used in the context of a CAdES signature container in a PAdES signature, so the CAdES and PAdES specification may modify this recommendation.
-The CAdES specification (ETSI EN 319 122-1) says
-
-The information in the IssuerSerial element is only a hint that can help to identify the certificate
-whose digest matches the value present in the reference. But the binding information is the digest of the
-certificate.
-
-(section 5.2.2.3 ""ESS signing-certificate-v2 attribute"")
-Furthermore, in the requirements for baseline profiles, that specification even says
-
-The issuerSerial field should not be included
-
-(section 6.3 ""Requirements on components and services"", ""Additional requirements"", item g)
-The PAdES specification (ETSI EN 319 142-1) here merely says
-
-Generators shall use either the signing certificate or the signing-certificate-v2 attribute, depending on the hash
-function, in accordance with ETSI EN 319 122-1
-
-Thus, taken all together the issuerSerial member is optional and its use, if anything, is not recommended.
-What does this mean in your case?
-The signatures you create with PdfPadesSigner's SignWithBaselineLTAProfile method in IText 8.0.3 are good, at least in respect to the aspect we discussed here, and the excerpt you called a ""Must be"" actually only is a ""Can be"" or even a ""Should not be"".
-Also your comment ""I found out that this is used to verify signatures"" indicates that the verifier who told you so does not correctly handle the attribute in question, in particular not in the context of PAdES signatures.
-An aside: an error in the time stamps in your example
-The time stamps (both the signature time stamp and the document time stamp) incorrectly encode their signed attributes: The signingCertificateV2 attribute in them is encoded like this:
-SEQUENCE (2 elem)
-  OBJECT IDENTIFIER 1.2.840.113549.1.9.16.2.47 signingCertificateV2 (S/MIME Authenticated Attributes)
-  SET (1 elem)
-    SEQUENCE (1 elem)
-      SEQUENCE (1 elem)
-        SEQUENCE (2 elem)
-          SEQUENCE (2 elem)
-            OBJECT IDENTIFIER 2.16.840.1.101.3.4.2.1 sha-256 (NIST Algorithm)
-            NULL
-          OCTET STRING (32 byte) 2E8EFCDD4C13BCB9F18E2AAD1A5391EEF0415D041171794C51EBD9BB8C5E23EE
-
-The signed attributes must be DER encoded. DER encoding in particular means that DEFAULT values are not included.
-The hash algorithm SHA-256 indicated there in your time stamps is the default value (see my quote from RFC 5035 above), so the encoding above is erroneous.
-(Not many validators actually check the DER encoding of the signed attributes in this depth, but some do. In the recent ETSI plug test the software of at least one participant did check this. Thus, with those time stamps your signatures may often be accepted but sometimes suddenly not.)
-By the way, these signingCertificateV2 attributes in your time stamps also don't include the optional issuerSerial member... ;)
-",Bouncy Castle
-"I am trying to read a PKCS#8 private key which looks like following:
-key.k8 --> (Sample key. Passphrase - 123456):
------BEGIN ENCRYPTED PRIVATE KEY-----
-MIIFLTBXBgkqhkiG9w0BBQ0wSjApBgkqhkiG9w0BBQwwHAQILbKY9hPxYSoCAggA
-MAwGCCqGSIb3DQIJBQAwHQYJYIZIAWUDBAEqBBCvaGt2Hmm2NpHpxbLvHKyOBIIE
-0IQ7dVrAGXLZl0exYIvyxLAu6zO00jL6b3sb/agTcCFOz8JU6fBanxY0d5aYO4Dn
-mynQG7BoljU470s0zIwW/wk0MmdUFl4nXWBX/4qnG0sZqZ9KZ7I8R/WrBkmpX8C/
-4pjdVhu8Ht8dfOYbkbjMBTohDJz8vJ0QwDIXi9yFjjef+QjwrFOl6kAeDJFVMGqc
-s7K/wOnhsL1XxfW9uTulPiZh5YTZKcatMkeGDR7c+cg5I+Mutim92diWuCekhNoa
-uvhUy1M3cbs7Azp1Mhz+V0CDKklI95EvN4u23WhiJPCjAofC/e45/heOP3Dwm7WZ
-zHEY1C/X8PsTl6MEEIF3ZJP+4Vr0corAs1L2FqE6oOng8dFFYmF5eRyBx6bxFd05
-iYbfOH24/b3qtFKPC689kGEd0gWp1dwES35SNNK+cJqVRTjgI0oKhOai3rhbGnmp
-tx4+JqploQgTorj4w9asbtZ/qZA2mYSSR/Q64SHv7LfoUCI9bgx73MqRQBgvI5yS
-b4BoFBnuEgOduZLaGKGjKVW3m5/q8oiDAaspcSLCJMIrdOTYWJB+7mfxX4Xy0vEe
-5m2jXpSLQmrfjgpSTpHDKi/3b6OzKOcHjSFBf8IoiHuLc5DVvLECzDUxxaMrTZ71
-0YXvEPwl2R9BzEANwwR9ghJvFg1Be/d5W/WA1Efe6cNQNBlmErxD6l+4KDUgGjTr
-Aaksp9SZAv8uQAsg7C57NFHpTA5Hznr5JctL+WlO+Gk0cAV6i4Py3kA6EcfatsnS
-PqP2KbxT+rb2ATMUZqgWc20QvDt6j0CTA1BuVD1PNhnAUFvb2ocyEEXOra22DPPS
-UPu6jirSIyFcjqFjJ9A1FD9L4/UuX2UkDSLqblFlYB1+G55KZp+EKz8SZoN5qXy1
-LyMtnacEP5OtRDrOjopzVNiuV1Uv63M9QVi1hZlVLJEomgjWuvuyEuIwDaY2uryW
-vx+jJEZyySFkb1JwAbrm+p6sCTFnbQ/URKC2cit/FJyKqNim6VQvGL8Sez34qV3z
-D13QJgTZfsy+BaZoaQ6cJTXtJ8cN0IcQciOiDNBKMW66zO6ujS8G+KNviNQypDm6
-h4sOgjMqLaZ4ezPEdNj/gaxV7Y15nVRu0re8dVkaa5t9ft/sh6A+yeTD5tS5hHkf
-NI7uJPTaTXVoz7xq2PAJUTWujMLMZKtmNOzNqYvxWRy3tCOFobBQkMxqEBEwHd+x
-SA+gFcJKJ+aNfCGZJ5fFr8rNlhtOF6uMwOAlfiUlP/pCUDUCKPjZVj4K95yNc8Io
-jSZSPb5tGPe0HqXgc6IAfQarlUZt90oVtzL0OfOfTxe1bEzS2ccNadbx/6vjLBc4
-q5UuUBppl3rXpbuZ7J1Rp3n2byF4APxFdT2LHKq+MYMfWUToau/TCMT4lFIM9tM8
-7TuuyUT2PKzf/xlsl4iScw96z9xxGPQrXn7IA2W5iL+0eCLztJdjNRX1FisdfIBL
-PraOVlmF8jHKbFdRZ8Yi8pApbQjvHi24g7dX7u/cq1FH/VE+nJ0O8YVCYVDw13CW
-h0p7yD7BuB0R+0WnR0yvkp30vK4/rtCB+Ob8bH/+HvAZrAU5X8jq/wsQbLkrLHZV
-6A6GGfX8+hy5AoaXsH1BHnMyXkaF6Mv29z8JcslDJxX/
------END ENCRYPTED PRIVATE KEY-----
-
-The following code is being used to parse the private key:
- InputStream privateKeyInputStream = getPrivateKeyInputStream(); // reads the key file from classpath and share as DataStream
- logger.info(""InputStreamExists --> {} "", privateKeyInputStream.available());
- PEMParser pemParser = new PEMParser(new InputStreamReader(privateKeyInputStream));
- Object pemObject = pemParser.readObject();
- if (pemObject instanceof PKCS8EncryptedPrivateKeyInfo) {
-     // Handle the case where the private key is encrypted.
-     PKCS8EncryptedPrivateKeyInfo encryptedPrivateKeyInfo = (PKCS8EncryptedPrivateKeyInfo) pemObject;
-     InputDecryptorProvider pkcs8Prov =
-            new JceOpenSSLPKCS8DecryptorProviderBuilder().build(passphrase.toCharArray());
-     privateKeyInfo = encryptedPrivateKeyInfo.decryptPrivateKeyInfo(pkcs8Prov); // fails here
-}
-
-
-InputStream resourceAsStream = null;
-    if (""local"".equals(privateKeyMode)) {
-      resourceAsStream = this.getClass().getResourceAsStream(privateKeyPath);
-    } else {
-      File keyFile = new File(privateKeyPath);
-      logger.info(
-          ""Key file found in {} mode. FileName : {}, Exists : {}"",
-          privateKeyMode,
-          keyFile.getName(),
-          keyFile.exists());
-      try {
-        resourceAsStream = new DataInputStream(new FileInputStream(keyFile));
-      } catch (FileNotFoundException e) {
-        e.printStackTrace();
-      }
-
-When I run this code through IntelliJ on Windows, it works fine, but when I run it in a Docker container I get the following exception:
-org.bouncycastle.pkcs.PKCSException: unable to read encrypted data: failed to construct sequence from byte[]: Extra data detected in stream
-snowflake-report-sync    |      at org.bouncycastle.pkcs.PKCS8EncryptedPrivateKeyInfo.decryptPrivateKeyInfo(Unknown Source) ~[bcpkix-jdk15on-1.64.jar!/:1.64.00.0]
-snowflake-report-sync    |      at com.optum.snowflakereportsync.configuration.SnowFlakeConfig.getPrivateKey(SnowFlakeConfig.java:103) ~[classes!/:na]
-snowflake-report-sync    |      at com.optum.snowflakereportsync.configuration.SnowFlakeConfig.getConnectionProperties(SnowFlakeConfig.java:67) ~[classes!/:na]
-
-The following is the Dockerfile used:
-FROM adoptopenjdk/openjdk11-openj9:latest
-COPY build/libs/snowflake-report-sync-*.jar snowflake-report-sync.jar
-RUN mkdir /encryption-keys
-COPY encryption-keys/ /encryption-keys/ #keys are picked from docker filesystem when running in container
-EXPOSE 8080
-CMD java -Dcom.sun.management.jmxremote -noverify ${JAVA_OPTS} -jar snowflake-report-sync.jar
-
-Options tried:
-
-Ensured that the key file is being read while running in the container. The logger ""InputStreamExists --> {}"" gives the number of bytes.
-Ran dos2unix on key.k8 just to make sure there are no Windows ""^M"" characters which could be causing the issue, as the container is a Linux one: FROM adoptopenjdk/openjdk11-openj9:latest
-
-Not sure what I am doing wrong but any help or pointers would be appreciated.
-","1. Like @Bragolgirith suspected, BouncyCastle seems to have problems with OpenJ9. I guess it is not a Docker issue, because I can reproduce it on GitHub Actions, too. It is also not limited to BouncyCastle 1.64 or 1.70, it happens in both versions. It also happens on OpenJ9 JDK 11, 14, 17 on Windows, MacOS and Linux, but for the same matrix of Java and OS versions it works on Adopt-Hotspot and Zulu.
-Here is an example Maven project and a failed matrix build. So if you select another JVM type, you should be fine. I know that @Bragolgirith already suggested that, but I wanted to make the problem reproducible for everyone and also provide an MCVE, in case someone wants to open a BC or OpenJ9 issue.
-P.S.: It is also not a character set issue with the InputStreamReader. This build fails exactly the same as before after I changed the constructor call.
-
-Update: I have created BC-Java issue #1099. Let's see what the maintainers can say about this.
-
-Update 2: The solution to your problem is to explicitly set the security provider to BC for your input decryptor provider. Thanks to David Hook for his helpful comment in #1099.
-BouncyCastleProvider securityProvider = new BouncyCastleProvider();
-Security.addProvider(securityProvider);
-
-// (...)
-
-InputDecryptorProvider pkcs8Prov = new JceOpenSSLPKCS8DecryptorProviderBuilder()
-  // Explicitly setting security provider helps to avoid ambiguities
-  // which otherwise can cause problems, e.g. on OpenJ9 JVMs
-  .setProvider(securityProvider)
-  .build(passphrase.toCharArray());
-
-See this commit and the corresponding build, now passing on all platforms, Java versions and JVM types (including OpenJ9).
-Because @Bragolgirith mentioned it in his answer: If you want to avoid the explicit new JceOpenSSLPKCS8DecryptorProviderBuilder().setProvider(securityProvider), the call Security.insertProviderAt(securityProvider, 1) instead of simply Security.addProvider(securityProvider) would in this case also solve the problem. But this holds true only as long as no other part of your code or any third-party library sets another provider to position 1 afterwards, as explained in the Javadoc. So maybe it is not a good idea to rely on that.
-
-2. Edit:
-On second thought, when creating the JceOpenSSLPKCS8DecryptorProviderBuilder, you're not explicitly specifying the provider:
-new JceOpenSSLPKCS8DecryptorProviderBuilder()
-    .setProvider(BouncyCastleProvider.PROVIDER_NAME) // add this line
-    .build(passphrase.toCharArray());
-
-It seems OpenJ9 uses a different provider/algo selection mechanism and selects the SunJCE's AESCipher class as CipherSpi by default, while Hotspot selects BouncyCastleProvider's AES class.
-Explicitly specifying the provider should work in all cases.
-Alternatively, when adding the BouncyCastleProvider you could insert it at the first preferred position (i.e. Security.insertProviderAt(new BouncyCastleProvider(), 1) instead of Security.addProvider(new BouncyCastleProvider())) so that it gets selected.
-(It's still unclear to me why the provider selection mechanism differs between the different JVMs.)
-
-Original post:
-I've managed to reproduce the issue and at this point I'd say it's an incompatibility issue with the OpenJ9 JVM.
-Starting from a Hotspot base image instead, e.g.
-FROM adoptopenjdk:11-jre-hotspot
-
-makes the code work.
-(Not yet entirely sure whether the fault lies with the Docker image itself, the OpenJ9 JVM or BouncyCastle)
-",Bouncy Castle
-"I have been trying to find a way to BouncyCastleProvider in Java 11 from openjdk. Since there is no ext folder, I can't figure out where to put the jar file. I am using gradle build on MacOS Catalina. It will really help if someone can help me out on this. 
-I am getting the following error while running gradle build. I have dependency mentioned in gradle as well.
-java.lang.ClassNotFoundException: org.bouncycastle.jce.provider.BouncyCastleProvider
-
-","1. You can use the way you import all your other gradle dependencies.
-For example with the dependency:
-compile group: 'org.bouncycastle', name: 'bcprov-jdk15on', version: '1.64'
-Taken from:
-https://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk15on/1.64
-(Where you can find other versions if you need them)
-If you're unsure how to import Gradle dependencies I'd suggest searching Stack Overflow, as I'm sure that has been asked many times before.
-For example:
-How to import external dependencies in gradle?
-EDIT: You added another error. It could be that the path of the jar needs to be added to the classpath.
-If you want to add the JAR to your IDE, which I guess is the Gradle build tool if I understood you right, they explain how to do it here:
-How to add local .jar file dependency to build.gradle file?
-",Bouncy Castle
-"We are using Cerbos as an authorization server and one of the features we want to use is the queryPlanner.
-My ultimate goal is to be able to create a TypeORM ""selectQueryBuilder"" from the AST response that I'm getting.
-The following AST is an example of a queryPlanner response that we might get from Cerbos:
-{
-  ""operator"": ""and"",
-  ""operands"": [
-    {
-      ""operator"": ""gt"",
-      ""operands"": [
-        {
-          ""name"": ""request.resource.attr.foo""
-        },
-        {
-          ""value"": 4
-        }
-      ]
-    },
-    {
-      ""operator"": ""or"",
-      ""operands"": [
-        {
-          ""operator"": ""eq"",
-          ""operands"": [
-            {
-              ""name"": ""request.resource.attr.bar""
-            },
-            {
-              ""value"": 5
-            }
-          ]
-        },
-        {
-          ""operator"": ""and"",
-          ""operands"": [
-            {
-              ""operator"": ""eq"",
-              ""operands"": [
-                {
-                  ""name"": ""request.resource.attr.fizz""
-                },
-                {
-                  ""value"": 6
-                }
-              ]
-            },
-            {
-              ""operator"": ""in"",
-              ""operands"": [
-                {
-                  ""value"": ""ZZZ""
-                },
-                {
-                  ""name"": ""request.resource.attr.buzz""
-                }
-              ]
-            }
-          ]
-        }
-      ]
-    }
-  ]
-}
-
-I thought about utilizing the ucast library to translate this response into a ""CompoundCondition"" and then use the @ucast/sql package to create my selectQueryBuilder.
-I believe that the condition should look something like this in my case:
-import { CompoundCondition, FieldCondition } from '@ucast/core'
-
-const condition = new CompoundCondition('and', [
-  new FieldCondition('gt', 'R.attr.foo', 4),
-  new CompoundCondition('or', [
-    new FieldCondition('eq', 'R.attr.bar', 5),
-    new CompoundCondition('and', [
-      new FieldCondition('eq', 'R.attr.fizz', 6),
-      new FieldCondition('in', 'R.attr.buzz', 'ZZZ')
-    ])
-  ])
-])
-
-Then it should be very easy to create the queryBuilder:
-  const conn = await createConnection({
-    type: 'mysql',
-    database: ':memory:',
-    entities: [User]
-  });
-
-
-  const qb = interpret(condition, conn.createQueryBuilder(User, 'u'));
-}
-
-I am just having trouble creating the needed function (AST to compoundCondition)...
-","1. Something like this might help.
-import { CompoundCondition, Condition, FieldCondition } from '@ucast/core';
-
-export abstract class Expression {
-  protected constructor() {
-  }
-
-  abstract renderCondition(): Condition;
-
-  static create(json: any): Expression {
-    const {
-      operator,
-      operands,
-      name,
-      value
-    } = json;
-
-    switch (operator) {
-      case 'and':
-        return AndExpression.create(operands);
-
-      case 'or':
-        return OrExpression.create(operands);
-
-      case 'gt':
-        return GreaterThanExpression.create(operands);
-
-      case 'gte':
-        return GreaterThanOrEqualToExpression.create(operands);
-
-      case 'lt':
-        return LesserThanExpression.create(operands);
-
-      case 'lte':
-        return LesserThanOrEqualToExpression.create(operands);
-
-      case 'eq':
-        return EqualToExpression.create(operands);
-
-      case 'ne':
-        return NotEqualToExpression.create(operands);
-
-      case 'in':
-        return InExpression.create(operands);
-
-      default: {
-        if (name && !value) {
-          return NameExpression.create(name);
-        }
-
-        if (!name && value) {
-          return ValueExpression.create(value);
-        }
-
-        throw new Error(`unsupported expression operator ${operator}`);
-      }
-    }
-  }
-}
-
-export abstract class OpExpression extends Expression {
-  protected constructor() {
-    super();
-  }
-
-  abstract renderOperator(): string;
-}
-
-export abstract class BinaryExpression extends OpExpression {
-  protected constructor(
-    protected leftOperand: Expression,
-    protected rightOperand: Expression
-  ) {
-    super();
-  }
-
-  override renderCondition(): Condition {
-    const isLeftOperandName: boolean = this.leftOperand instanceof NameExpression;
-    const isLeftOperandValue: boolean = this.leftOperand instanceof ValueExpression;
-
-    const isRightOperandName: boolean = this.rightOperand instanceof NameExpression;
-    const isRightOperandValue: boolean = this.rightOperand instanceof ValueExpression;
-
-    if (isLeftOperandName) {
-      const leftExpression: NameExpression = this.leftOperand as NameExpression;
-
-      if (isRightOperandName) {
-        const rightExpression: NameExpression = this.rightOperand as NameExpression;
-
-        return new FieldCondition(this.renderOperator(), leftExpression.name, rightExpression.name);
-      } else if (isRightOperandValue) {
-        const rightExpression: ValueExpression = this.rightOperand as ValueExpression;
-
-        return new FieldCondition(this.renderOperator(), leftExpression.name, rightExpression.value);
-      }
-    } else if (isLeftOperandValue) {
-      const leftExpression: ValueExpression = this.leftOperand as ValueExpression;
-
-      if (isRightOperandName) {
-        const rightExpression: NameExpression = this.rightOperand as NameExpression;
-
-        return new FieldCondition(this.renderOperator(), rightExpression.name, leftExpression.value);
-      } else if (isRightOperandValue) {
-        const rightExpression: ValueExpression = this.rightOperand as ValueExpression;
-
-        return new FieldCondition(this.renderOperator(), rightExpression.value, leftExpression.value);
-      }
-    }
-
-    return new CompoundCondition(this.renderOperator(), [
-      this.leftOperand.renderCondition(),
-      this.rightOperand.renderCondition()
-    ]);
-  }
-}
-
-export abstract class UnaryExpression extends OpExpression {
-  protected constructor(
-    protected operand: Expression
-  ) {
-    super();
-  }
-
-  override renderCondition(): Condition {
-    return new CompoundCondition(this.renderOperator(), [
-      this.operand.renderCondition()
-    ]);
-  }
-}
-
-export abstract class NaryExpression extends OpExpression {
-  protected constructor(
-    protected operands: Expression[]
-  ) {
-    super();
-  }
-
-  override renderCondition(): Condition {
-    return new CompoundCondition(this.renderOperator(), this.operands.map((operand: Expression) => operand.renderCondition()));
-  }
-}
-
-export class AndExpression extends BinaryExpression {
-  protected constructor(
-    leftOperand: Expression,
-    rightOperand: Expression
-  ) {
-    super(leftOperand, rightOperand);
-  }
-
-  override renderOperator(): string {
-    return 'and';
-  }
-
-  static create(json: any): AndExpression {
-    const [leftOperand, rightOperand, ...otherOperands] = json;
-
-    if (otherOperands.length !== 0) {
-      throw new Error('too many operands for a binary expression');
-    }
-
-    return new AndExpression(
-      Expression.create(leftOperand),
-      Expression.create(rightOperand)
-    );
-  }
-}
-
-export class OrExpression extends BinaryExpression {
-  protected constructor(
-    leftOperand: Expression,
-    rightOperand: Expression
-  ) {
-    super(leftOperand, rightOperand);
-  }
-
-  override renderOperator(): string {
-    return 'or';
-  }
-
-  static create(json: any): OrExpression {
-    const [leftOperand, rightOperand, ...otherOperands] = json;
-
-    if (otherOperands.length !== 0) {
-      throw new Error('too many operands for a binary expression');
-    }
-
-    return new OrExpression(
-      Expression.create(leftOperand),
-      Expression.create(rightOperand)
-    );
-  }
-}
-
-export class GreaterThanExpression extends BinaryExpression {
-  protected constructor(
-    leftOperand: Expression,
-    rightOperand: Expression
-  ) {
-    super(leftOperand, rightOperand);
-  }
-
-  override renderOperator(): string {
-    return 'gt';
-  }
-
-  static create(json: any): GreaterThanExpression {
-    const [leftOperand, rightOperand, ...otherOperands] = json;
-
-    if (otherOperands.length !== 0) {
-      throw new Error('too many operands for a binary expression');
-    }
-
-    return new GreaterThanExpression(
-      Expression.create(leftOperand),
-      Expression.create(rightOperand)
-    );
-  }
-}
-
-export class GreaterThanOrEqualToExpression extends BinaryExpression {
-  protected constructor(
-    leftOperand: Expression,
-    rightOperand: Expression
-  ) {
-    super(leftOperand, rightOperand);
-  }
-
-  override renderOperator(): string {
-    return 'gte';
-  }
-
-  static create(json: any): GreaterThanOrEqualToExpression {
-    const [leftOperand, rightOperand, ...otherOperands] = json;
-
-    if (otherOperands.length !== 0) {
-      throw new Error('too many operands for a binary expression');
-    }
-
-    return new GreaterThanOrEqualToExpression(
-      Expression.create(leftOperand),
-      Expression.create(rightOperand)
-    );
-  }
-}
-
-export class LesserThanExpression extends BinaryExpression {
-  protected constructor(
-    leftOperand: Expression,
-    rightOperand: Expression
-  ) {
-    super(leftOperand, rightOperand);
-  }
-
-  override renderOperator(): string {
-    return 'lt';
-  }
-
-  static create(json: any): LesserThanExpression {
-    const [leftOperand, rightOperand, ...otherOperands] = json;
-
-    if (otherOperands.length !== 0) {
-      throw new Error('too many operands for a binary expression');
-    }
-
-    return new LesserThanExpression(
-      Expression.create(leftOperand),
-      Expression.create(rightOperand)
-    );
-  }
-}
-
-export class LesserThanOrEqualToExpression extends BinaryExpression {
-  protected constructor(
-    leftOperand: Expression,
-    rightOperand: Expression
-  ) {
-    super(leftOperand, rightOperand);
-  }
-
-  override renderOperator(): string {
-    return 'lte';
-  }
-
-  static create(json: any): LesserThanOrEqualToExpression {
-    const [leftOperand, rightOperand, ...otherOperands] = json;
-
-    if (otherOperands.length !== 0) {
-      throw new Error('too many operands for a binary expression');
-    }
-
-    return new LesserThanOrEqualToExpression(
-      Expression.create(leftOperand),
-      Expression.create(rightOperand)
-    );
-  }
-}
-
-export class EqualToExpression extends BinaryExpression {
-  protected constructor(
-    leftOperand: Expression,
-    rightOperand: Expression
-  ) {
-    super(leftOperand, rightOperand);
-  }
-
-  override renderOperator(): string {
-    return 'eq';
-  }
-
-  static create(json: any): EqualToExpression {
-    const [leftOperand, rightOperand, ...otherOperands] = json;
-
-    if (otherOperands.length !== 0) {
-      throw new Error('too many operands for a binary expression');
-    }
-
-    return new EqualToExpression(
-      Expression.create(leftOperand),
-      Expression.create(rightOperand)
-    );
-  }
-}
-
-export class NotEqualToExpression extends BinaryExpression {
-  protected constructor(
-    leftOperand: Expression,
-    rightOperand: Expression
-  ) {
-    super(leftOperand, rightOperand);
-  }
-
-  override renderOperator(): string {
-    return 'ne';
-  }
-
-  static create(json: any): NotEqualToExpression {
-    const [leftOperand, rightOperand, ...otherOperands] = json;
-
-    if (otherOperands.length !== 0) {
-      throw new Error('too many operands for a binary expression');
-    }
-
-    return new NotEqualToExpression(
-      Expression.create(leftOperand),
-      Expression.create(rightOperand)
-    );
-  }
-}
-
-export class InExpression extends BinaryExpression {
-  protected constructor(
-    leftOperand: Expression,
-    rightOperand: Expression
-  ) {
-    super(leftOperand, rightOperand);
-  }
-
-  override renderOperator(): string {
-    return 'in';
-  }
-
-  static create(json: any): InExpression {
-    const [operand, ...operands] = json;
-
-    return new InExpression(
-      Expression.create(operand),
-      ValueExpression.create(
-        operands
-          .map((operand: any) => ValueExpression.create(operand))
-          .map((valueExpression: ValueExpression) => valueExpression.value)
-      )
-    );
-  }
-}
-
-export class NameExpression extends Expression {
-  protected constructor(public name: string) {
-    super();
-  }
-
-  override renderCondition(): Condition {
-    throw new Error('Method not implemented.');
-  }
-
-  static create(name: string): NameExpression {
-    return new NameExpression(name);
-  }
-}
-
-export class ValueExpression extends Expression {
-  protected constructor(public value: any) {
-    super();
-  }
-
-  override renderCondition(): Condition {
-    throw new Error('Method not implemented.');
-  }
-
-  static create(value: string): ValueExpression {
-    return new ValueExpression(value);
-  }
-}
-
-
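-To tie this back to the original goal of building a TypeORM selectQueryBuilder, a rough, untested usage sketch is below. Note that the planner emits attribute names such as request.resource.attr.foo rather than column names, so you will probably want a mapping step before handing the condition to @ucast/sql; the mapAttributesToColumns helper is hypothetical and only illustrates that step, and the ucast property names it relies on (operator, field, value) should be double-checked against the @ucast/core version you use:
-import { CompoundCondition, Condition, FieldCondition } from '@ucast/core';
-
-// Hypothetical helper: rewrite planner attribute names (request.resource.attr.foo)
-// into the column names your entity actually uses.
-export function mapAttributesToColumns(condition: Condition): Condition {
-  if (condition instanceof FieldCondition) {
-    const column = String(condition.field).replace('request.resource.attr.', '');
-    return new FieldCondition(condition.operator, column, condition.value);
-  }
-  if (condition instanceof CompoundCondition) {
-    return new CompoundCondition(condition.operator, condition.value.map(mapAttributesToColumns));
-  }
-  return condition;
-}
-
-// Usage with the question's setup (ast, interpret, conn and User as defined there):
-// const condition = mapAttributesToColumns(Expression.create(ast).renderCondition());
-// const qb = interpret(condition, conn.createQueryBuilder(User, 'u'));
-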
-",Cerbos
-"Currently running tests in cypress and am trying to mock a function that wraps cerbos calls. Getting the following error, and not really sure how to fix it. Anyone know how to modify the loader to handle this?
-Error: Webpack Compilation Error
-./node_modules/@cerbos/http/lib/index.js 177:56
-Module parse failed: Unexpected token (177:56)
-You may need an appropriate loader to handle this file type, currently no loaders are configured to process this file. See https://webpack.js.org/concepts#loaders
-| const headers = ({ headers: optionalHeaders, playgroundInstance }, adminCredentials) => {
-|     const headers = new Headers(typeof optionalHeaders === ""function"" ? optionalHeaders() : optionalHeaders);
->     headers.set(""User-Agent"", headers.get(""User-Agent"")?.concat("" "", defaultUserAgent) ?? defaultUserAgent);
-|     if (adminCredentials) {
-|         headers.set(""Authorization"", `Basic ${(0, abab_1.btoa)(`${adminCredentials.username}:${adminCredentials.password}`) ?? """"}`);
- @ ./src/shared/cerbos/cerbos.tsx 1:0-36 16:23-27
- @ ./cypress/e2e/reviewPage.cy.ts
-    at Watching.handle [as handler] (/Users/thomasheill/Library/Caches/Cypress/12.16.0/Cypress.app/Contents/Resources/app/packages/server/node_modules/@cypress/webpack-preprocessor/dist/index.js:212:23)
-    at /Users/thomasheill/Library/Caches/Cypress/12.16.0/Cypress.app/Contents/Resources/app/node_modules/webpack/lib/Watching.js:99:9
-    at AsyncSeriesHook.eval [as callAsync] (eval at create (/Users/thomasheill/Library/Caches/Cypress/12.16.0/Cypress.app/Contents/Resources/app/node_modules/tapable/lib/HookCodeFactory.js:33:10), <anonymous>:6:1)
-    at Watching._done (/Users/thomasheill/Library/Caches/Cypress/12.16.0/Cypress.app/Contents/Resources/app/node_modules/webpack/lib/Watching.js:98:28)
-    at /Users/thomasheill/Library/Caches/Cypress/12.16.0/Cypress.app/Contents/Resources/app/node_modules/webpack/lib/Watching.js:73:19
-    at Compiler.emitRecords (/Users/thomasheill/Library/Caches/Cypress/12.16.0/Cypress.app/Contents/Resources/app/node_modules/webpack/lib/Compiler.js:499:39)
-    at /Users/thomasheill/Library/Caches/Cypress/12.16.0/Cypress.app/Contents/Resources/app/node_modules/webpack/lib/Watching.js:54:20
-    at /Users/thomasheill/Library/Caches/Cypress/12.16.0/Cypress.app/Contents/Resources/app/node_modules/webpack/lib/Compiler.js:485:14
-    at AsyncSeriesHook.eval [as callAsync] (eval at create (/Users/thomasheill/Library/Caches/Cypress/12.16.0/Cypress.app/Contents/Resources/app/node_modules/tapable/lib/HookCodeFactory.js:33:10), <anonymous>:6:1)
-    at /Users/thomasheill/Library/Caches/Cypress/12.16.0/Cypress.app/Contents/Resources/app/node_modules/webpack/lib/Compiler.js:482:27
-    at /Users/thomasheill/Library/Caches/Cypress/12.16.0/Cypress.app/Contents/Resources/app/node_modules/neo-async/async.js:2818:7
-    at done (/Users/thomasheill/Library/Caches/Cypress/12.16.0/Cypress.app/Contents/Resources/app/node_modules/neo-async/async.js:3522:9)
-    at writeOut (/Users/thomasheill/Library/Caches/Cypress/12.16.0/Cypress.app/Contents/Resources/app/node_modules/webpack/lib/Compiler.js:452:16)
-    at /Users/thomasheill/Library/Caches/Cypress/12.16.0/Cypress.app/Contents/Resources/app/node_modules/webpack/lib/Compiler.js:476:7
-    at arrayIterator (/Users/thomasheill/Library/Caches/Cypress/12.16.0/Cypress.app/Contents/Resources/app/node_modules/neo-async/async.js:3467:9)
-    at timesSync (/Users/thomasheill/Library/Caches/Cypress/12.16.0/Cypress.app/Contents/Resources/app/node_modules/neo-async/async.js:2297:7)
-    at Object.eachLimit (/Users/thomasheill/Library/Caches/Cypress/12.16.0/Cypress.app/Contents/Resources/app/node_modules/neo-async/async.js:3463:5)
-    at emitFiles (/Users/thomasheill/Library/Caches/Cypress/12.16.0/Cypress.app/Contents/Resources/app/node_modules/webpack/lib/Compiler.js:358:13)
-    at /Users/thomasheill/Library/Caches/Cypress/12.16.0/Cypress.app/Contents/Resources/app/node_modules/mkdirp/index.js:49:26
-    at callback (/Users/thomasheill/Library/Caches/Cypress/12.16.0/Cypress.app/Contents/Resources/app/packages/server/node_modules/graceful-fs/polyfills.js:299:20)
-    at callback (/Users/thomasheill/Library/Caches/Cypress/12.16.0/Cypress.app/Contents/Resources/app/packages/server/node_modules/graceful-fs/polyfills.js:299:20)
-    at FSReqCallback.oncomplete (node:fs:200:5)
-
-","1. So we are using vite as our build tool, and to fix this i hade to update the preprocessor in cypress.
-had to install vite-plugin-ngmi-polyfill and then update the cypress.config.ts file like so
-import { defineConfig } from 'cypress'
-import vitePreprocessor from 'cypress-vite'
-
-export default defineConfig({
-  e2e: {
-    setupNodeEvents(on) {
-      on('file:preprocessor', vitePreprocessor())
-    },
-  },
-})
-
-
-",Cerbos
-"The domain for an ingress running on my AWS EKS cluster is certified via cert-manager and Let's Encrypt.
-The domain is considered secure in my desktop browser and on my newer Android phone.
-In the browser of my older Android phone, from nearly 10 years ago, the domain is considered insecure.
-My question is: I am testing an Android app that makes API requests to the above domain. On the newer Android phone the app works fine, but on the older Android phone the app is unable to log in, and I suspect that the API requests to this domain are being blocked somehow.
-Is that the case?
-","1. The reason why it stopped working on API <= 25 is explained here: https://letsencrypt.org/2023/07/10/cross-sign-expiration.html
-The solution is to add X1 and X2 root certificates directly to the app:
-
-Add android:networkSecurityConfig=""@xml/network_security_config"" to <application> in AndroidManifest.xml.
-Create a res/xml/network_security_config.xml:
-
-<?xml version=""1.0"" encoding=""utf-8""?>
-
-<network-security-config>
-    <base-config cleartextTrafficPermitted=""false"">
-        <trust-anchors>
-            <certificates src=""@raw/isrg_root_x2"" />
-            <certificates src=""@raw/isrg_root_x1"" />
-            <certificates src=""system"" />
-        </trust-anchors>
-    </base-config>
-</network-security-config>
-
-
-Download ISRG Root X1 from https://letsencrypt.org/certs/isrgrootx1.der, save it as res/raw/isrg_root_x1.der
-Download ISRG Root X2 from https://letsencrypt.org/certs/isrg-root-x2.der, save it as res/raw/isrg_root_x2.der
-
-",cert-manager
-"Below is the describe output for both my clusterissuer and certificate reource. I am brand new to cert-manager so not 100% sure this is set up properly - we need to use http01 validation however we are not using an nginx controller. Right now we only have 2 microservices so the public-facing IP address simply belongs to a k8s service (type loadbalancer) which routes traffic to a pod where an Extensible Service Proxy container sits in front of the container running the application code. Using this set up I haven't been able to get anything beyond the errors below, however as I mentioned I'm brand new to cert-manager & ESP so this could be configured incorrectly...
-Name:         clusterissuer-dev
-Namespace:    
-Labels:       <none>
-Annotations:  kubectl.kubernetes.io/last-applied-configuration:
-API Version:  cert-manager.io/v1beta1
-Kind:         ClusterIssuer
-Metadata:
-  Creation Timestamp:  2020-08-07T18:46:29Z
-  Generation:          1
-  Resource Version:    4550439
-  Self Link:           /apis/cert-manager.io/v1beta1/clusterissuers/clusterissuer-dev
-  UID:                 65933d87-1893-49af-b90e-172919a18534
-Spec:
-  Acme:
-    Email:  email@test.com
-    Private Key Secret Ref:
-      Name:  letsencrypt-dev
-    Server:  https://acme-staging-v02.api.letsencrypt.org/directory
-    Solvers:
-      http01:
-        Ingress:
-          Class:  nginx
-Status:
-  Acme:
-    Last Registered Email:  email@test.com
-    Uri:                    https://acme-staging-v02.api.letsencrypt.org/acme/acct/15057658
-  Conditions:
-    Last Transition Time:  2020-08-07T18:46:30Z
-    Message:               The ACME account was registered with the ACME server
-    Reason:                ACMEAccountRegistered
-    Status:                True
-    Type:                  Ready
-Events:                    <none>
-
-
-Name:         test-cert-default-ns
-Namespace:    default
-Labels:       <none>
-Annotations:  kubectl.kubernetes.io/last-applied-configuration:
-API Version:  cert-manager.io/v1beta1
-Kind:         Certificate
-Metadata:
-  Creation Timestamp:  2020-08-10T15:05:31Z
-  Generation:          2
-  Resource Version:    5961064
-  Self Link:           /apis/cert-manager.io/v1beta1/namespaces/default/certificates/test-cert-default-ns
-  UID:                 259f62e0-b272-47d6-b70e-dbcb7b4ed21b
-Spec:
-  Dns Names:
-    dev.test.com
-  Issuer Ref:
-    Name:       clusterissuer-dev
-  Secret Name:  clusterissuer-dev-tls
-Status:
-  Conditions:
-    Last Transition Time:        2020-08-10T15:05:31Z
-    Message:                     Issuing certificate as Secret does not exist
-    Reason:                      DoesNotExist
-    Status:                      False
-    Type:                        Ready
-    Last Transition Time:        2020-08-10T15:05:31Z
-    Message:                     Issuing certificate as Secret does not exist
-    Reason:                      DoesNotExist
-    Status:                      True
-    Type:                        Issuing
-  Next Private Key Secret Name:  test-cert-default-ns-rrl7j
-Events:
-  Type    Reason     Age    From          Message
-  ----    ------     ----   ----          -------
-  Normal  Requested  2m51s  cert-manager  Created new CertificateRequest resource ""test-cert-default-ns-c4wxd""
-
-One last item - if I run the command kubectl get certificate -o wide I get the following output.
-  NAME                           READY   SECRET                         ISSUER                     STATUS                                         AGE
-  test-cert-default-ns           False   clusterissuer-dev-tls          clusterissuer-dev          Issuing certificate as Secret does not exist   2d23h
-
-","1. I had the same issue and I followed the advice given in the comments by @Popopame suggesting to check out the troubleshooting guide of cert-manager to find out how to troubleshoot cert-manager. or [cert-managers troubleshooting guide for acme issues] to find out which part of the acme process breaks the setup.
-It seems that often it is the acme-challenge where letsencrypt verifies the domain ownership by requesting a certain code be offered at port 80 at a certain path. For example: http://example.com/.well-known/acme-challenge/M8iYs4tG6gM-B8NHuraXRL31oRtcE4MtUxRFuH8qJmY. Notice the http:// that shows letsencrypt will try to validate domain ownership on port 80 of your desired domain.
-So one of the common errors is, that cert-manager could not put the correct challenge in the correct path behind port 80. For example due to a firewall blocking port 80 on a bare metal server or a loadbalancer that only forwards port 443 to the kubernetes cluster and redirects to 443 directly.
-Also be aware of the fact, that cert-manager tries to validate the ACME challenge as well so you should configure the firewalls to allow requests coming from your servers as well.
-If you have trouble getting your certificate to a different namespace, this would be a good point to start with.
-In your specific case I would guess at a problem with the ACME challenge, as the CSR (Certificate Signing Request) was created, as indicated in the bottom-most describe line, but nothing else happened.
-
-2. 1. Setup Using Helm
-By far the easiest method I've found was to use helm v3 to install cert-manager. I was able to set it up on a k3s cluster as follows:
-$   helm repo add jetstack https://charts.jetstack.io
-$   helm repo update
-$   helm install \
-        cert-manager jetstack/cert-manager \
-        --namespace cert-manager \
-        --version v1.2.0 \
-        --create-namespace \
-        --set installCRDs=true
-
-2. Setup ClusterIssuer
-Once it's installed you need to create a ClusterIssuer which can then be used when requesting certificates from let's encrypt.
-$ more cert-clusterissuer.yaml
-apiVersion: cert-manager.io/v1
-kind: ClusterIssuer
-metadata:
-  name: letsencrypt-stg
-spec:
-  acme:
-    email: my_letsencrypt_email@mydom.com
-    server: https://acme-staging-v02.api.letsencrypt.org/directory
-    privateKeySecretRef:
-      # Secret resource that will be used to store the account's private key.
-      name: le-issuer-acct-key
-    solvers:
-    - dns01:
-        cloudflare:
-          email: my_cloudflare_email@mydom.com
-          apiTokenSecretRef:
-            name: cloudflare-api-token-secret
-            key: api-token
-      selector:
-        dnsZones:
-        - 'mydomdom.org'
-        - '*.mydomdom.org'
-
-Deploy that, notice it'll get deployed into the same namespaces as cert-manager:
-$ kubectl apply -f cert-clusterissuer.yaml
-
-$ kubectl get clusterissuers
-NAME              READY   AGE
-letsencrypt-stg   True    53m
-
-3. Setup Cloudflare API Token Secret
-Deploy your Cloudflare API token into a secret and put it into the cert-manager namespace:
-$ more cloudflare-api-token.yaml
-apiVersion: v1
-kind: Secret
-metadata:
-  name: cloudflare-api-token-secret
-  namespace: cert-manager
-type: Opaque
-stringData:
-  api-token: <my cloudflare api token key>
-
-$ kubectl apply -f cloudflare-api-token.yaml
-
-4. Create a test Certificate
-Now attempt to request the generation of a certificate from let's encrypt:
-$ more test-certificate.yaml
-apiVersion: cert-manager.io/v1
-kind: Certificate
-metadata:
-  name: le-test-mydomdom-org
-  namespace: cert-manager
-spec:
-  secretName: le-test-mydomdom-org
-  issuerRef:
-    name: letsencrypt-stg
-    kind: ClusterIssuer
-  commonName: 'le-test.mydomdom.org'
-  dnsNames:
-  - ""le-test.mydomdom.org""
-
-$ kubectl -n cert-manager apply -f test-certificate.yaml
-
-5. Debugging Certificate Creation
-You can then watch the request as it flows through the various stages. I believe the flow is certificates -> certificaterequests -> orders -> challenges.
-NOTE: Knowing this general flow was hugely helpful for me in terms of understanding where a request was failing within kubernetes as I was attempting to debug it.
-When debugging you'll typically want to do kubectl get -n cert-manager <stage> -A to see a list of all the outstanding resources within that stage. Keep in mind that after a challenge is fulfilled it'll no longer show up within the output of kubectl get -n cert-manager challenges.
-Also keep in mind that any DNS entries created to fulfill the challenge stage will typically have their TTL set to ~2min so if you go looking in your Cloudflare UI and do not see them, they likely already timed out and rolled off.
-
-References
-
-https://cert-manager.io/docs/configuration/acme/
-Installing and using cert-manager with k3s
-Install and Setup Cert-Manager for Automated SSL Certificates
-Make SSL certs easy with k3s
-Installing cert-manager on Kubernetes with CloudFlare DNS - Update
-
-
-3. I had this problem on DigitalOcean; for me, disabling proxy protocol and TLS passthrough fixed the problem.
-These configs should be commented out on the ingress-nginx service:
-# Enable proxy protocol
-service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: ""true""
-# Specify whether the DigitalOcean Load Balancer should pass encrypted data to backend droplets
-service.beta.kubernetes.io/do-loadbalancer-tls-passthrough: ""true""
-
-",cert-manager
-"I've been trying for a couple of days now to get my AKS to issue a certificate request to Cloudflare via its API key. From what I can see, the API key has all the right permissions, but the certificate can't seem to finish its round-robin and complete.
-I've tried various things and different versions of cert-manager, but nothing seems to work.
-It just comes back with the error: Issuing certificate as Secret does not exist
-I've also followed these links to try and resolve this issue:
-https://cert-manager.io/docs/troubleshooting/
-Issuing certificate as Secret does not exist
-Basically no matter what I do I am left with this pending state:
-
-
-  Normal  OrderCreated     <invalid>  cert-manager  Created Order resource ingress-nginx/jc-aks-testing-cert-5z48g-3941378753
-  Normal  cert-manager.io  <invalid>  cert-manager  Certificate request has been approved by cert-manager.io
-  Normal  OrderPending     <invalid>  cert-manager  Waiting on certificate issuance from order ingress-nginx/jc-aks-testing-cert-5z48g-3941378753: """"
-
-
-
-Here is my long script to make all this:
-
-
-#!/bin/bash
-rg=""jc-testing5-aks-rg""
-location=""francecentral""
-cluster=""jc-aks-testing5-cluster""
-keyvaultname=""jc-aks-testing5-kv""
-
-## Create RG
-echo ""Creating Resource Group $rg""
-az group create --name $rg --location $location
-
-## Create AKS Cluster
-echo ""Creating AKS Cluster $cluster""
-
-az aks create -g $rg -n $cluster --load-balancer-managed-outbound-ip-count 1 --enable-managed-identity --node-vm-size Standard_B2s --node-count 1 --generate-ssh-keys
-
-## Create KeyVault
-echo ""Creating KeyVault $keyvaultname""
-az keyvault create --resource-group $rg --name $keyvaultname
-
-## Connect to Cluster
-echo ""Connecting to AKS Cluster.""
-az aks get-credentials --resource-group $rg --name $cluster --overwrite-existing
-
-## Install Nginx
-echo ""Installing Nginx into the cluster""
-helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
-helm repo update
-
-helm install nginx-ingress ingress-nginx/ingress-nginx \
-    --namespace ingress-nginx --create-namespace\
-    --set controller.replicaCount=2 \
-    --set controller.nodeSelector.""kubernetes\.io/os""=linux \
-    --set controller.admissionWebhooks.patch.nodeSelector.""kubernetes\.io/os""=linux \
-    --set defaultBackend.nodeSelector.""kubernetes\.io/os""=linux
-
-#CERT_MANAGER_TAG=v1.3.1
-CERT_MANAGER_TAG=v1.13.6
-
-# Label the ingress-basic namespace to disable resource validation
-kubectl label namespace ingress-nginx cert-manager.io/disable-validation=true
-
-# Add the Jetstack Helm repository in preparation to install Cert-Manager
-echo ""Installing Cert-Manager""
-helm repo add jetstack https://charts.jetstack.io --force-update
-
-# Update your local Helm chart repository cache
-helm repo update
-
-# Install the cert-manager Helm chart
-helm install cert-manager jetstack/cert-manager \
-  --namespace ingress-nginx \
-  --version $CERT_MANAGER_TAG \
-  --set installCRDs=true \
-  --set nodeSelector.""kubernetes\.io/os""=linux
-
-## Create a Cert-Cluster Issuer.
-echo ""Creating Certmanger Cluster Issuer for ArgoCD""
-
-cat << EOF | kubectl apply -f - 
-apiVersion: v1
-kind: Secret
-metadata:
-  name: cloudflare-api-key-secret
-  namespace: ingress-nginx
-type: Opaque
-Data:
-  api-key: MYVALUE
-EOF
-
-cat << EOF | kubectl apply -f -
-apiVersion: cert-manager.io/v1
-kind: Issuer
-metadata:
-  name: letsencrypt
-  namespace: ingress-nginx
-spec:
-  acme:
-    server: https://acme-v02.api.letsencrypt.org/directory
-    email: jason@mydomain.com
-    privateKeySecretRef:
-      name: letsencrypt
-    solvers:
-    solvers:
-    - dns01:
-        cloudflare:
-          apiKeySecretRef:
-            key: api-key
-            name: cloudflare-api-key-secret
-          email: jason@mydomain.com
-EOF
-
-
-cat << EOF | kubectl apply -f -
-apiVersion: cert-manager.io/v1
-kind: Certificate
-metadata:
-  name: jc-aks-testing-cert
-  namespace: ingress-nginx
-spec:
-  secretName: mydomain.com-tls
-  issuerRef:
-    name: letsencrypt
-
-  duration: 2160h # 90d
-  renewBefore: 720h # 30d before SSL will expire, renew it
-  dnsNames:
-    - ""mydomain.com""
-    - ""mydomain.com""
-EOF
-
-## Install Argo CD
-echo ""Installing Argo CD""
-kubectl create namespace argocd
-
-kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
-
-## Configure Argo CD to Look at Custom Domain
-echo ""Configuring Argo CD to Look at Custom Domain""
-cat << EOF | kubectl apply -f -
-apiVersion: networking.k8s.io/v1
-kind: Ingress
-metadata:
-  name: argocd-server-ingress
-  namespace: argocd
-  annotations:
-    cert-manager.io/issuer: letsencrypt
-    kubernetes.io/ingress.class: nginx
-    kubernetes.io/tls-acme: ""true""
-    nginx.ingress.kubernetes.io/ssl-passthrough: ""true""
-    # If you encounter a redirect loop or are getting a 307 response code
-    # then you need to force the nginx ingress to connect to the backend using HTTPS.
-    #
-    nginx.ingress.kubernetes.io/backend-protocol: ""HTTPS""
-spec:
-  rules:
-  - host: mydomain.com
-    http:
-      paths:
-      - path: /
-        pathType: Prefix
-        backend:
-          service:
-            name: argocd-server
-            port:
-              name: https
-  tls:
-  - hosts:
-    - mydomain.com
-    secretName: argocd-secret # do not change, this is provided by Argo CD
-EOF
-
-
-## Get the Password for Argo CD Login
-echo ""Getting the Password to login into Argo-CD""
-argo_cd_pass=$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath=""{.data.password}"" | base64 -d)
-echo ""$argo_cd_pass""
-
-
-
-","1. The way I got this to work in the end was from these three sources:
-
-https://community.cloudflare.com/t/unable-to-update-ddns-using-api-for-some-tlds/167228/71
-https://blog.cloudflare.com/automated-origin-ca-for-kubernetes
-https://tjtharrison.medium.com/deploying-ingress-in-kubernetes-with-cert-manager-letsencrypt-and-cloudflare-a016735446b2
-
-The first item led me down a rabbit hole of figuring out whether something was wrong with the API key. I believe Cloudflare doesn't want you to use the Global API key or the General API key.
-I then found out from another blog post (which I don't have to hand) that the rights for the API key need to be Zone > Zone > Read, Zone > DNS > Edit and Zone > DNS > Read. I can't for the life of me figure out why, but if you have just Zone > DNS > Edit, the API key cannot see the DNS records. That was a weird issue.
-Once the API key was set correctly, I followed the Cloudflare blog post to install the origin-ca-issuer into the cluster. I found I had to do this to get an issuer communicating correctly with Cloudflare via the API key. Without the Cloudflare Origin Issuer, you get some weird errors and the communication does not work correctly. Also note that when you install this onto your cluster, the cluster needs a minimum of three nodes.
-Once that was done, I followed the third resource on how to make an Issuer, Certificate, and Nginx load balancer. This works, but there are a couple of things to note.
-First, ensure your issuer is a ClusterIssuer and not just an Issuer. A cluster issuer works for all namespaces, but an issuer only works for the namespace you put it in. In my case, it was cert-manager, which is not great if your application is in its own namespace with an Nginx service. The whole thing will not work.
-Also, the creation of the secret is like this:
-
-
-kubectl create secret generic jasons-api-key \
-    -n cert-manager\
-    --from-literal api-token='api-key-value'
-
-
-
-Notice that before the API key value there is a key name, api-token, in the --from-literal argument. This is important because in your issuer code, under apiTokenSecretRef:, the key: value has to match that name.
-So your cluster issuer code will look as follows:
-
-
-cat  << EOF | kubectl apply -f -
-apiVersion: cert-manager.io/v1
-kind: ClusterIssuer
-metadata:
-  name: lets-encrypt-jasons-cert
-  namespace: cert-manager
-spec:
-  acme:
-    email: <email address>
-    server: https://acme-v02.api.letsencrypt.org/directory
-    privateKeySecretRef:
-      # Secret resource that will be used to store the account's private key.
-      name: lets-encrypt-jasons-cert
-    solvers:
-    - dns01:
-        cloudflare:
-          email: <email address>
-          apiTokenSecretRef:
-            name: jasons-api-key
-            key: api-token
-      selector:
-        dnsZones:
-        - <central domain name of the account, so the top-level domain>
-EOF
-
-
-
-Also note that under solvers - dns01:, dnsZones: needs to be the domain name (for example, example.com); it is NOT the DNS record that you are pointing Cloudflare to. This caught me out for a while.
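-As a final hedged sketch (names taken from the snippets above), the Ingress should then reference the ClusterIssuer with the cluster-issuer annotation instead of the issuer one, leaving the rest of the Ingress as it was:
-metadata:
-  annotations:
-    cert-manager.io/cluster-issuer: lets-encrypt-jasons-cert
-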
-I hope this helps someone that might come across this issue.
-",cert-manager
-"I'm currently setting up an Ingress in Kubernetes to work with an AWS ALB, and I need to manage TLS certificates via AWS Certificate Manager (ACM). I had to create the ACM certificate manually to make the ingress work, but I'm looking for a way to automate this process directly from Kubernetes.
-Here is my current Ingress configuration:
-apiVersion: networking.k8s.io/v1
-kind: Ingress
-metadata:
-  namespace: game-2048
-  name: ingress-2048
-  annotations:
-    alb.ingress.kubernetes.io/scheme: internet-facing
-    alb.ingress.kubernetes.io/target-type: ip
-    alb.ingress.kubernetes.io/listen-ports: '[{""HTTPS"":443}]'
-    alb.ingress.kubernetes.io/group.name: ""application-shared-lb""
-spec:
-  tls:
-  - hosts:
-    - snpv.cclab.cloud-castles.com
-    secretName: game-2048-tls
-  ingressClassName: alb
-  rules:
-    - host: snpv.cclab.cloud-castles.com
-      http:
-        paths:
-        - path: /
-          pathType: Prefix
-          backend:
-            service:
-              name: service-2048
-              port:
-                number: 80
-
-
-I've found this documentation about AWSPCAClusterIssuer, which creates a private ACM certificate, but it only terminates TLS when the ingress class is nginx and doesn't suit my needs.
-Is there a recommended way or existing tool to automate ACM certificate provisioning and integration with Kubernetes, especially for scenarios like mine where the Ingress needs to interface directly with AWS resources?
-","1. ACM public ssl certificate creation from EKS feature-request is being tracked here https://github.com/aws-controllers-k8s/community/issues/482
-AWS Loadbalancer Controller can automatically discover certificate from the host https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.5/guide/ingress/cert_discovery/#discover-via-ingress-rule-host
-
-below attaches a cert for dev.example.com or *.example.com to the ALB
-apiVersion: networking.k8s.io/v1 
-kind: Ingress 
-metadata:
-  namespace: default 
-  name: ingress 
-  annotations:
-    alb.ingress.kubernetes.io/listen-ports: '[{""HTTPS"":443}]'
-spec:  
-  ingressClassName: alb   
-  rules:
-  - host: dev.example.com
-    http:
-      paths:
-      - path: /users
-        pathType: Prefix
-        backend:
-          service:
-            name: user-service
-            port:
-              number: 80 
-
-
-",cert-manager
-"I want to make some playbooks for checkpoint; My question is: for checkpoint is there a specific connection string from ansible?
-`Procedure to generate database backup in Security Management Server:
-$MDS_FWDIR/scripts/migrate_server import/export -v R81.10 -skip_upgrade_tools_check /path_to_file/export.tgz`
-Regards;
-I would like to be able to do this without modules, since I use an offline installation.
-","1. You can use match,search or regex to match strings against a substring.
-Read more about this in official docs testing strings
-Or if you need specific package(Nginx example) then
-when: nginxVersion.stdout != 'nginx version: nginx/1.2.6'
-
-will be true when the installed Nginx is not version 1.2.6, so you can use it as the condition for a task that installs 1.2.6.
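-For completeness, a minimal hedged sketch that ties this back to the question: run the migrate_server export from the question as a raw shell task (no Check Point-specific modules) and gate on its output with a search test. It assumes $MDS_FWDIR is set in the remote shell environment, and the target path is just an example:
-- name: Export the management database
-  ansible.builtin.shell: >
-    $MDS_FWDIR/scripts/migrate_server export -v R81.10
-    -skip_upgrade_tools_check /var/tmp/export.tgz
-  register: export_result
-  failed_when: export_result.rc != 0 or export_result.stdout is search('[Ee]rror')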
-",Check Point
-"# I'm running this python code in my pc windows 10 on PyCharm 2024 version in Virtual enviroment-----: -
-`import os
-import numpy as np
-import librosa
-import soundfile as sf
-import tensorflow as tf
-from tensorflow.keras.models import Model, load_model
-from tensorflow.keras.layers import Input, Dense, Masking, LSTM, TimeDistributed
-from tensorflow.keras.callbacks import ModelCheckpoint
-
-def extract_features(file_path, sample_rate=22050, duration=4):
-    try:
-        audio, _ = librosa.load(file_path, sr=sample_rate, duration=duration, mono=True)
-        mel_spec = librosa.feature.melspectrogram(y=audio, sr=sample_rate)
-        mel_spec = librosa.power_to_db(mel_spec, ref=np.max)
-        return mel_spec
-    except Exception as e:
-        print(f""Error processing {file_path}: {e}"")
-        return None
-
-def load_dataset(directory, sample_rate=22050, duration=4):
-    audio_extensions = ('.wav', '.mp3', '.flac', '.aac', '.m4a', '.ogg')
-    features = []
-    for root, _, files in os.walk(directory):
-        for filename in files:
-            if filename.lower().endswith(audio_extensions):
-                file_path = os.path.join(root, filename)
-                print(f""Processing file: {file_path}"")
-                mel_spec = extract_features(file_path, sample_rate, duration)
-                if mel_spec is not None:
-                    features.append(mel_spec)
-                else:
-                    print(f""Failed to extract features from {file_path}"")
-    if len(features) == 0:
-        print(""No valid audio files found in the directory."")
-    return features
-
-def pad_sequences(sequences, maxlen=None):
-    if maxlen is None:
-        maxlen = max(seq.shape[1] for seq in sequences)
-    padded_sequences = []
-    for seq in sequences:
-        if seq.shape[1] < maxlen:
-            pad_width = maxlen - seq.shape[1]
-            padded_seq = np.pad(seq, ((0, 0), (0, pad_width)), mode='constant')
-        else:
-            padded_seq = seq[:, :maxlen]
-        padded_sequences.append(padded_seq)
-    return np.array(padded_sequences)
-
-def create_sequence_autoencoder(input_shape):
-    input_layer = Input(shape=input_shape)
-    masked = Masking(mask_value=0.0)(input_layer)
-    encoded = LSTM(128, activation='relu', return_sequences=True)(masked)
-    encoded = LSTM(64, activation='relu', return_sequences=False)(encoded)
-    repeated = tf.keras.layers.RepeatVector(input_shape[0])(encoded)
-    decoded = LSTM(64, activation='relu', return_sequences=True)(repeated)
-    decoded = LSTM(128, activation='relu', return_sequences=True)(decoded)
-    decoded = TimeDistributed(Dense(input_shape[1], activation='sigmoid'))(decoded)
-    autoencoder = Model(input_layer, decoded)
-    autoencoder.compile(optimizer='adam', loss='mean_squared_error')
-    return autoencoder
-
-# Training the model
-qari_dataset_directory = r""E:\quran\Hindi\Hindi_Translation_Splitter\pythonProject1\pythonProject1\qari_voice\qari-dataset""  # Adjust the path as needed
-X = load_dataset(qari_dataset_directory)
-
-print(""Loaded dataset shape:"", [x.shape for x in X])
-
-if len(X) > 0:
-    max_length = max(x.shape[1] for x in X)
-    X_padded = pad_sequences(X, maxlen=max_length)
-    input_shape = (X_padded.shape[1], X_padded.shape[2])
-    autoencoder = create_sequence_autoencoder(input_shape)
-
-    # Save the best model
-    while True:
-        try:
-            checkpoint_path = input(""Enter the path to save the model checkpoint (e.g., qari_autoencoder.keras): "")
-            checkpoint = ModelCheckpoint(checkpoint_path, monitor='val_loss', save_best_only=True, mode='min')
-            autoencoder.fit(X_padded, X_padded, epochs=10, batch_size=16, validation_split=0.2, callbacks=[checkpoint])
-            if os.path.exists(checkpoint_path):
-                print(f""Model checkpoint saved at: {checkpoint_path}"")
-                break
-            else:
-                raise Exception(""Checkpoint not saved."")
-        except Exception as e:
-            print(f""Failed to save the model checkpoint at: {checkpoint_path}, error: {e}"")
-
-# Load the trained model
-if os.path.exists(checkpoint_path):
-    autoencoder = load_model(checkpoint_path)
-    print(""Model loaded successfully."")
-else:
-    print(f""Model checkpoint not found at: {checkpoint_path}"")
-    exit(1)
-
-def preprocess_audio(file_path, sample_rate=22050, duration=4):
-    mel_spec = extract_features(file_path, sample_rate, duration)
-    if mel_spec is None:
-        raise ValueError(f""Failed to extract features from {file_path}"")
-    return mel_spec
-
-def pad_and_reshape(mel_spec, max_length):
-    if mel_spec.shape[1] < max_length:
-        pad_width = max_length - mel_spec.shape[1]
-        mel_spec_padded = np.pad(mel_spec, ((0, 0), (0, pad_width)), mode='constant')
-    else:
-        mel_spec_padded = mel_spec[:, :max_length]
-    return np.expand_dims(mel_spec_padded, axis=0)  # Reshape to match model input shape
-
-# Example file to process
-audio_file_path = r""E:\quran\Hindi\Hindi_Translation_Splitter\Output\114 MSTR.wav""
-
-# Preprocess the audio
-mel_spec = preprocess_audio(audio_file_path)
-max_length = autoencoder.input_shape[1]
-mel_spec_padded = pad_and_reshape(mel_spec, max_length)
-
-# Predict using the autoencoder
-output = autoencoder.predict(mel_spec_padded)
-
-# Reshape and convert the output back to the original shape
-output_mel_spec = output[0]
-
-# Convert mel spectrogram back to audio
-def mel_spec_to_audio(mel_spec, sample_rate=22050):
-    mel_spec = librosa.db_to_power(mel_spec)
-    audio = librosa.feature.inverse.mel_to_audio(mel_spec, sr=sample_rate)
-    return audio
-
-# Convert the output mel spectrogram back to audio
-audio_without_qari_voice = mel_spec_to_audio(output_mel_spec)
-
-# Save the audio without Qari voice
-output_audio_path = r""E:\quran\Hindi\Hindi_Translation_Splitter\Output\Without_qari_output.wav""
-os.makedirs(os.path.dirname(output_audio_path), exist_ok=True)
-sf.write(output_audio_path, audio_without_qari_voice, 22050)
-print(f""Processed audio saved at: {output_audio_path}"")`
-
-My model is not saving after training; what can I do? Please guide me. After completing 10 epochs this code should save the model, but it doesn't, even after an hour of training. I thought about using Google Colab instead, but I have a large dataset of more than 20,000 files (about 5 GB), so I can't upload it to Colab.
-
-I tried to solve this issue with ChatGPT-4o.
-
-","1. The Problem :-
-Actually I have too much of Data And the code I wrote Is not compatible with that data due to which model was not saving because in Training session I was seeing NaN value which means model is not training correctly . To solve This issue I used For loop to train The model ...................................
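-A minimal sketch of that idea, assuming the autoencoder, X, max_length and pad_sequences from the question (the chunk size and file name here are arbitrary): train chunk by chunk, stop a chunk as soon as the loss becomes NaN, and checkpoint after each one:
-callbacks = [
-    tf.keras.callbacks.TerminateOnNaN(),  # abort the current fit() as soon as the loss becomes NaN
-    ModelCheckpoint(""qari_autoencoder.keras"", monitor=""val_loss"", save_best_only=True, mode=""min""),
-]
-chunk_size = 500  # hypothetical number of clips per training chunk
-for start in range(0, len(X), chunk_size):
-    chunk = pad_sequences(X[start:start + chunk_size], maxlen=max_length)
-    autoencoder.fit(chunk, chunk, epochs=10, batch_size=16,
-                    validation_split=0.2, callbacks=callbacks)
-autoencoder.save(""qari_autoencoder.keras"")  # explicit final save as well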
-",Check Point
-"I deploy my flink tasks based on flink-kubernetes-operator. At the same time, I set up a checkpoint, where the checkpoint directory is a mounted pvc. StateBackend uses RocksDB and is configured with incremental checkpoints. But my program will encounter some problems
-
-When restarting the service, some sub tasks will always be in the DEPLOYING or INITIALIZING state, as if blocked, and will no longer continue to run, while other sub tasks will be in the RUNNING state. But sometimes the task can be deleted again and it can run normally again after redeployment.
-
-The incremental checkpoint size feels like it is continuing to grow, but I have set TTL for the custom state, and other functions are ReduceFunction. How should I check where there is a possibility of state leakage?
-
-Sometimes when a task is running, there will be a situation similar to blocking, and consumption will no longer continue. At the same time, checkpoint continues to fail. Is there any good troubleshooting method for me?
-
-flink version:1.14.4
-flink-kubernetes-operator version: release-1.4.0, link: https://github.com/apache/flink-kubernetes-operator commit:7fc23a1;
-I don't know how to debug this. When my job is blocked, I use arthas / jstack to look at the thread stacks and find blocked threads, but the result of each blockage is different. Sometimes it looks like this:
-[arthas@1]$ thread -b 
-""Window(TumblingEventTimeWindows(60000), EventTimeTrigger, CommonWindowReduceFunction, PassThroughWindowFunction) -> Flat Map (22/72)#0"" Id=157 BLOCKED on java.util.jar.JarFile@449cf4a0 owned by ""Window(TumblingEventTimeWindows(60000), EventTimeTrigger, AppGroupTransactionEventApplyFunction) -> (Sink: Unnamed, Timestamps/Watermarks -> Flat Map) (31/32)#0"" Id=151
-    at java.util.zip.ZipFile$ZipFileInputStream.read(ZipFile.java:719)
-    -  blocked on java.util.jar.JarFile@449cf4a0
-    at java.util.zip.ZipFile$ZipFileInflaterInputStream.fill(ZipFile.java:434)
-    at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:158)
-    at sun.misc.Resource.getBytes(Resource.java:124)
-    at java.net.URLClassLoader.defineClass(URLClassLoader.java:463)
-    at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
-    at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
-    at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
-    at java.security.AccessController.doPrivileged(Native Method)
-    at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
-    at org.apache.flink.util.ChildFirstClassLoader.loadClassWithoutExceptionHandling(ChildFirstClassLoader.java:71)
-    at org.apache.flink.util.FlinkUserCodeClassLoader.loadClass(FlinkUserCodeClassLoader.java:48)
-    -  locked org.apache.flink.util.ChildFirstClassLoader@697c9014 <---- but blocks 93 other threads!
-    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
-    at org.apache.flink.runtime.execution.librarycache.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.loadClass(FlinkUserCodeClassLoaders.java:172)
-    at java.lang.Class.forName0(Native Method)
-    at java.lang.Class.forName(Class.java:348)
-    at org.apache.flink.util.InstantiationUtil$ClassLoaderObjectInputStream.resolveClass(InstantiationUtil.java:78)
-    at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1868)
-    at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1751)
-    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2042)
-    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
-    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
-    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
-    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
-    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
-    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
-    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
-    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
-    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
-    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
-    at java.util.ArrayList.readObject(ArrayList.java:797)
-    at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
-    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
-    at java.lang.reflect.Method.invoke(Method.java:498)
-    at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1170)
-    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2178)
-    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
-    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
-    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
-    at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:617)
-    at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:602)
-    at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:589)
-    at org.apache.flink.util.InstantiationUtil.readObjectFromConfig(InstantiationUtil.java:543)
-    at org.apache.flink.streaming.api.graph.StreamConfig.getOutEdgesInOrder(StreamConfig.java:485)
-    at org.apache.flink.streaming.runtime.tasks.StreamTask.createRecordWriters(StreamTask.java:1612)
-    at org.apache.flink.streaming.runtime.tasks.StreamTask.createRecordWriterDelegate(StreamTask.java:1596)
-    at org.apache.flink.streaming.runtime.tasks.StreamTask.<init>(StreamTask.java:376)
-    at org.apache.flink.streaming.runtime.tasks.StreamTask.<init>(StreamTask.java:359)
-    at org.apache.flink.streaming.runtime.tasks.StreamTask.<init>(StreamTask.java:332)
-    at org.apache.flink.streaming.runtime.tasks.StreamTask.<init>(StreamTask.java:324)
-    at org.apache.flink.streaming.runtime.tasks.StreamTask.<init>(StreamTask.java:314)
-    at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.<init>(OneInputStreamTask.java:75)
-    at sun.reflect.GeneratedConstructorAccessor37.newInstance(Unknown Source)
-    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
-    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
-    at org.apache.flink.runtime.taskmanager.Task.loadAndInstantiateInvokable(Task.java:1582)
-    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:740)
-    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:575)
-    at java.lang.Thread.run(Thread.java:748)
-
-but sometimes it is different:
-""System Time Trigger for Window(TumblingEventTimeWindows(60000), EventTimeTrigger, CommonWindowReduceFunction, PassThroughWindowFunction) (32/48)#0"" Id=232 TIMED_WAITING on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@783b47ad
-    at sun.misc.Unsafe.park(Native Method)
-    -  waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@783b47ad
-    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
-    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
-    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
-    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
-    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
-    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
-    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
-    ...
-
-
-""KeyedProcess -> Sink: CCLOUD_APP_SPAN_PORTRAIT_DETECT (24/32)#0"" Id=134 BLOCKED on org.apache.flink.util.ChildFirstClassLoader@53624f6b owned by ""Window(TumblingEventTimeWindows(60000), EventTimeTrigger, CommonWindowReduceFunction, PassThroughWindowFunction) (32/48)#0"" Id=92
-    at java.lang.Class.getDeclaredFields0(Native Method)
-    -  blocked on org.apache.flink.util.ChildFirstClassLoader@53624f6b
-    at java.lang.Class.privateGetDeclaredFields(Class.java:2583)
-    at java.lang.Class.getDeclaredField(Class.java:2068)
-    at java.io.ObjectStreamClass.getDeclaredSUID(ObjectStreamClass.java:1857)
-    at java.io.ObjectStreamClass.access$700(ObjectStreamClass.java:79)
-    at java.io.ObjectStreamClass$3.run(ObjectStreamClass.java:506)
-    at java.io.ObjectStreamClass$3.run(ObjectStreamClass.java:494)
-    at java.security.AccessController.doPrivileged(Native Method)
-
-","1. To first problem:
-If you state is very big, much bigger than you managed memory size, maybe far greater then 4GB. When you restart you job from savepoint/checkpoint , Flink will consume many disk I/O.
-If you found disk IOPS is reaching the limit , many be you can try a upgrade for you disk can resolve this problem for now, but eventually you need to lower you state size.
-To last log:
-If you find a TM thread dump full with many TIMED_WAITING for AbstractQueuedSynchronizer, may be you encountered some bug Similar to: https://issues.apache.org/jira/browse/FLINK-14872
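-On reducing the state size: since the question mentions state TTL, one thing worth checking is whether expired entries are actually being cleaned out of RocksDB, otherwise the incremental checkpoints keep growing. A hedged sketch (descriptor name and TTL value are made up) that tunes the RocksDB compaction-filter cleanup:
-import org.apache.flink.api.common.state.StateTtlConfig;
-import org.apache.flink.api.common.state.ValueStateDescriptor;
-import org.apache.flink.api.common.time.Time;
-
-StateTtlConfig ttlConfig = StateTtlConfig
-        .newBuilder(Time.hours(24))
-        .cleanupInRocksdbCompactFilter(1000) // re-check expiry after every 1000 processed state entries
-        .build();
-
-ValueStateDescriptor<Long> descriptor = new ValueStateDescriptor<>(""lastSeen"", Long.class);
-descriptor.enableTimeToLive(ttlConfig);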
-",Check Point
-"I am getting the 3 below Chokov's failed tests:
-Check: CKV_GCP_109: ""Ensure the GCP PostgreSQL database log levels are set to ERROR or lower""
-    FAILED for resource: google_sql_database_instance.cloud_sql
-    File: /cloud_sql.tf:1-74
-    Guide: https://docs.prismacloud.io/en/enterprise-edition/policy-reference/google-cloud-policies/logging-policies-1/bc-google-cloud-109
-
-        Code lines for this resource are too many. Please use IDE of your choice to review the file.
-Check: CKV_GCP_110: ""Ensure pgAudit is enabled for your GCP PostgreSQL database""
-    FAILED for resource: google_sql_database_instance.cloud_sql
-    File: /cloud_sql.tf:1-74
-    Guide: https://docs.prismacloud.io/en/enterprise-edition/policy-reference/google-cloud-policies/logging-policies-1/bc-google-cloud-110
-
-        Code lines for this resource are too many. Please use IDE of your choice to review the file.
-Check: CKV_GCP_55: ""Ensure PostgreSQL database 'log_min_messages' flag is set to a valid value""
-    FAILED for resource: google_sql_database_instance.cloud_sql
-    File: /cloud_sql.tf:1-74
-    Guide: https://docs.prismacloud.io/en/enterprise-edition/policy-reference/google-cloud-policies/cloud-sql-policies/bc-gcp-sql-6
-
-
-I followed the Palo Alto links and changed my code accordingly:
-resource ""google_sql_database_instance"" ""cloud_sql"" {
-  name             = ""cloud-sql""
-  database_version = ""POSTGRES_15""
-  region           = var.region
-  project          = var.project_id
-
-  settings {
-    tier = ""db-f1-micro""
-
-    backup_configuration {
-      enabled = true
-    }
-    ip_configuration {
-      ipv4_enabled = false
-      require_ssl     = false
-      private_network = ""projects/${var.project_id}/global/networks/${var.network}""
-    }
-    database_flags {
-      name  = ""log_statement""
-      value = ""all""
-    }
-    database_flags {
-      name  = ""log_lock_waits""
-      value = ""on""
-    }
-    database_flags {
-      name  = ""log_connections""
-      value = ""on""
-    }
-    database_flags {
-      name  = ""log_checkpoints""
-      value = ""on""
-    }
-    database_flags {
-      name  = ""log_disconnections""
-      value = ""on""
-    }
-    database_flags {
-      name  = ""log_hostname""
-      value = ""on""
-    }
-    database_flags {
-      name  = ""log_min_error_statement""
-      value = ""ERROR""
-    }
-    database_flags {
-      name  = ""log_min_messages""
-      value = ""ERROR""
-    }
-#    database_flags {
-#      name  = ""log_min_messages""
-#      value = ""DEBUG5""
-#    }
-#    database_flags {
-#      name  = ""enable_pgaudit""
-#      value = ""on""
-#    }
-    database_flags {
-      name  = ""pgaudit.log""
-      value = ""'all'""
-    }
-    database_flags {
-      name  = ""log_duration""
-      value = ""on""
-    }
-  }
-  deletion_protection = false
-  depends_on          = [google_service_networking_connection.private_vpc_connection]
-}
-
-However, the checks are still failing.
-I have tried a few different things.
-For CKV_GCP_110 I tried adding:
-    database_flags {
-      name  = ""enable_pgaudit""
-      value = ""on""
-    }
-
-or removing a single quotation in value:
-    database_flags {
-      name  = ""pgaudit.log""
-      value = ""all""  // was ""'all'""
-    }
-
-For CKV_GCP_109 and CKV_GCP_55 I tried various values like ERROR or DEBUG5.
-I also tried adding:
-    database_flags {
-      name  = ""log_min_error_statement""
-      value = ""ERROR""
-    }
-
-The checks are still failing.
-","1. So to pass CKV_GCP_109 and CKV_GCP_55 both of the below flags are necessary with values in lowercase.
-   database_flags {
-      name  = ""log_min_error_statement""
-      value = ""error""
-    }
-    database_flags {
-      name  = ""log_min_messages""
-      value = ""error""
-    }
-
-For CKV_GCP_110, both of the flags below are necessary (pay attention to the quotation marks in the values):
-database_flags {
-  name  = ""enable_pgaudit""
-  value = ""on""
-}
-database_flags {
-  name  = ""pgaudit.log""
-  value = ""'all'""
-}
-
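-If it helps, you can re-run just these checks locally to confirm the fix (assuming Checkov is run from the directory containing cloud_sql.tf):
-checkov -d . --check CKV_GCP_55,CKV_GCP_109,CKV_GCP_110
-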
-References:
-https://github.com/bridgecrewio/checkov/issues/6057
-https://github.com/bridgecrewio/checkov/issues/6058
-",Checkov
-"I have following terraform code with a policy that is overly permissive for resources.. I want to check this using Checkov custom yaml policy but I don't find a way to validate the json policy document that is part of resources. Is there a way to do it ?
-  name        = ""test_policy""
-  path        = ""/""
-  description = ""My test policy""
-
-  # Terraform's ""jsonencode"" function converts a
-  # Terraform expression result to valid JSON syntax.
-  policy = jsonencode({
-    Version = ""2012-10-17""
-    Statement = [
-      {
-        Action = [
-          ""ec2:Describe*"",
-        ]
-        Effect   = ""Allow""
-        Resource = ""*""
-      },
-    ]
-  })
-}
-
-","1. Disregard this, i am able to find the policy. For anybody who stops by here looking for a solution..
-metadata:
-  name: <name>
-  id: <some_id>
-  category: ""general""
-  severity: ""high""
-  guidelines: <some guideline on how to fix it>
-scope:
-  provider: ""aws""
-definition:
-  cond_type: ""attribute""
-  resource_types:
-    - ""aws_iam_policy""
-  attribute: ""policy.Statement.Resource""
-  operator: ""not_contains""
-  value: ""*""
-
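-To run it, point Checkov at the directory holding your custom YAML policies (the directory name here is just an example):
-checkov -d . --external-checks-dir ./custom-checkov-policies
-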
-Thank you.
-",Checkov
-"Who can help to deal with Docker Static Analysis With Clair?
-I get an error when analyzing help me figure it out or tell me how to install the Docker Clair scanner correctly?
-Getting Setup
-git clone git@github.com:Charlie-belmer/Docker-security-example.git  
-
-docker-compose.yml 
-version: '2.1'
-
-services:
-  postgres:
-    image: postgres:12.1
-    restart: unless-stopped
-    volumes:
-      - ./docker-compose-data/postgres-data/:/var/lib/postgresql/data:rw
-    environment:
-      - POSTGRES_PASSWORD=ChangeMe
-      - POSTGRES_USER=clair
-      - POSTGRES_DB=clair
-    
-  clair:
-    image: quay.io/coreos/clair:v4.3.4
-    restart: unless-stopped
-    volumes:
-      - ./docker-compose-data/clair-config/:/config/:ro
-      - ./docker-compose-data/clair-tmp/:/tmp/:rw
-    depends_on: 
-      postgres:
-        condition: service_started
-    command: [--log-level=debug, --config, /config/config.yml]
-    user: root
-
-  clairctl:
-    image: jgsqware/clairctl:latest
-    restart: unless-stopped
-    environment: 
-      - DOCKER_API_VERSION=1.41
-    volumes:
-      - ./docker-compose-data/clairctl-reports/:/reports/:rw
-      - /var/run/docker.sock:/var/run/docker.sock:ro
-    depends_on: 
-      clair: 
-        condition: service_started
-    user: root
-
-docker-compose up
-
-The server starts without errors but gets stuck on the same message
-I don't understand what it doesn't like.
-test@parallels-virtual-platform:~/Docker-security-example/clair$ docker-compose up
-clair_postgres_1 is up-to-date
-Recreating clair_clair_1 ... done
-Recreating clair_clairctl_1 ... done
-Attaching to clair_postgres_1, clair_clair_1, clair_clairctl_1
-clair_1     | flag provided but not defined: -log-level
-clair_1     | Usage of /bin/clair:
-clair_1     |   -conf value
-clair_1     |       The file system path to Clair's config file.
-clair_1     |   -mode value
-clair_1     |       The operation mode for this server. (default combo)
-postgres_1  | 
-postgres_1  | PostgreSQL Database directory appears to contain a database; Skipping initialization
-postgres_1  | 
-postgres_1  | 2021-11-16 22:55:36.851 UTC [1] LOG:  starting PostgreSQL 12.1 (Debian 12.1-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
-postgres_1  | 2021-11-16 22:55:36.851 UTC [1] LOG:  listening on IPv4 address ""0.0.0.0"", port 5432
-postgres_1  | 2021-11-16 22:55:36.851 UTC [1] LOG:  listening on IPv6 address ""::"", port 5432
-postgres_1  | 2021-11-16 22:55:36.853 UTC [1] LOG:  listening on Unix socket ""/var/run/postgresql/.s.PGSQL.5432""
-postgres_1  | 2021-11-16 22:55:36.877 UTC [24] LOG:  database system was shut down at 2021-11-16 22:54:58 UTC
-postgres_1  | 2021-11-16 22:55:36.888 UTC [1] LOG:  database system is ready to accept connections
-postgres_1  | 2021-11-16 23:01:15.219 UTC [1] LOG:  received smart shutdown request
-postgres_1  | 2021-11-16 23:01:15.225 UTC [1] LOG:  background worker ""logical replication launcher"" (PID 30) exited with exit code 1
-postgres_1  | 
-postgres_1  | PostgreSQL Database directory appears to contain a database; Skipping initialization
-postgres_1  | 
-postgres_1  | 2021-11-16 23:02:11.993 UTC [1] LOG:  starting PostgreSQL 12.1 (Debian 12.1-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
-postgres_1  | 2021-11-16 23:02:11.994 UTC [1] LOG:  listening on IPv4 address ""0.0.0.0"", port 5432
-postgres_1  | 2021-11-16 23:02:11.994 UTC [1] LOG:  listening on IPv6 address ""::"", port 5432
-postgres_1  | 2021-11-16 23:02:11.995 UTC [1] LOG:  listening on Unix socket ""/var/run/postgresql/.s.PGSQL.5432""
-postgres_1  | 2021-11-16 23:02:12.009 UTC [26] LOG:  database system was interrupted; last known up at 2021-11-16 23:00:37 UTC
-postgres_1  | 2021-11-16 23:02:12.164 UTC [26] LOG:  database system was not properly shut down; automatic recovery in progress
-postgres_1  | 2021-11-16 23:02:12.166 UTC [26] LOG:  redo starts at 0/1745C50
-postgres_1  | 2021-11-16 23:02:12.166 UTC [26] LOG:  invalid record length at 0/1745D38: wanted 24, got 0
-postgres_1  | 2021-11-16 23:02:12.166 UTC [26] LOG:  redo done at 0/1745D00
-postgres_1  | 2021-11-16 23:02:12.180 UTC [1] LOG:  database system is ready to accept connections
-postgres_1  | 2021-11-16 23:02:12.471 UTC [33] ERROR:  duplicate key value violates unique constraint ""lock_name_key""
-postgres_1  | 2021-11-16 23:02:12.471 UTC [33] DETAIL:  Key (name)=(updater) already exists.
-postgres_1  | 2021-11-16 23:02:12.471 UTC [33] STATEMENT:  INSERT INTO Lock(name, owner, until) VALUES($1, $2, $3)
-clair_clair_1 exited with code 2
-clair_1     | flag provided but not defined: -log-level
-clair_1     | Usage of /bin/clair:
-clair_1     |   -conf value
-clair_1     |       The file system path to Clair's config file.
-clair_1     |   -mode value
-clair_1     |       The operation mode for this server. (default combo)
-clair_1     | flag provided but not defined: -log-level
-clair_1     | Usage of /bin/clair:
-clair_1     |   -conf value
-clair_1     |       The file system path to Clair's config file.
-clair_1     |   -mode value
-clair_1     |       The operation mode for this server. (default combo)
-clair_clair_1 exited with code 2
-clair_1     | flag provided but not defined: -log-level
-clair_1     | Usage of /bin/clair:
-clair_1     |   -conf value
-clair_1     |       The file system path to Clair's config file.
-clair_1     |   -mode value
-clair_1     |       The operation mode for this server. (default combo)
-clair_clair_1 exited with code 2
-clair_1     | flag provided but not defined: -log-level
-clair_1     | Usage of /bin/clair:
-clair_1     |   -conf value
-clair_1     |       The file system path to Clair's config file.
-clair_1     |   -mode value
-clair_1     |       The operation mode for this server. (default combo)
-clair_clair_1 exited with code 2
-clair_1     | flag provided but not defined: -log-level
-clair_1     | Usage of /bin/clair:
-clair_1     |   -conf value
-clair_1     |       The file system path to Clair's config file.
-clair_1     |   -mode value
-clair_1     |       The operation mode for this server. (default combo)
-clair_clair_1 exited with code 2
-clair_1     | flag provided but not defined: -log-level
-clair_1     | Usage of /bin/clair:
-clair_1     |   -conf value
-clair_1     |       The file system path to Clair's config file.
-clair_1     |   -mode value
-clair_1     |       The operation mode for this server. (default combo)
-
-installing a bad container
-docker pull imiell/bad-dockerfile
-
-docker-compose exec clairctl clairctl analyze -l imiell/bad-dockerfile
-
-
-client quit unexpectedly
-2021-11-16 23:05:19.221606 C | cmd: pushing image ""imiell/bad-dockerfile:latest"": pushing layer to clair: Post http://clair:6060/v1/layers: dial tcp: lookup clair: Try again
-
-I don't understand what it doesn't like about the analysis.
-","1. I just solved this yesterday, the 4.3.4 version of Clair only supports two command-line options, mode, and conf. Your output bears this out:
-clair_1     | flag provided but not defined: -log-level
-clair_1     | Usage of /bin/clair:
-clair_1     |   -conf value
-clair_1     |       The file system path to Clair's config file.
-clair_1     |   -mode value
-clair_1     |       The operation mode for this server. (default combo)
-
-Change the command line to only specify your configuration file (line 23 of your docker-compose.yml) and place your debug directive in the configuration file.
-command: [--conf, /config/config.yml]
-
-This should get Clair running.
-
-2. I think you are using the old clairctl with the new Clair v4. You should be using clairctl from here: https://github.com/quay/clair/releases/tag/v4.3.5.
-",Clair
-"I’m attempting to create a solana bot that has limit order to buy $MAGA using raydium. However everytime i run the code, the transaction does not go through.
-
-# Returns the swap_transaction to be manipulated in sendTransaction()
-async def create_transaction(quote: dict, input_token_mint, output_token_mint) -> dict:
-    log_transaction.info(f""""""Soltrade is creating transaction for the following quote: 
-{quote}"""""")
-
-    if 'error' in quote:
-        log_transaction.error(f""Error in quote: {quote['error']}"")
-        raise Exception(f""Error in quote: {quote['error']}"")
-
-    pool_id = get_pool_id(input_token_mint)
-    #pool_id = ""9XsGAA3xHC6gqRgThRrcaUPU6jzerZacWgfyMb17579t""
-
-    # Parameters used for the Raydium POST request
-    parameters = {
-        ""quoteResponse"": quote,
-        ""userPublicKey"": str(configs['public_address']),
-        ""wrapUnwrapSOL"": True,
-        ""computeUnitPriceMicroLamports"": 20 * 3_000_000  # fee of roughly $.4  :shrug:
-    }
-    #9XsGAA3xHC6gqRgThRrcaUPU6jzerZacWgfyMb17579t
-    # Returns the JSON parsed response of Jupiter
-    async with httpx.AsyncClient() as client:
-        response = await client.post(f""https://api.raydium.io/v2/swap?poolId={pool_id}"", json=parameters)
-        exchange_data = response.json()
-
-        pprint(f""TRANSACTION CREATE:\n{exchange_data}"")
-        return exchange_data
-
-
-async def perform_swap(sent_amount: float, price_limit, sent_token_mint: str, mode : str):
-    global position
-    log_general.info(""Soltrade is taking a limit position."")
-
-    #TODO: fetch the current price and create a limit order
-    current_price = get_price(sent_token_mint)
-
-    base_token = await get_token_decimals(sent_token_mint)
-    quote = trans = opts = txid = tx_error = None
-    is_tx_successful = False
-
-    for i in range(0,3):
-        if not is_tx_successful:
-            try:
-                if (mode == ""buy"") or (mode == ""sell""):
-                    quote = await create_exchange(sent_amount, sent_token_mint, mode)
-                    trans = await create_transaction(quote, sent_token_mint, SOL_MINT_ADDRESS)
-                    print(f""TRANS:\n{trans}"")
-                    opts = TxOpts(skip_preflight=False, preflight_commitment=""confirmed"", last_valid_block_height=find_last_valid_block_height())
-                    txid = send_transaction(trans[""swapTransaction""], opts)
-
-                    for i in range(3):
-                        await asyncio.sleep(35)
-                        tx_error = find_transaction_error(txid)
-                        if not tx_error:
-                            is_tx_successful = True
-                            break
-                else:
-                    log_general.info(f""Price hasn't reached {price_limit}. Waiting for the next opportunity."")
-                    await asyncio.sleep(60)
-                    continue
-                    #current_price = get_price(sent_token_mint)
-
-            except Exception as e:
-                if RPCException:
-                    print(traceback.format_exc())
-                    log_general.warning(f""Soltrade failed to complete transaction {i}. Retrying."")
-                    continue
-                else:
-                    raise
-            for i in range(0, 3):
-                try:
-                    await asyncio.sleep(35)
-                    tx_error = find_transaction_error(txid)
-                    if not tx_error:
-                        is_tx_successful = True
-                        break
-                except TypeError as e:
-                    print(traceback.format_exc())
-                    log_general.warning(""Soltrade failed to verify the existence of the transaction. Retrying."")
-                    continue
-        else:
-            break
-
-2024-05-27 14:19:22       Soltrade has detected a buy signal.
-2024-05-27 14:19:22       Soltrade is taking a limit position.
-Response: SOL
-2024-05-27 14:19:22       Soltrade is creating exchange for 12.103126943600001 dollars in ('', '')
-Pool ID: 8sLbNZoA1cfnvMJLPfp98ZLAnFSYCFApfJKMbiXNLwxj
-('EXCHANGE CREATED:\n'
- ""{'id': '03c2fd251bb64b3a85a3207deae7b010', 'success': False}"")
-2024-05-27 14:19:23       Soltrade is creating transaction for the following quote: 
-{'id': '03c2fd251bb64b3a85a3207deae7b010', 'success': False}
-('TRANSACTION CREATE:\n'
- ""{'id': '64218b3c90b943d0a1069e43248f406f', 'success': False}"")
-TRANS:
-{'id': '64218b3c90b943d0a1069e43248f406f', 'success': False}
-Traceback (most recent call last):
-  File ""/Users/dekahalane/soltrade-1/soltrade/transactions.py"", line 204, in perform_swap
-    txid = send_transaction(trans[""swapTransaction""], opts)
-KeyError: 'swapTransaction'
-
-2024-05-27 14:19:24       Soltrade failed to complete transaction 0. Retrying.
-
-I've tried debugging and increasing the slippage and the fees. I've looked for Solana Python documentation and couldn't find any. I think the problem could be wrong URLs.
-","1. I happened to fix this by switching to Jupiter api, and using these links :
-This in create_exchange():
-https://quote-api.jup.ag/v6/quote?inputMint={input_token_mint}&outputMint={output_token_mint}&amount={int(amount_in)}&slippageBps={config().slippage}
-
-This in create_transaction():
- https://quote-api.jup.ag/v6/swap
-
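-For anyone wanting a concrete starting point, here is a hedged sketch of those two calls with httpx (the function name and parameters are mine; amount is in the input token's base units):
-import httpx
-
-async def jupiter_swap_tx(input_mint: str, output_mint: str, amount: int,
-                          slippage_bps: int, public_key: str) -> str:
-    async with httpx.AsyncClient() as client:
-        # 1) quote
-        quote = (await client.get(
-            ""https://quote-api.jup.ag/v6/quote"",
-            params={""inputMint"": input_mint, ""outputMint"": output_mint,
-                    ""amount"": amount, ""slippageBps"": slippage_bps},
-        )).json()
-        # 2) swap transaction
-        swap = (await client.post(
-            ""https://quote-api.jup.ag/v6/swap"",
-            json={""quoteResponse"": quote, ""userPublicKey"": public_key,
-                  ""wrapAndUnwrapSol"": True},
-        )).json()
-        return swap[""swapTransaction""]  # base64 transaction to sign and send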
-",Dex
-"How can one decompile Android DEX (VM bytecode) files into corresponding Java source code?
-","1. It's easy
-Get these tools:
-
-dex2jar to translate dex files to jar files
-
-jd-gui to view the java files in the jar
-
-
-The source code is quite readable as dex2jar makes some optimizations.
-Procedure:
-And here's the procedure on how to decompile:
-Step 1:
-Convert classes.dex in test_apk-debug.apk to test_apk-debug_dex2jar.jar
-d2j-dex2jar.sh -f -o output_jar.jar apk_to_decompile.apk
-d2j-dex2jar.sh -f -o output_jar.jar dex_to_decompile.dex
-
-
-Note 1: In the Windows machines all the .sh scripts are replaced by .bat scripts
-
-
-Note 2: On linux/mac don't forget about sh or bash. The full command should be:
-
-sh d2j-dex2jar.sh -f -o output_jar.jar apk_to_decompile.apk 
-
-
-Note 3: Also, remember to add execute permission to dex2jar-X.X directory e.g. sudo chmod -R +x dex2jar-2.0
-
-dex2jar documentation
-Step 2:
-Open the jar in JD-GUI
-
-
-2. To clarify somewhat, there are two major paths you might take here depending on what you want to accomplish:
-Decompile the Dalvik bytecode (dex) into readable Java source. You can do this easily with dex2jar and jd-gui, as fred mentions. The resulting source is useful to read and understand the functionality of an app, but will likely not produce 100% usable code. In other words, you can read the source, but you can't really modify and repackage it. Note that if the source has been obfuscated with proguard, the resulting source code will be substantially more difficult to untangle.
-The other major alternative is to disassemble the bytecode to smali, an assembly language designed for precisely this purpose. I've found that the easiest way to do this is with apktool. Once you've got apktool installed, you can just point it at an apk file, and you'll get back a smali file for each class contained in the application. You can read and modify the smali or even replace classes entirely by generating smali from new Java source (to do this, you could compile your .java source to .class files with javac, then convert your .class files to .dex files with Android's dx compiler, and then use baksmali (smali disassembler) to convert the .dex to .smali files, as described in this question. There might be a shortcut here). Once you're done, you can easily package the apk back up with apktool again. Note that apktool does not sign the resulting apk, so you'll need to take care of that just like any other Android application.
-If you go the smali route, you might want to try APK Studio, an IDE that automates some of the above steps to assist you with decompiling and recompiling an apk and installing it on a device.
-In short, your choices are pretty much either to decompile into Java, which is more readable but likely irreversible, or to disassemble to smali, which is harder to read but much more flexible to make changes and repackage a modified app. Which approach you choose would depend on what you're looking to achieve. 
-Lastly, the suggestion of dare is also of note. It's a retargeting tool to convert .dex and .apk files to java .class files, so that they can be analyzed using typical java static analysis tools.
-
-3. I'd actually recommend going here:
-https://github.com/JesusFreke/smali
-It provides BAKSMALI, which is a most excellent reverse-engineering tool for DEX files.
-It's made by JesusFreke, the guy who created the fameous ROMs for Android.
-",Dex
-"I've got a class in which I do some runtime annotation scanning, but it uses the deprecated DexFile APIs which causes a warning to appear in LogCat: 
-
-W/zygote64: Opening an oat file without a class loader. Are you using the deprecated DexFile APIs?. 
-
-I'd like to get rid of this message and use the proper APIs. The docs suggest PathClassLoader, but I don't see how it is equivalent to DexFile in functionality. I can use a PathClassLoader in conjunction with a DexFile instance, and while it does work, it gives me even more warnings and takes longer to scan. I've included the annotation scanner I wrote below for the sake of clarity. If anyone can suggest how to get rid of these warning messages and an alternative to DexFile, so I don't get hit with broken functionality after it's removed, I'd be super appreciative.
-class AnnotationScanner {
-    companion object {
-        fun classesWithAnnotation(
-            context: Context,
-            annotationClass: Class<out Annotation>,
-            packageName: String? = null
-        ): Set<Class<*>> {
-
-            return Pair(context.packageCodePath, context.classLoader)
-                .letAllNotNull { packageCodePath, classLoader ->
-                    Pair(DexFile(packageCodePath), classLoader)
-                }
-                ?.letAllNotNull { dexFile, classLoader ->
-                    dexFile
-                        .entries()
-                        ?.toList()
-                        ?.filter { entry ->
-                            filterByPackageName(packageName, entry)
-                        }
-                        ?.map {
-                            dexFile.loadClass(it, classLoader)
-                        }
-                        ?.filter { aClass ->
-                            filterByAnnotation(aClass, annotationClass)
-                        }
-                        ?.toSet()
-                } ?: emptySet<Class<*>>().wlog { ""No ${annotationClass.simpleName} annotated classes found"" }
-        }
-
-        private fun filterByAnnotation(aClass: Class<*>?, annotationClass: Class<out Annotation>): Boolean {
-            return aClass
-                ?.isAnnotationPresent(annotationClass)
-                ?.also {
-                    it.ifTrue {
-                        Timber.w(""Found ${annotationClass.simpleName} on $aClass"")
-                    }
-                }
-                ?: false
-        }
-
-        private fun filterByPackageName(packageName: String?, entry: String) =
-            packageName?.let { entry.toLowerCase().startsWith(it.toLowerCase()) } ?: true
-    }
-}
-
-","1. You can say that there's nothing that replace DexFile for your case but there's another way to scan files using Annotation Processor you can search to find documentation about it
-I'll give you an example on how to get classes names
-instead of going to scan classes in the runtime you can scan in the build time and write on a java class a list of classes names and then use that generated class to get the classes names
-
-
-@SupportedAnnotationTypes(""*"")
-public class Processor extends AbstractProcessor {
-    private ProcessingEnvironment mProcessingEnvironment;
-    @Override
-    public synchronized void init(ProcessingEnvironment processingEnvironment) {
-        super.init(processingEnvironment);
-        mProcessingEnvironment = processingEnvironment;
-    }
-
-    @Override
-    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
-        Types typeUtils = mProcessingEnvironment.getTypeUtils();
-        List<String> modelsClassesNames = new ArrayList<>();
-        TypeElement oModelTypeElement = processingEnv.getElementUtils().getTypeElement(""com.insidjam.core.orm.OModel""); // Replace com.example.OModel with the package and name of your OModel class
-        mProcessingEnvironment.getMessager().printMessage(Diagnostic.Kind.NOTE, ""Generating models names"");
-        for (TypeElement annotation : annotations) {
-            for(Element element : roundEnv.getRootElements()){
-                if (element.getKind().isClass()) {
-                    TypeMirror oModelType = oModelTypeElement.asType();
-                    TypeMirror elementType = element.asType();
-                    if (typeUtils.isSubtype(elementType, oModelType)) {
-                        String className = ((TypeElement) element).getQualifiedName().toString();
-                        modelsClassesNames.add(className);
-                        System.out.println(""Processing model: "" + className);
-                    }
-                }
-            }
-        }
-        generateClass(modelsClassesNames);
-        return true;
-    }
-    private void generateClass(List<String> classesNames) {
-        try {
-            String baseClassName = ""ModelRegistry"";
-            String relativeClassName = ""com.example.annotationprocessor.""+baseClassName;
-            JavaFileObject jfo = mProcessingEnvironment.getFiler().createSourceFile(relativeClassName);
-            try (Writer writer = jfo.openWriter()) {
-                writer.write(""package com.example.annotationprocessor;\n\n"");
-                writer.write(""public class "" + baseClassName + "" {\n\n"");
-                writer.write(""    public static String[] getClassesNames() {\n"");
-                writer.write(""        return new String[] {\n"");
-                for(int i = 0; i < classesNames.size(); i++){
-                    String className = classesNames.get(i);
-                    writer.write(""            \"""");
-                    writer.write(className);
-                    if(i < classesNames.size() -1) {
-                        writer.write(""\"","");
-                    }else{
-                        writer.write(""\"""");
-                    }
-                }
-                writer.write(""                             };\n"");
-                writer.write(""    }\n"");
-                writer.write(""}\n"");
-            }
-        } catch (Exception e) {
-            mProcessingEnvironment.getMessager().printMessage(Diagnostic.Kind.NOTE, ""Unable to write ******"" + e.getMessage());
-            e.printStackTrace();
-        }
-    }
-}
-
-
-
-and then use that generated class as follows
- import com.example.annotationprocessor.ModelRegistry;
-ModelRegistry.getClassesNames()
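-One detail the snippet above doesn't show: for javac to pick the processor up it normally has to be registered, either with Google's AutoService or with a plain service file at src/main/resources/META-INF/services/javax.annotation.processing.Processor containing one line (the package below is an assumption, since the example class declares none):
-com.example.annotationprocessor.Processor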
-",Dex
-"Let' say I have an app that is more or less like an editor and executor of what was created with the editor function.
-I thought about two ways: I can either develop an algorithm and a structure to perform my job or I can literally write and compile at runtime .java file.
-To get a better idea of what I am saying I will create a simplified example.
-First case scenario, I will create multiple instances of SpecialMove for each creation by the user:
-public class SpecialMove{
-private String name;
-private Type type;
-private int damage;
-...
-}
-
-In the second case scenario, classes that extend SpecialMove would be written and compiled at runtime:
-    public class SpecialMove{
-        protected Type type;
-        protected int damage;
-        ...
-        }
-
-//this class will be written into a separate file and compiled at runtime
-    public class FireBall extends SpecialMove{
-        protected Type type;
-        protected int damage;
-        ...
-        }
-
-In the past I've chosen the second scenario, but that was a desktop application. Since I am not very skilled in Android, I would like to ask whether generating Dalvik bytecode might be trickier and/or less efficient, and in general what the pros and cons are.
-","1. Runtime code generation is banned by the Play Store.  You can download scripts in a sandbox like Javascript, but you can't download or compile native code at  runtime.  If caught doing it your app will be removed.  This is a security policy by Google to reduce the risk of malware and trojan horses.
-In fact in general compiling code at runtime, on any platform, is almost certainly a wrong choice.  Sounds like a debugging nightmare.  You might sometimes download extentions with a plugin system, but you wouldn't be compiling it live.
-",Dex
-"I have implemented external secrets to fetch values from azure key vault in kubernetes cluster. It worked fine for two environments but in third environment it is not working. It created secret store and validates it but the external secret doesn't provide any status and creates no secrets. Here is the screenshot os external secret resource in kubernetes:
-
-I have tested the configuration and found that everything is configured correctly, but I am still unable to create the secrets using External Secrets.
-","1. After syncing your changes into the cluster, try deleting the pod and wait for kubernetes to recreate the pod and check again if the secret are being exposed.
-I faced the same problem while trying to use argocd to sync the changes but what I realised is that although most of the changes are deployed into the cluster, external secrets won't and manually restarting the pod fetches the updated secrets.
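-For example (assuming the consuming workload is a Deployment named my-app in the default namespace; adjust the names to your setup):
-kubectl delete pod -l app=my-app
-# or, equivalently, restart the whole Deployment:
-kubectl rollout restart deployment/my-app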
-",external-secrets
-"I'm getting into Kubernetes security and I'm looking at various ways to encrypt and use Secrets values in pods but I think I'm not grasping some key concepts.
-As I understood it, from the cluster security standpoint encrypting secrets should avoid that in case of cluster attack the attacker wouldn't be able to get api keys, access tokens, usernames and passwords, just by simply base64 decode the secrets values.
-I'm comparing the use of secrets managers like Vault and Sealed Secrets against enabling Encryption at rest.
-I see that with implementing either Vault + Vault Secrets Operator,or Vault + External Secrets, or Sealed Secrets, a normal Secret is generated from encrypted secrets and laying around in the cluster.
-From Vault Secrets Operator GitHub
-
-The Operator writes the source Vault secret data directly to the destination Kubernetes Secret, ensuring that any changes made to the source are replicated to the destination over its lifetime.In this way, an application only needs to have access to the destination secret in order to make use of the secret data contained within.
-
-From Seal Secret GitHub I see that their Sealed Secret custom resource will get converted to a normal Kubernetes Secret..
-
-This normal kubernetes secret will appear in the cluster after a few seconds you can use it as you would use any secret that you would have created directly (e.g. reference it from a Pod).
-
-Encryption at rest, on the other hand, will actually encrypt the secrets upon creation, though you also need to configure RBAC rules, or use envelope encryption with a third-party KMS provider like Azure Key Vault, for example.
-The three methods are almost equally complicated to implement (Vault, for example, needs a lot of manual configuration for unsealing and creating encrypted secrets), but only encryption at rest will actually secure sensitive data against cluster attacks, as Secrets are encrypted and only decrypted when used from pods.
-Given that above considerations, what are Vault and Sealed Secret good for, and what's even the point of going through all that trouble for setting them up if then Secrets end up laying around unencrypted?
-Update - I love this security thing, I really do -
-As suggested by David, passing sensitive data to the containerised Node.js app environment through the Pod's container environment variables just makes them easy to get, so they're better off in a secure store like Azure Key Vault and retrieved directly in the app via the SDK. Fair enough.. just update the app to use that.
-But just there it all starts again.
-You now have to secure the KEYVAULT_URI, AZURE_TENANT_ID, AZURE_CLIENT_ID, AZURE_CLIENT_SECRET.
-This is the exact situation I thought of when I first started programming and started using .env file not to hard code api keys and other sensitive data and avoid pushing it to the repo. But then you have to pass those values to the containerised app.
-No worries,  there you have a Secret to inject them.. but no..that way with a simple exec command on the pod an attacker could get them all,
-So you gotta store them in a secure, remote, encrypted and re-encrypted NASA-level service on Mars, but hey..
-You need access to them, so there you have a bunch of keys, secrets and IDs to pass to an SDK in your app to do that..
-This way it takes just an extra step to get all the securely stored sensitive data. Instead of cracking your app and getting them, they need to crack your app and get the keys to go and get them.. oh, I almost forgot. Thanks for your Azure credentials too.. very kind of you.
-This whole security thing is just a sort of treasure hunt where one clue takes you to the next, all the way to the treasure..
-Is there an end to all this security, encryption, key rotation thing? I mean, one way or another sensitive data seems to get exposed somewhere.. I might just leave the keys in the car..
-Seriously.. how do you manage all this?
-Sorry for the rant.. hope I made you at least smile
-","1. Getting the Secret into the cluster at all can be a challenge.  You can use Kubernetes RBAC to limit the Secret's visibility, which may help some of the security concerns.
-I'd suggest there are three basic levels here:
-
-The secret values don't exist in Kubernetes at all, but the application directly integrates with Vault or a similar service.  (Hardest to set up and run, especially in non-production.)
-
-Kubernetes Secrets exist but they're populated by an operator or integration.
-
-Kubernetes Secrets are created at deploy time via kubectl apply or Helm.
-
-
-If the Secret exists at all, as you note, it's fairly straightforward to get its value.  kubectl get secret will have it in all but plain text, kubectl exec can find it in the running Pod, if you can kubectl run a new Pod or create any of the workload-type resources then you can mount the Secret into a new Pod and print out its value.  That's an argument for having the application tie directly into the secret store, but that's a more complex setup.
-Let's say you're not hard-wired into Vault and need to provide a database password as an environment variable.  Where does it actually live; when you deploy the application, how does it get set?  One option is to put the password in your CI system's credentials store, but this is a very ""leaky"" option – the CI script and every step along the chain can see the secret, and you need an administrator of the CI system to create or modify the value.
-This is where the various secret-manager tools come in.  Sealed Secrets lets you commit an encrypted secret file to source control, so you don't need to coördinate with the CI system to create or update a credential.  External Secrets creates a Secret from a system like Vault or AWS Secrets Manager.  Vault has a fairly rich access-control system, so this makes it possible for a user to create a secret, and list existing secrets, but not directly retrieve the secret value (at least, not outside of Kubernetes).
-So, if you install the secret through the CI system:
-credential (only CI admins can update)
-    v
-CI system --> helm install --> Secret object --> container --> application
-    ^              ^                 ^               ^              ^
-    credential is visible everywhere
-
-If you let an operator like Sealed Secrets or External Secrets create the Secret object:
-                secret store --> operator
-                                     v
-CI system --> helm install --> Secret object --> container --> application
-                                     ^               ^              ^
-                                  Kubernetes operations can get credential
-
-And if you change the application code to directly wire into the secret store (Vault, AWS Secrets Manager, ...):
-                                                              secret store
-                                                                    ^
-CI system --> helm install --> Secret object --> container --> application
-                                                                    ^
-                           credential only visible inside application code
-                                  requires wiring specific to secret store
-
-",external-secrets
-"I have a yaml file which is similar to the following (FYI: ssm_secrets can be an empty array):
-rabbitmq:
-  repo_name: bitnami
-  namespace: rabbitmq
-  target_revision: 11.1.1
-  path: rabbitmq
-  values_file: charts/rabbitmq/values.yaml
-  ssm_secrets: []
-app_name_1:
-  repo_name: repo_name_1
-  namespace: namespace_1
-  target_revision: target_revision_1
-  path: charts/path
-  values_file: values.yaml
-  ssm_secrets:
-    - name: name-dev-1
-      key: .env
-      ssm_path: ssm_path/dev
-name-backend:
-  repo_name: repo_name_2
-  namespace: namespace_2
-  target_revision: target_revision_2
-  path: charts/name-backend
-  values_file: values.yaml
-  ssm_secrets:
-    - name: name-backend-app-dev
-      ssm_path: name-backend/app/dev
-      key: app.ini
-    - name: name-backend-abi-dev
-      ssm_path: name-backend/abi/dev
-      key: contractTokenABI.json
-    - name: name-backend-widget-dev
-      ssm_path: name-backend/widget/dev
-      key: name.ini
-    - name: name-abi-dev
-      ssm_path: name-abi/dev
-      key: name_1.json
-    - name: name-website-dev
-      ssm_path: name/website/dev
-      key: website.ini
-    - name: name-name-dev
-      ssm_path: name/name/dev
-      key: contract.ini
-    - name: name-key-dev
-      ssm_path: name-key/dev
-      key: name.pub
-
-And using External Secrets and EKS Blueprints, I am trying to generate the yaml file necessary to create the secrets
-resource ""kubectl_manifest"" ""secret"" {
-  for_each   = toset(flatten([for service in var.secrets : service.ssm_secrets[*].ssm_path]))
-  yaml_body  = <<YAML
-apiVersion: external-secrets.io/v1beta1
-kind: ExternalSecret
-metadata:
-  name: ${replace(each.value, ""/"", ""-"")}
-  namespace: ${split(""/"", each.value)[0]}
-spec:
-  refreshInterval: 30m
-  secretStoreRef:
-    name: ${local.cluster_secretstore_name}
-    kind: ClusterSecretStore
-  data:
-  - secretKey: .env
-    remoteRef:
-       key: ${each.value}
-YAML
-  depends_on = [kubectl_manifest.cluster_secretstore, kubernetes_namespace_v1.namespaces]
-}
-
-The above works fine, but I also need to use the key value from the yaml as the secretKey: <key_value from yaml>.
-If I try with for_each   = toset(flatten([for service in var.secrets : service.ssm_secrets[*]]))
-resource ""kubectl_manifest"" ""secret"" {
-  for_each   = toset(flatten([for service in var.secrets : service.ssm_secrets[*]]))
-  yaml_body  = <<YAML
-apiVersion: external-secrets.io/v1beta1
-kind: ExternalSecret
-metadata:
-  name: ${replace(each.value[""ssm_path""], ""/"", ""-"")}
-  namespace: ${split(""/"", each.value[""ssm_path""])[0]}
-spec:
-  refreshInterval: 30m
-  secretStoreRef:
-    name: ${local.cluster_secretstore_name}
-    kind: ClusterSecretStore
-  data:
-  - secretKey: .env
-    remoteRef:
-       key: ${each.value[""ssm_path""]}
-YAML
-  depends_on = [kubectl_manifest.cluster_secretstore, kubernetes_namespace_v1.namespaces]
-}
-
-It just gives me the following error:
-
-The given ""for_each"" argument value is unsuitable: ""for_each"" supports
-maps and sets of strings, but you have provided a set containing type
-object.
-
-I have tried converting the variable into a map, used lookup, but it doesn't work.
-Any help would be much appreciated.
-Update 1:
-As per @MattSchuchard's suggestion, changing the for_each to
-for_each   = toset(flatten([for service in var.secrets : service.ssm_secrets]))
-Gave the following error:
-Error: Invalid for_each set argument
-│ 
-│   on ../../modules/02-plugins/external-secrets.tf line 58, in resource ""kubectl_manifest"" ""secret"":
-│   58:   for_each   = toset(flatten([for service in var.secrets : service.ssm_secrets]))
-│     ├────────────────
-│     │ var.secrets is object with 14 attributes
-│ 
-│ The given ""for_each"" argument value is unsuitable: ""for_each"" supports maps and sets of strings, but you have provided a set containing type object.
-
-Update 2:
-@mariux gave the perfect solution, but here is what I came up with. It's not as clean, but it definitely works (PS: I myself am going to use Mariux's solution):
-locals {
-  my_list = tolist(flatten([for service in var.secrets : service.ssm_secrets[*]]))
-}
-
-
-resource ""kubectl_manifest"" ""secret"" {
-
-  count      = length(local.my_list)
-  yaml_body  = <<YAML
-apiVersion: external-secrets.io/v1beta1
-kind: ExternalSecret
-metadata:
-  name: ${replace(local.my_list[count.index][""ssm_path""], ""/"", ""-"")}
-  namespace: ${split(""/"", local.my_list[count.index][""ssm_path""])[0]}
-spec:
-  refreshInterval: 30m
-  secretStoreRef:
-    name: ${local.cluster_secretstore_name}
-    kind: ClusterSecretStore
-  data:
-  - secretKey: ${local.my_list[count.index][""key""]}
-    remoteRef:
-       key: ${local.my_list[count.index][""ssm_path""]}
-YAML
-  depends_on = [kubectl_manifest.cluster_secretstore, kubernetes_namespace_v1.namespaces]
-}
-
-","1. Assumptions
-Based on what you shared, I make the following assumptions:
-
-the service is not actually important for you as you want to create external secrets by ssm_secrets.*.name using the given key and ssm_path attributes.
-each name is globally unique for all services and never reused.
-
-terraform hacks
-Based on the assumptions you can create an array of ALL ssm_secrets using
-locals {
-  ssm_secrets_all = flatten(values(var.secrets)[*].ssm_secrets)
-}
-
-and convert it to a map that can be used in for_each by keying the values by .name:
-locals {
-  ssm_secrets_map = { for v in local.ssm_secrets_all : v.name => v }
-}
-
-Full (working) example
-The example below works for me and makes some assumptions about where the variables should be used.
-
-Using yamldecode to decode your original input into local.input
-Using yamlencode to make reading the manifest easier and removing some string interpolcations. This also ensures that the indent is correct as we convert HCL to yaml.
-
-A terraform init && terraform plan will plan to create the following resources:
- kubectl_manifest.secret[""name-abi-dev""] will be created
- kubectl_manifest.secret[""name-backend-abi-dev""] will be created
- kubectl_manifest.secret[""name-backend-app-dev""] will be created
- kubectl_manifest.secret[""name-backend-widget-dev""] will be created
- kubectl_manifest.secret[""name-dev-1""] will be created
- kubectl_manifest.secret[""name-key-dev""] will be created
- kubectl_manifest.secret[""name-name-dev""] will be created
- kubectl_manifest.secret[""name-website-dev""] will be created
-
-locals {
-  # input = var.secrets
-  ssm_secrets_all = flatten(values(local.input)[*].ssm_secrets)
-  ssm_secrets_map = { for v in local.ssm_secrets_all : v.name => v }
-
-  cluster_secretstore_name = ""not provided secretstore name""
-}
-
-resource ""kubectl_manifest"" ""secret"" {
-  for_each = local.ssm_secrets_map
-
-  yaml_body = yamlencode({
-    apiVersion = ""external-secrets.io/v1beta1""
-    kind       = ""ExternalSecret""
-    metadata = {
-      name      = replace(each.value.ssm_path, ""/"", ""-"")
-      namespace = split(""/"", each.value.ssm_path)[0]
-    }
-    spec = {
-      refreshInterval = ""30m""
-      secretStoreRef = {
-        name = local.cluster_secretstore_name
-        kind = ""ClusterSecretStore""
-      }
-      data = [
-        {
-          secretKey = "".env""
-          remoteRef = {
-            key = each.value.key
-          }
-        }
-      ]
-    }
-  })
-
-  # not included dependencies
-  # depends_on = [kubectl_manifest.cluster_secretstore, kubernetes_namespace_v1.namespaces]
-}
-
-locals {
-  input = yamldecode(<<-EOF
-    rabbitmq:
-      repo_name: bitnami
-      namespace: rabbitmq
-      target_revision: 11.1.1
-      path: rabbitmq
-      values_file: charts/rabbitmq/values.yaml
-      ssm_secrets: []
-    app_name_1:
-      repo_name: repo_name_1
-      namespace: namespace_1
-      target_revision: target_revision_1
-      path: charts/path
-      values_file: values.yaml
-      ssm_secrets:
-        - name: name-dev-1
-          key: .env
-          ssm_path: ssm_path/dev
-    name-backend:
-      repo_name: repo_name_2
-      namespace: namespace_2
-      target_revision: target_revision_2
-      path: charts/name-backend
-      values_file: values.yaml
-      ssm_secrets:
-        - name: name-backend-app-dev
-          ssm_path: name-backend/app/dev
-          key: app.ini
-        - name: name-backend-abi-dev
-          ssm_path: name-backend/abi/dev
-          key: contractTokenABI.json
-        - name: name-backend-widget-dev
-          ssm_path: name-backend/widget/dev
-          key: name.ini
-        - name: name-abi-dev
-          ssm_path: name-abi/dev
-          key: name_1.json
-        - name: name-website-dev
-          ssm_path: name/website/dev
-          key: website.ini
-        - name: name-name-dev
-          ssm_path: name/name/dev
-          key: contract.ini
-        - name: name-key-dev
-          ssm_path: name-key/dev
-          key: name.pub
-    EOF
-  )
-}
-
-terraform {
-  required_version = ""~> 1.0""
-
-  required_providers {
-    kubectl = {
-      source  = ""gavinbunney/kubectl""
-      version = ""~> 1.7""
-    }
-  }
-}
-
-hint: you could also try to use the kubernetes_manifest resource instead of kubectl_manifest
-p.s.: We created Terramate to make complex creation of Terraform code easier. But this seems perfectly fine for pure Terraform.
-
-2. If you modify the for_each meta-parameter to:
-for_each = toset(flatten([for service in var.secrets : service.ssm_secrets]))
-
-then the lambda/closure scope iterator variable within the kubectl_manifest.secret resource, with the default name each, will iterate over the objects representing the desired values, analogous to the list of hashes in the YAML (list of maps within Kubernetes), and one can access ssm_path with each.value[""ssm_path""], and key with each.value[""key""].
-",external-secrets
-"I am facing minor issue with getting secrets from external vaults to aws eks container.
-I am using sidecar container for inject secrets in to pods.
-I have created secrets at below path ,
-vault kv put secrets/mydemo-eks/config username='admin' password='secret'
-
-My pod YAML is as below:
-apiVersion: v1
-kind: Pod
-metadata:
-  name: mydemo
-  labels:
-    app: mydemo
-  annotations:
-    vault.hashicorp.com/agent-inject: 'true'
-    vault.hashicorp.com/agent-inject-status: 'update'
-    vault.hashicorp.com/auth-path: 'auth/mydemo-eks'
-    vault.hashicorp.com/namespace: 'default'
-    vault.hashicorp.com/role: 'mydemo-eks-role'
-    vault.hashicorp.com/agent-inject-secret-credentials.txt: 'secrets/data/mydemo-eks/config' 
-spec:
-  serviceAccountName: mydemo-sa
-  containers:
-    - name: myapp
-      image: nginx:latest
-      ports:       
-      - containerPort: 80
-
-
-When I check the real-time logs, I get the following:
-
-My Hashicorp Vault policy is as below,
-vault policy write mydemo-eks-policy - <<EOF
-path ""secrets/data/mydemo-eks/config"" {
-  capabilities = [""read""]
-}
-EOF
-
-The secrets are actually already there at the mentioned path:
-
-Any ideas?
-Is there anything wrong I have done?
-Has anyone worked on this scenario?
-Thanks
-","1. I have modified the policy as below,
-vault policy write mydemo-eks-policy - <<EOF
-path ""secrets/mydemo-eks/config"" {
-  capabilities = [""read""]
-}
-EOF
-
-Earlier I used:
-vault policy write mydemo-eks-policy - <<EOF
-path ""secrets/data/mydemo-eks/config"" {
-  capabilities = [""read""]
-}
-EOF
-
-
-",external-secrets
-"How do we deploy falco in adifferent namespace as it is deployed in the default namespace? How do we specify on which namespace to install falco charts?
-","1. You may use -n flag to specify the custom namespace name and --create-namespace flag to create the namespace if its not already present.
-helm install falco falcosecurity/falco -n falco --create-namespace
-
-
-2. There are some options that allow deploying Falco in a non-default Kubernetes namespace (Kubernetes manifests or Helm). Using Helm with the package from the official documentation seems the fastest way.
-# adding repository
-helm repo add falcosecurity https://falcosecurity.github.io/charts
-helm repo update
-# install falco
-helm install falco falcosecurity/falco --namespace falco --create-namespace
-
-The example above will deploy falco in falco namespace.
-To verify the Falco deployment status, get the pod status & logs:
-kubectl get pods -n falco -o wide
-kubectl logs <falco-pod-name> -n falco
-
-If you plan to deploy Falco on Kubernetes (EKS), follow the How to deploy Falco on Kubernetes (EKS) guide.
-If you want to deploy Falco on Kubernetes and monitor EKS audit logs, follow the Monitoring EKS audit logs with Falco security document.
-",Falco
-"I am using Azure AKS cluster. Have deployed the falco helm chart with the k8s-audit plugin. But I am not getting any events for k8s-audit in the falco log.Following is the falco configuration.
-
-
-falco:
-    falcoctl:
-        artifact:
-            install:
-            # -- Enable the init container. We do not recommend installing plugins for security reasons since they are executable objects.
-            # We install only ""rulesfiles"".
-                enabled: true
-            follow:
-            # -- Enable the sidecar container. We do not support it yet for plugins. It is used only for rules feed such as k8saudit-rules rules.
-                enabled: true
-        config:
-            artifact:
-                install:
-                    # -- Do not resolve the depenencies for artifacts. By default is true, but for our use case we disable it.
-                    resolveDeps: false
-                    # -- List of artifacts to be installed by the falcoctl init container.
-                    # We do not recommend installing (or following) plugins for security reasons since they are executable objects.
-                    refs: [falco-rules:0, k8saudit-rules:0.5]
-                follow:
-                    # -- List of artifacts to be followed by the falcoctl sidecar container.
-                    # We do not recommend installing (or following) plugins for security reasons since they are executable objects.
-                    refs: [falco-rules:0, k8saudit-rules:0.5]
-    services:
-    - name: k8saudit-webhook
-      type: NodePort
-      ports:
-      - port: 9765 # See plugin open_params
-        nodePort: 30007
-        protocol: TCP
-    falco:
-        rules_file:
-            - /etc/falco/falco_rules.yaml
-            - /etc/falco/k8s_audit_rules.yaml
-        plugins:
-        - name: k8saudit
-          library_path: libk8saudit.so
-          init_config:
-            """"
-            # maxEventBytes: 1048576
-            # sslCertificate: /etc/falco/falco.pem
-          open_params: ""http://:9765/k8s-audit""
-        - name: json
-          library_path: libjson.so
-          init_config: """"
-        load_plugins: [k8saudit, json]
-
-If we have to use a webhook config file, how do we use it in cloud Kubernetes deployments?
-","1. Sadly, the k8saudit plugin doesn't work with managed K8s clusters like AKS, EKS or GKE. The cloud providers are catching the audit logs for their own usage (ie monitoring system). This is why we developed a specific to EKS plugin, and someone in the community is working on the GKE one. There was an attempt by a member to write an AKS plugin, but he has been laid off recently and can't work on it anymore.
-",Falco
-"I need to create a consolidated program with Libsinsp and gRPC.
-How the program works?
-
-Collects the syscall data with Libsinsp
-Transfer the data with gRPC
-
-I have created both programs, and would like to consolidate them into a single program.
-I have a problem combining the two CMakeLists.txt files into a consolidated one, and it produces a lot of error messages while compiling.
-Would anyone be able to give me advice?
-CMakeLists.txt file for collecting syscall data
-include_directories(""../../../common"")
-include_directories(""../../"")
-
-add_executable(sinsp-example
-    util.cpp
-    test.cpp
-)
-
-target_link_libraries(sinsp-example
-    sinsp
-)
-
-if (APPLE AND NOT MINIMAL_BUILD)
-    # Needed when linking libcurl
-    set(CMAKE_CXX_FLAGS ""${CMAKE_CXX_FLAGS} -framework Foundation -framework SystemConfiguration"")
-endif()
-
-CMakeLists.txt file for collecting gRPC
-cmake_minimum_required(VERSION 3.5.1)
-
-project(HelloWorld C CXX)
-
-include(common.cmake)
-
-# Proto file
-get_filename_component(hw_proto ""helloworld.proto"" ABSOLUTE)
-get_filename_component(hw_proto_path ""${hw_proto}"" PATH)
-
-# Generated sources
-set(hw_proto_srcs ""${CMAKE_CURRENT_BINARY_DIR}/helloworld.pb.cc"")
-set(hw_proto_hdrs ""${CMAKE_CURRENT_BINARY_DIR}/helloworld.pb.h"")
-set(hw_grpc_srcs ""${CMAKE_CURRENT_BINARY_DIR}/helloworld.grpc.pb.cc"")
-set(hw_grpc_hdrs ""${CMAKE_CURRENT_BINARY_DIR}/helloworld.grpc.pb.h"")
-add_custom_command(
-      OUTPUT ""${hw_proto_srcs}"" ""${hw_proto_hdrs}"" ""${hw_grpc_srcs}"" ""${hw_grpc_hdrs}""
-      COMMAND ${_PROTOBUF_PROTOC}
-      ARGS --grpc_out ""${CMAKE_CURRENT_BINARY_DIR}""
-        --cpp_out ""${CMAKE_CURRENT_BINARY_DIR}""
-        -I ""${hw_proto_path}""
-        --plugin=protoc-gen-grpc=""${_GRPC_CPP_PLUGIN_EXECUTABLE}""
-        ""${hw_proto}""
-      DEPENDS ""${hw_proto}"")
-
-# Include generated *.pb.h files
-include_directories(""${CMAKE_CURRENT_BINARY_DIR}"")
-
-# hw_grpc_proto
-add_library(hw_grpc_proto
-  ${hw_grpc_srcs}
-  ${hw_grpc_hdrs}
-  ${hw_proto_srcs}
-  ${hw_proto_hdrs})
-target_link_libraries(hw_grpc_proto
-  ${_REFLECTION}
-  ${_GRPC_GRPCPP}
-  ${_PROTOBUF_LIBPROTOBUF})
-
-# Targets greeter_[async_](client|server)
-foreach(_target  
-  greeter_async_client2)
-  add_executable(${_target} ""${_target}.cc"")
-  target_link_libraries(${_target}
-    hw_grpc_proto
-    ${_REFLECTION}
-    ${_GRPC_GRPCPP}
-    ${_PROTOBUF_LIBPROTOBUF})
-endforeach()
-
-
-Combined CMakefileLists.txt file
-cmake_minimum_required(VERSION 3.5.1)
-
-project(HelloWorld C CXX)
-
-include_directories(""../../../common"")
-include_directories(""../../"")
-
-include(common.cmake)
-
-# Proto file
-get_filename_component(hw_proto ""helloworld.proto"" ABSOLUTE)
-get_filename_component(hw_proto_path ""${hw_proto}"" PATH)
-
-# Generated sources
-set(hw_proto_srcs ""${CMAKE_CURRENT_BINARY_DIR}/helloworld.pb.cc"")
-set(hw_proto_hdrs ""${CMAKE_CURRENT_BINARY_DIR}/helloworld.pb.h"")
-set(hw_grpc_srcs ""${CMAKE_CURRENT_BINARY_DIR}/helloworld.grpc.pb.cc"")
-set(hw_grpc_hdrs ""${CMAKE_CURRENT_BINARY_DIR}/helloworld.grpc.pb.h"")
-add_custom_command(
-      OUTPUT ""${hw_proto_srcs}"" ""${hw_proto_hdrs}"" ""${hw_grpc_srcs}"" ""${hw_grpc_hdrs}""
-      COMMAND ${_PROTOBUF_PROTOC}
-      ARGS --grpc_out ""${CMAKE_CURRENT_BINARY_DIR}""
-        --cpp_out ""${CMAKE_CURRENT_BINARY_DIR}""
-        -I ""${hw_proto_path}""
-        --plugin=protoc-gen-grpc=""${_GRPC_CPP_PLUGIN_EXECUTABLE}""
-        ""${hw_proto}""
-      DEPENDS ""${hw_proto}"")
-
-# Include generated *.pb.h files
-include_directories(""${CMAKE_CURRENT_BINARY_DIR}"")
-
-# hw_grpc_proto
-add_library(hw_grpc_proto
-  ${hw_grpc_srcs}
-  ${hw_grpc_hdrs}
-  ${hw_proto_srcs}
-  ${hw_proto_hdrs})
-target_link_libraries(hw_grpc_proto
-  ${_REFLECTION}
-  ${_GRPC_GRPCPP}
-  ${_PROTOBUF_LIBPROTOBUF})
-
-add_executable(sinsp-example
-  util.cpp
-  test.cpp
-)
-
-target_link_libraries(sinsp-example
-  sinsp
-  hw_grpc_proto
-  ${_REFLECTION}
-  ${_GRPC_GRPCPP}
-  ${_PROTOBUF_LIBPROTOBUF})
-
-I tried compiling using the combined CMakeLists.txt, but it didn't work.
-Snippet of the error:
-/home/jeremy/grpc/third_party/boringssl-with-bazel/linux-x86_64/crypto/fipsmodule/x86_64-mont5.S:2969: multiple definition of `bn_sqrx8x_internal'; ../../openssl-prefix/src/openssl/target/lib/libcrypto.a(x86_64-mont5.o):(.text+0x2420): first defined here
-/usr/bin/ld: /home/jeremy/.local/lib/libcrypto.a(x86_64-mont5.S.o): in function `bn_scatter5':
-/home/jeremy/grpc/third_party/boringssl-with-bazel/linux-x86_64/crypto/fipsmodule/x86_64-mont5.S:3601: multiple definition of `bn_scatter5'; ../../openssl-prefix/src/openssl/target/lib/libcrypto.a(x86_64-mont5.o):(.text+0x2e40): first defined here
-/usr/bin/ld: /home/jeremy/.local/lib/libcrypto.a(x86_64-mont5.S.o): in function `bn_gather5':
-/home/jeremy/grpc/third_party/boringssl-with-bazel/linux-x86_64/crypto/fipsmodule/x86_64-mont5.S:3610: multiple definition of `bn_gather5'; ../../openssl-prefix/src/openssl/target/lib/libcrypto.a(x86_64-mont5.o):(.text+0x2e80): first defined here
-/usr/bin/ld: /home/jeremy/.local/lib/libcrypto.a(engine.c.o): in function `ENGINE_new':
-engine.c:(.text+0x37): multiple definition of `ENGINE_new'; ../../openssl-prefix/src/openssl/target/lib/libcrypto.a(eng_lib.o):eng_lib.c:(.text+0x60): first defined here
-/usr/bin/ld: /home/jeremy/.local/lib/libcrypto.a(engine.c.o): in function `ENGINE_free':
-engine.c:(.text+0x7b): multiple definition of `ENGINE_free'; ../../openssl-prefix/src/openssl/target/lib/libcrypto.a(eng_lib.o):eng_lib.c:(.text+0x250): first defined here
-
-
-","1. From the error messages, I guess this is because the application tries to pull both BoringSSL and OpenSSL, which is not allowed as they define the same symbols. You may want to try to build it with cmake option, -DgRPC_SSL_PROVIDER=package making gRPC to use OpenSSL instead of BoringSSL, which might be helpful.
-",Falco
-"I am facing a problem with Python libraries installation in Azure Synapse Notebook. I have tried installing two libraries, holidays and fugue using %pip... and !pip... I have tried even with .WHL files, but nothing is working. Cluster have not any restriction. The error is:
-WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f29776c5240>, 'Connection to pypi.org timed out. (connect timeout=15)')': /simple/fugue/
-WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f29776c4ca0>, 'Connection to pypi.org timed out. (connect timeout=15)')': /simple/fugue/
-WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f29776c47c0>, 'Connection to pypi.org timed out. (connect timeout=15)')': /simple/fugue/
-WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f29776c7250>, 'Connection to pypi.org timed out. (connect timeout=15)')': /simple/fugue/
-WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f29776c4130>, 'Connection to pypi.org timed out. (connect timeout=15)')': /simple/fugue/
-ERROR: Could not find a version that satisfies the requirement fugue (from versions: none)
-ERROR: No matching distribution found for fugue
-Note: you may need to restart the kernel to use updated packages.
-Warning: PySpark kernel has been restarted to use updated packages.
-
-Any thoughts about what is wrong?
-Thanks in advance.
-I have tried %pip install fugue, !pip install fugue, and .whl files.
-","1. You can use the commands below:
-pip install holidays
-
-pip install fugue
-
-Results:
-
-References:
-
-fugue 0.9.0
-holidays 0.48
-
-Also, learn more about managing libraries for Apache Spark pools in Azure Synapse Analytics.
-Here is the Stack Overflow link to adding a custom Python library in Azure Synapse.
-",Fugue
-"I am testing the Fugue library to compare its benefits compared to purely PySpark, for which I would like to be able to test different operations strictly with Fugue.
-Although I could already use Fugue to perform transformations applying Pandas functions, I have not been able to load a databricks table directly with Fugue from a databricks notebook. How could I do it?
-Clarifications: I can load the table with PySpark without any problem. Also I have tried following the documentation (https://fugue-tutorials.readthedocs.io/tutorials/beginner/io.html) and tried using:
-import fugue.api as fa
-df = fa.load(f'{db_name_model_data}.{table_name_model_data}', engine=spark)
-
-Output:
-NotImplementedError: .my_table_name is not supported
-
-I also tried:
-from fugue import FugueWorkflow, Schema, FugueSQLWorkflow
-# Define a Fugue workflow
-with FugueWorkflow() as dag:
-    # Load a table from a CSV file (example source, replace with your data source)
-    df = dag.load(f'{db_name_model_data}.{table_name_model_data}')
-
-    # Show the loaded DataFrame
-    df.show()
-dag.run()
-
-I'm expecting to have some type of DataFrame loaded into ""df"" by directly using Fugue.
-","1. I have tried the following steps:
-Step 1: Installing Fugue
-
-Step 2: Databricks-connect
-In this step, we need to uninstall PySpark to avoid conflicts with Databricks Connect.
-pip install databricks-connect
-
-Note: to remove PySpark, run pip uninstall pyspark.
-After configuring Databricks Connect to connect to the cluster:
-import pandas as pd
-from fugue import transform
-from fugue_spark import SparkExecutionEngine
-data = pd.DataFrame({'numbers':[1,2,3,4], 'words':['hello','world','apple','banana']})
-#schema: *, reversed:str
-def reverse_word(df: pd.DataFrame) -> pd.DataFrame:
-    df['reversed'] = df['words'].apply(lambda x: x[::-1])
-    return df
-spark_df = transform(data, reverse_word, engine=SparkExecutionEngine())
-spark_df.show()
-
-Step 3: Using Fugue-sql on the Cluster
-from fugue_notebook import setup
-setup()
-%%fsql spark
-SELECT *
-  FROM data
-TRANSFORM USING reverse_word
- PRINT
-
-Results:
-
-Additional configuration:
-from pyspark.sql import SparkSession
-from fugue_spark import SparkExecutionEngine
-spark_session = (SparkSession
-                 .builder
-                 .config(""spark.executor.cores"",4)
-                 .config(""fugue.dummy"",""dummy"")
-                 .getOrCreate())
-engine = SparkExecutionEngine(spark_session, {""additional_conf"":""abc""})
-
-Reference: Using Fugue on Databricks
-",Fugue
-"Say I have this file s3://some/path/some_partiitoned_data.parquet.
-I would like to sample a given count of rows and display them nicely, possibly in a jupyter notebook.
-some_partiitoned_data.parquet could be very large, and I would like to do this without loading the data into memory, even without downloading the parquet files to disk.
-","1. Spark doesn't let you sample a given number of rows, you can only sample a given fraction, but with Fugue 0.8.0 this is a solution to get n rows
-import fugue.api as fa
-
-df = fa.load(""parquetfile"", engine=spark)
-fa.show(fa.sample(df, frac=0.0001), n=10)
-
-You just need to make sure that with that frac there are still more than 10 rows.
-You can use fa.head to get the dataframe instead of printing it.
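-For example, a minimal sketch assuming the same df as above:
-small = fa.head(fa.sample(df, frac=0.0001), 10)  # first 10 rows of the sample as a dataframe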
-See the API reference at https://fugue.readthedocs.io/en/latest/top_api.html
-",Fugue
-"Say I have these 2 parquet files
-import pandas as pd
-
-pd.DataFrame([[0]], columns=[""a""]).to_parquet(""/tmp/1.parquet"")
-pd.DataFrame([[0],[2]], columns=[""a""]).to_parquet(""/tmp/2.parquet"")
-
-I would like to have a new parquet file that is a row wise union of the two.
-The resulting DataFrame should look like this
-   a
-0  0
-1  0
-2  2
-
-I also would like to repartition that new file with a pre-determined number of partitions.
-","1. You can certainly solve this problem in either Pandas, Spark or other computing frameworks, but each of them will require different implementations. Using Fugue here, you can have one implementation for different computing backends, more importantly, the logic is unit testable without using any heavy backend.
-from fugue import FugueWorkflow
-
-def merge_and_save(file1, file2, file3, partition_num):
-    dag = FugueWorkflow()
-    df1 = dag.load(file1)
-    df2 = dag.load(file2)
-    df3 = df1.union(df2, distinct=False)
-    df3.partition(num=partition_num).save(file3)
-    return dag
-
-To unit test this logic, just use small local files and use the default execution engine. Assume you have a function assert_eq:
-merge_and_save(f1, f2, f3, 4).run()
-assert_eq(pd.read_parquet(f3), expected_df)
-
-And in real production, if the input files are large, you can switch to spark
-merge_and_save(f4, f5, f6, 100).run(spark_session)
-
-It's worth pointing out that partition_num is not respected by the default local execution engine, so we can't assert on the number of output files, but it takes effect when the backend is Spark or Dask.
-",Fugue
-"I'm running into 2 separate issues using the Grafeas golang v1beta1 API.
-What I'm trying to do
-
-Call ListOccurrencesRequest() with a Filter to get a list of occurrences for deletion
-Call DeleteOccurrence() on each occurrence from above list to delete it
-
-Issue #1
-I'm trying to set the Filter field using this GCP reference grafeas golang code as a guide.
-filterStr := fmt.Sprintf(`kind=%q`, grafeas_common_proto.NoteKind_BUILD.String())
-listReq := &grafeas_proto.ListOccurrencesRequest{
-    Parent:   BuildProject,
-    Filter:   filterStr,
-    PageSize: 100,
-}
-
-listOccResp, err := r.grafeasCommon.ListOccurrences(ctx, listReq)
-for {
-        if err != nil {
-            log.Error(""failed to iterate over occurrences"", zap.NamedError(""error"", err))
-            return nil, err
-        }
-        ...
-
-But it looks like my filterStr is invalid, here's the error:
-filterStr       {""filterStr"": ""kind=\""BUILD\""""}
-failed to iterate over occurrences      {""error"": ""rpc error: code = Internal desc = error while parsing filter expression: 4 errors occurred:\n\t* error parsing filter\n\t* Syntax error: token recognition error at: '=\""' (1:4)\n\t* Syntax error: token recognition error at: '\""' (1:11)\n\t* Syntax error: extraneous input 'BUILD' expecting <EOF> (1:6)\n\n""}
-
-It looks like the \ escape character is causing trouble, but I've tried it without it and get another flavor of the same type of error.
-Issue #2
-When I call DeleteOccurrence(), I can see that the occurrence is in fact deleted from Grafeas by checking:
-curl http://localhost:8080/v1beta1/projects/broker_builds/occurrences
-But DeleteOccurrence() always sets the err
-Code:
-    for _, o := range occToDelete {
-        log.Info(""occToDelete"", zap.String(""occurrence"", o))
-        _, err := r.grafeasCommon.DeleteOccurrence(ctx, &grafeas_proto.DeleteOccurrenceRequest{
-            Name: o,
-        })
-        if err != nil {
-            log.Error(""failed to delete occurrence"", zap.String(""occurrence"", o), zap.NamedError(""error"", err))
-        }
-    }
-
-Error:
-failed to delete occurrence     {""occurrence"": ""projects/broker_builds/occurrences/f61a4c57-a3d3-44a9-86ee-5d58cb6c6052"", ""error"": ""rpc error: code = Internal desc = grpc: error while marshaling: proto: Marshal called with nil""}
-
-I don't understand what the error is referring to.
-This question was cross-posted on Grafeas message board.
-Appreciate any help. Thanks.
-","1. Can you shed some details around the storage engine used, and the filtering implementations details?
-Issue 1. filtering is not implemented in any of the storage engines in gitHub.com/grafeas/grafeas.
-Issue 2. it depends what store you use, memstore/embededstore do not seem to be producing any errors similar to what you mentioned... if using postgresql store, are you trying to delete an occurrence twice?
-
-2. Solution for Issue #1
-I'm using grafeas-elasticsearch as the storage backend. It uses a different filter string format than the examples I had looked at in my original post.
-For example, it uses == instead of =, && instead of AND, etc.
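-Applied to the snippet in the question, the filter string would then look something like this (a sketch; double-check the exact syntax grafeas-elasticsearch expects):
-filterStr := fmt.Sprintf(`kind==%q`, grafeas_common_proto.NoteKind_BUILD.String())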
-More examples can be seen here:
-https://github.com/rode/grafeas-elasticsearch/blob/main/test/v1beta1/occurrence_test.go#L226
-Solution for Issue #2
-This is a known issue with Grafeas:
-https://github.com/grafeas/grafeas/pull/456
-https://github.com/grafeas/grafeas/pull/468
-Unfortunately, the latest tagged release of Grafeas, v0.1.6, does not include these fixes yet, so you will need to pick them up in the next release.
-Thanks to @Ovidiu Ghinet, that was a good tip.
-",Grafeas
-"I'm trying to install Kritis using :
-azureuser@Azure:~/kritis/docs/standalone$ helm install  kritis https://storage.googleapis.com/kritis-charts/repository/kritis-charts-0.2.0.tgz --set certificates.ca=""$(cat ca.crt)"" --set certificates.cert=""$(cat kritis.crt)"" --set certificates.key=""$(cat kritis.key)"" --debug
-
-But I'm getting the following error:
-install.go:148: [debug] Original chart version: """"
-install.go:165: [debug] CHART PATH: /home/azureuser/.cache/helm/repository/kritis-charts-0.2.0.tgz
-
-Error: unable to build kubernetes objects from release manifest: error validating """": error validating data: ValidationError(ClusterRole.metadata): unknown field ""kritis.grafeas.io/install"" in io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta
-helm.go:76: [debug] error validating """": error validating data: ValidationError(ClusterRole.metadata): unknown field ""kritis.grafeas.io/install"" in io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta
-helm.sh/helm/v3/pkg/kube.scrubValidationError
-        /home/circleci/helm.sh/helm/pkg/kube/client.go:520
-helm.sh/helm/v3/pkg/kube.(*Client).Build
-        /home/circleci/helm.sh/helm/pkg/kube/client.go:135
-
-Is there a way to know exactly in which file the error is being triggered, and what exactly that error means?
-The original chart files are available here: https://github.com/grafeas/kritis/blob/master/kritis-charts/templates/preinstall/clusterrolebinding.yaml
-","1. You cant get from where exactly this coming from but this output is giving some clues regarding that. 
-In your error message we have some useful information:
-helm.go:76: [debug] error validating """": error validating data: ValidationError(ClusterRole.metadata): unknown field ""kritis.grafeas.io/install"" in io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta
-
-
-error validating """"
-ClusterRole
-kritis.grafeas
-
-You can download your chart and dig into it for these terms as follows:
-$ wget https://storage.googleapis.com/kritis-charts/repository/kritis-charts-0.2.0.tgz
-$ tar xzvf kritis-charts-0.2.0.tgz 
-$ cd kritis-charts/
-
-If you grep for kritis.grafeas.io/install, you can see a ""variable"" being set:
-$ grep -R ""kritis.grafeas.io/install"" *
-values.yaml:kritisInstallLabel: ""kritis.grafeas.io/install""
-
-Now we can grep this variable and check what we can find: 
-$ grep -R ""kritisInstallLabel"" *
-templates/rbac.yaml:      {{ .Values.kritisInstallLabel }}: """"
-templates/rbac.yaml:    {{ .Values.kritisInstallLabel }}: """"
-templates/kritis-server-deployment.yaml:    {{ .Values.kritisInstallLabel }}: """"
-templates/preinstall/pod.yaml:    {{ .Values.kritisInstallLabel }}: """"
-templates/preinstall/pod.yaml:      - {{ .Values.kritisInstallLabel }}
-templates/preinstall/serviceaccount.yaml:    {{ .Values.kritisInstallLabel }}: """"
-templates/preinstall/clusterrolebinding.yaml:    {{ .Values.kritisInstallLabel }}: """"
-templates/postinstall/pod.yaml:    {{ .Values.kritisInstallLabel }}: """"
-templates/postinstall/pod.yaml:      - {{ .Values.kritisInstallLabel }}
-templates/secrets.yaml:    {{ .Values.kritisInstallLabel }}: """"
-templates/predelete/pod.yaml:    {{ .Values.kritisInstallLabel }}: """"
-templates/kritis-server-service.yaml:    {{ .Values.kritisInstallLabel }}: """"
-values.yaml:kritisInstallLabel: ""kritis.grafeas.io/install""
-
-In this output we can see an rbac.yaml file. That matches one of the terms we are looking for (ClusterRole):
-If we read this file, we can see the ClusterRole and a line referring to kritisInstallLabel:
-- apiVersion: rbac.authorization.k8s.io/v1beta1
-  kind: ClusterRoleBinding
-  metadata:
-    name: {{ .Values.clusterRoleBindingName }}
-    labels:
-      {{ .Values.kritisInstallLabel }}: """"
-
-{{ .Values.kritisInstallLabel }}: """" will be translated as .Values.kritis.grafeas.io/install by helm and that's where your error is coming from. 
-",Grafeas
-"What does the ""="" mean within the path parameter of the following OpenApi / Swagger spec?
-https://github.com/grafeas/grafeas/blob/master/proto/v1beta1/swagger/grafeas.swagger.json#L18
-Here is an excerpt (converted to YAML from JSON for readability):
-swagger: '2.0'
-info:
-  title: grafeas.proto
-  version: version not set
-schemes:
-  - http
-  - https
-consumes:
-  - application/json
-produces:
-  - application/json
-paths:
-  '/v1beta1/{name=projects/*/notes/*}':
-    get:
-      summary: Gets the specified note.
-      operationId: GetNote
-      responses:
-        '200':
-          description: A successful response.
-          schema:
-            $ref: '#/definitions/v1beta1Note'
-      parameters:
-        - name: name
-          description: |-
-            The name of the note in the form of
-            `projects/[PROVIDER_ID]/notes/[NOTE_ID]`.
-          in: path
-          required: true
-          type: string
-      tags:
-        - GrafeasV1Beta1
-
-The path is defined as /v1beta1/{name=projects/*/notes/*} and a parameter called name is defined, but when I put the whole .json into https://editor.swagger.io, I get errors of the form:
-
-Declared path parameter ""name=projects/*/notes/*"" needs to be defined
-  as a path parameter at either the path or operation level
-
-","1. I believe this swagger spec was auto-generated and the =TEXT within the {param} blocks to be an error.  I have raised this as https://github.com/grafeas/grafeas/issues/379.
-",Grafeas
-"I am using Keycloak for authentication, and I want to configure the Forgot Password feature to redirect users to my password reset website https://mypassport.xxx.com. Could you please guide me on how to set this up? Thank you!
-
-","1. I know its a bit late. But if don't have a solution till now I can suggest some changes to achieve this.
-Step 1: Download existing theme from keycloak container.
-\opt\keycloak\lib\lib\main\org.keycloak.keycloak-themes-xx.x.x.jar
-
-Step 2: Unzip the JAR file and copy \theme\base\ folder.
-Step 3: Rename your theme. I will use ""myTheme"".
-Step 4: Now edit the forgot-password href to point to your domain in myTheme\login\login.ftl, in the following line:
-<div class=""${properties.kcFormOptionsWrapperClass!}"">
-    <#if realm.resetPasswordAllowed>
-        <span><a tabindex=""5"" href=""${url.loginResetCredentialsUrl}"">
-            ${msg(""doForgotPassword"")}
-        </a></span>
-    </#if>
-</div>
-
-After the edit it will look like:
-<div class=""${properties.kcFormOptionsWrapperClass!}"">
-    <#if realm.resetPasswordAllowed>
-        <span><a tabindex=""5"" href=""https://yourdomain.com/something"">
-            ${msg(""doForgotPassword"")}
-        </a></span>
-    </#if>
-</div>
-
-Step 5: You can do your own style changes as well if needed.
-Step 6: Upload the theme folder to \opt\keycloak\themes\
-Step 7: In the Admin UI, Select your Realm -> Realm Settings -> Themes -> Choose Login theme as ""myTheme"".
-Hope it helps!!!
-
-2. I have successfully changed the forgot-password redirection address according to your steps, thank you very much, but now I encounter a new problem. When I change the theme to a custom theme, the login page looks a bit ugly. It is not the login screen of the Keycloak theme.
-
-I want to change this to:
-
-",Keycloak
-"Container must follow the Security best Practices from Kubernetes Community and developers and they need to apply alle the Recommendation from CIS Benchmark.
-what about InitContainer ,should they also follow the same Practices?
-and what if not , which Security Threads could come from Completed Container?
-thanks
-","1. what about InitContainer ,should they also follow the same Practices?
-
-Yes
-
-And if not, which security threats could come from a Completed container?
-
-They could do damage before reaching the Completed state.
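-For example, the same hardening you would apply to a regular container (a minimal sketch; the names and image tags are placeholders) also applies to an init container:
-apiVersion: v1
-kind: Pod
-metadata:
-  name: hardened-example
-spec:
-  initContainers:
-    - name: init-setup
-      image: busybox:1.36
-      command: [""sh"", ""-c"", ""echo init done""]
-      securityContext:
-        runAsNonRoot: true
-        runAsUser: 10001
-        allowPrivilegeEscalation: false
-        readOnlyRootFilesystem: true
-        capabilities:
-          drop: [""ALL""]
-  containers:
-    - name: app
-      image: nginx:1.25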
-
-",kube-bench
-"I am trying to apply the kube-bench on k8s cluster on gcp environment. while creating the cluster it is failing with message:
-Error: failed to generate container ""<container_id>"" spec: failed to generate spec: failed to mkdir ""/srv/kubernetes"": mkdir /srv: read-only file system
-
-job.yml:
-apiVersion: batch/v1
-kind: Job
-metadata:
-  name: kube-bench-master
-spec:
-  template:
-    spec:
-      hostPID: true
-      nodeSelector:
-        node-role.kubernetes.io/master: """"
-      tolerations:
-        - key: node-role.kubernetes.io/master
-          operator: Exists
-          effect: NoSchedule
-      containers:
-        - name: kube-bench
-          image: aquasec/kube-bench:latest
-          command: [""kube-bench"", ""run"", ""--targets"", ""master""]
-          volumeMounts:
-            - name: var-lib-etcd
-              mountPath: /var/lib/etcd
-              readOnly: true
-            - name: var-lib-kubelet
-              mountPath: /var/lib/kubelet
-              readOnly: true
-            - name: var-lib-kube-scheduler
-              mountPath: /var/lib/kube-scheduler
-              readOnly: true
-            - name: var-lib-kube-controller-manager
-              mountPath: /var/lib/kube-controller-manager
-              readOnly: true
-            - name: etc-systemd
-              mountPath: /etc/systemd
-              readOnly: true
-            - name: lib-systemd
-              mountPath: /lib/systemd/
-              readOnly: true
-            - name: srv-kubernetes
-              mountPath: /srv/kubernetes/
-              readOnly: true
-            - name: etc-kubernetes
-              mountPath: /etc/kubernetes
-              readOnly: true
-              # /usr/local/mount-from-host/bin is mounted to access kubectl / kubelet, for auto-detecting the Kubernetes version.
-              # You can omit this mount if you specify --version as part of the command.
-            - name: usr-bin
-              mountPath: /usr/local/mount-from-host/bin
-              readOnly: true
-            - name: etc-cni-netd
-              mountPath: /etc/cni/net.d/
-              readOnly: true
-            - name: opt-cni-bin
-              mountPath: /opt/cni/bin/
-              readOnly: true
-            - name: etc-passwd
-              mountPath: /etc/passwd
-              readOnly: true
-            - name: etc-group
-              mountPath: /etc/group
-              readOnly: true
-      restartPolicy: Never
-      volumes:
-        - name: var-lib-etcd
-          hostPath:
-            path: ""/var/lib/etcd""
-        - name: var-lib-kubelet
-          hostPath:
-            path: ""/var/lib/kubelet""
-        - name: var-lib-kube-scheduler
-          hostPath:
-            path: ""/var/lib/kube-scheduler""
-        - name: var-lib-kube-controller-manager
-          hostPath:
-            path: ""/var/lib/kube-controller-manager""
-        - name: etc-systemd
-          hostPath:
-            path: ""/etc/systemd""
-        - name: lib-systemd
-          hostPath:
-            path: ""/lib/systemd""
-        - name: srv-kubernetes
-          hostPath:
-            path: ""/srv/kubernetes""
-        - name: etc-kubernetes
-          hostPath:
-            path: ""/etc/kubernetes""
-        - name: usr-bin
-          hostPath:
-            path: ""/usr/bin""
-        - name: etc-cni-netd
-          hostPath:
-            path: ""/etc/cni/net.d/""
-        - name: opt-cni-bin
-          hostPath:
-            path: ""/opt/cni/bin/""
-        - name: etc-passwd
-          hostPath:
-            path: ""/etc/passwd""
-        - name: etc-group
-          hostPath:
-            path: ""/etc/group""
-
-git link
-","1. Your trying to create folder inside folder with permission ReadOnly.
-The easiest workaround to make it work is changing your path form:
-            - name: srv-kubernetes
-              mountPath: /srv/kubernetes/
-
-to, for example:
-            - name: srv-kubernetes
-              mountPath: /tmp/kubernetes/
-
-The second solution is to change the permissions of this folder.
-See also this and this question, which have helpful answers related to your issue.
-",kube-bench
-"I know how to use RBAC with X.509 certificates to identify a user of kubectl and restrict them (using Role and RoleBinding) from creating pods of any kind in a namespace. However, I don't know how I can prevent them from putting specific labels on a pod (or any resource) they're trying to create.
-What I want to do is something like:
-
-Create a NetworkPolicy that only resources in other namespaces with the label group: cross-ns are allowed to reach a resource in the special-namespace
-Have a user who cannot create pods or other resources with the label group: cross-ns
-Have another user who can create resources with the label group: cross-ns
-
-Is this possible?
-","1. You can use the Kubernetes-native policy engine called Kyverno:
-
-Kyverno runs as a dynamic admission controller in a Kubernetes cluster. Kyverno receives validating and mutating admission webhook HTTP callbacks from the kube-apiserver and applies matching policies to return results that enforce admission policies or reject requests.
-
-A Kyverno policy is a collection of rules that can be applied to the entire cluster (ClusterPolicy) or to the specific namespace (Policy).
-
-I will create an example to illustrate how it may work.
-First we need to install Kyverno, you have the option of installing Kyverno directly from the latest release manifest, or using Helm (see: Quick Start guide):
-$ kubectl create -f https://raw.githubusercontent.com/kyverno/kyverno/main/definitions/release/install.yaml
-
-After successful installation, let's create a simple ClusterPolicy:
-apiVersion: kyverno.io/v1
-kind: ClusterPolicy
-metadata:
-  name: labeling-policy
-spec:
-  validationFailureAction: enforce
-  background: false
-  rules:
-  - name: deny-rule
-    match:
-      resources:
-        kinds:
-        - Pod
-    exclude:
-      clusterRoles:
-      - cluster-admin
-    preconditions:
-      - key: ""{{request.object.metadata.labels.purpose}}""
-        operator: Equals
-        value: ""*""
-    validate:
-      message: ""Using purpose label is not allowed for you""
-      deny: {}
-
-In the example above, only a user with the cluster-admin ClusterRole can modify a Pod with the purpose label.
-Suppose I have two users (john and dave), but only john is linked to the cluster-admin ClusterRole via ClusterRoleBinding:
-$ kubectl describe clusterrolebinding john-binding
-Name:         john-binding
-Labels:       <none>
-Annotations:  <none>
-Role:
-  Kind:  ClusterRole
-  Name:  cluster-admin
-Subjects:
-  Kind  Name  Namespace
-  ----  ----  ---------
-  User  john
-
-Finally, we can test if it works as expected:
-$ kubectl run test-john --image=nginx --labels purpose=test --as john
-pod/test-john created
-
-$ kubectl run test-dave --image=nginx --labels purpose=test --as dave
-Error from server: admission webhook ""validate.kyverno.svc"" denied the request:
-
-resource Pod/default/test-dave was blocked due to the following policies
-
-labeling-policy:
-  deny-rule: Using purpose label is not allowed for you
-
-$ kubectl get pods --show-labels
-NAME        READY   STATUS    RESTARTS   AGE   LABELS
-test-john   1/1     Running   0          32s   purpose=test
-
-More examples with detailed explanations can be found in the Kyverno Writing Policies documentation.
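-For completeness, here is a minimal sketch of the NetworkPolicy side described in the question; the policy name and the special-namespace name are assumptions, and the selectors assume the group: cross-ns label convention from the question:
-apiVersion: networking.k8s.io/v1
-kind: NetworkPolicy
-metadata:
-  name: allow-cross-ns        # hypothetical name
-  namespace: special-namespace
-spec:
-  podSelector: {}             # applies to every pod in special-namespace
-  policyTypes:
-  - Ingress
-  ingress:
-  - from:
-    - namespaceSelector: {}   # traffic from any namespace...
-      podSelector:            # ...but only from pods carrying the label
-        matchLabels:
-          group: cross-ns
-
-With this in place, the Kyverno policy above controls who is allowed to attach that label in the first place.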
-",Kyverno
-"I am trying to setup a policy to block image without attestation.
-Here is my code: https://github.com/whoissqr/cg-test-keyless-sign
-my ClusterPolicy is as following
-apiVersion: kyverno.io/v1
-kind: ClusterPolicy
-metadata:
-  name: check-image-keyless
-spec:
-  validationFailureAction: Enforce
-  failurePolicy: Fail
-  background: false
-  webhookTimeoutSeconds: 30
-  rules:
-    - name: check-image-keyless
-      match:
-        any:
-        - resources:
-            kinds:
-              - Pod
-      verifyImages:
-      - verifyDigest: false
-        imageReferences:
-        - ""ghcr.io/whoissqr/cg-test-keyless-sign:latest""
-        attestors:
-        - entries:
-          - keyless:
-              subject: ""https://github.com/whoissqr/cg-test-keyless-sign/.github/workflows/main.yml@refs/heads/main""
-              issuer: ""https://token.actions.githubusercontent.com""
-              rekor:
-                url: https://rekor.sigstore.dev
-
-and when I run  kubectl get clusterpolicies -o yaml | kyverno apply - --resource ./k3s/pod.yaml -v 5, I got
-policy check-image-keyless -> resource app/Pod/cg failed: 
-1. check-image-keyless: unverified image ghcr.io/whoissqr/cg-test-keyless-sign:latest 
-I0226 13:11:26.376474    6153 cosign.go:86] cosign ""msg""=""verified image"" ""bundleVerified""=true ""count""=1
-I0226 13:11:26.376625    6153 imageVerify.go:511] EngineVerifyImages ""msg""=""image attestors verification succeeded"" ""kind""=""Pod"" ""name""=""cg"" ""namespace""=""app"" ""policy""=""check-image-keyless"" ""requiredCount""=1 ""verifiedCount""=1
-I0226 13:11:26.376663    6153 imageVerify.go:287] EngineVerifyImages ""msg""=""adding digest patch"" ""image""=""ghcr.io/whoissqr/cg-test-keyless-sign:latest"" ""kind""=""Pod"" ""name""=""cg"" ""namespace""=""app"" ""patch""=""{\""op\"":\""replace\"",\""path\"":\""/spec/containers/0/image\"",\""value\"":\""ghcr.io/whoissqr/cg-test-keyless-sign:latest@sha256:0c1f3bc065a0f1e7ea189fe50cf6f0e74e20b046bcfb6674eb716bd0af80f457\""}"" ""policy""=""check-image-keyless""
-I0226 13:11:26.376891    6153 validation.go:591] EngineVerifyImages ""msg""=""resource does not match rule"" ""kind""=""Pod"" ""name""=""cg"" ""namespace""=""app"" ""policy""=""check-image-keyless"" ""reason""=""rule autogen-check-image-keyless not matched:\n 1. no resource matched""
-I0226 13:11:26.376996    6153 validation.go:591] EngineVerifyImages ""msg""=""resource does not match rule"" ""kind""=""Pod"" ""name""=""cg"" ""namespace""=""app"" ""policy""=""check-image-keyless"" ""reason""=""rule autogen-cronjob-check-image-keyless not matched:\n 1. no resource matched""
-I0226 13:11:26.377050    6153 imageVerify.go:83] EngineVerifyImages ""msg""=""processed image verification rules"" ""applied""=1 ""kind""=""Pod"" ""name""=""cg"" ""namespace""=""app"" ""policy""=""check-image-keyless"" ""successful""=true ""time""=""1.301291106s""
-I0226 13:11:26.377099    6153 rule.go:233] autogen ""msg""=""processing rule"" ""rulename""=""check-image-keyless""
-I0226 13:11:26.377219    6153 rule.go:286] autogen ""msg""=""generating rule for cronJob"" 
-I0226 13:11:26.377235    6153 rule.go:233] autogen ""msg""=""processing rule"" ""rulename""=""check-image-keyless""
-I0226 13:11:26.377335    6153 rule.go:233] autogen ""msg""=""processing rule"" ""rulename""=""check-image-keyless""
-I0226 13:11:26.377416    6153 rule.go:286] autogen ""msg""=""generating rule for cronJob"" 
-I0226 13:11:26.377432    6153 rule.go:233] autogen ""msg""=""processing rule"" ""rulename""=""check-image-keyless""
-pass: 1, fail: 1, warn: 0, error: 0, skip: 4 
-Error: Process completed with exit code 1.
-
-What exactly is the 'fail: 1' about?
-Also, the cosign verification itself is passing.
-
-","1. The following worked, thanks to expert in kyverno slack channel:
-      - name: (optional) Install Kyverno CLI
-        if: always() 
-        uses: kyverno/action-install-cli@v0.2.0
-          
-      - name: (optional) Dry run policy using Kyverno CLI
-        if: always() 
-        run: |
-          kyverno version
-          kyverno apply ./k3s/policy-check-image-keyless.yaml --resource ./k3s/pod.yaml
-          # kubectl get clusterpolicies -o yaml | kyverno apply - --resource ./k3s/pod.yaml -v 10
-
-",Kyverno
-"I am writing a simple test to verify that Kyverno is able to block images without attestation from being deployed in k3s cluster.
-https://github.com/whoissqr/cg-test-keyless-sign
-I have the following ClusterPolicy
-apiVersion: kyverno.io/v1
-kind: ClusterPolicy
-metadata:
-  name: check-image-keyless
-spec:
-  validationFailureAction: Enforce
-  failurePolicy: Fail
-  webhookTimeoutSeconds: 30
-  rules:
-    - name: check-image-keyless
-      match:
-        any:
-        - resources:
-            kinds:
-              - Pod
-      verifyImages:
-      - imageReferences:
-        - ""ghcr.io/whoissqr/cg-test-keyless-sign""
-        attestors:
-        - entries:
-          - keyless:
-              subject: ""https://github.com/whoissqr/cg-test-keyless-sign/.github/workflows/main.yml@refs/heads/main""
-              issuer: ""https://token.actions.githubusercontent.com""
-              rekor:
-                url: https://rekor.sigstore.dev
-
-and the following pod yaml
-apiVersion: v1
-kind: Pod
-metadata:
-  name: cg
-  namespace: app
-spec:
-  containers:
-    - image: ghcr.io/whoissqr/cg-test-keyless-sign
-      name: cg-test-keyless-sign
-      resources: {}
-
-I purposely commented out the cosign signing step in the GitHub Action so that cosign verification fails as expected, but the pod deployment to k3s still succeeds. What am I missing here?
-name: Publish and Sign Container Image
-
-on:
-  schedule:
-    - cron: '32 11 * * *'
-  push:
-    branches: [ main ]
-    # Publish semver tags as releases.
-    tags: [ 'v*.*.*' ]
-  pull_request:
-    branches: [ main ]
-
-jobs:
-  build:
-
-    runs-on: ubuntu-latest
-    permissions:
-      contents: read
-      packages: write
-      id-token: write
-
-    steps:
-      - name: Checkout repository
-        uses: actions/checkout@v2
-
-      - name: Install cosign
-        uses: sigstore/cosign-installer@v3.2.0
-          
-      - name: Check install!
-        run: cosign version
-        
-      - name: Setup Docker buildx
-        uses: docker/setup-buildx-action@v2
-
-      - name: Log into ghcr.io
-        uses: docker/login-action@master
-        with:
-          registry: ghcr.io
-          username: ${{ github.actor }}
-          password: ${{ secrets.GITHUB_TOKEN }}
-
-      - name: Build and push container image
-        id: push-step
-        uses: docker/build-push-action@master
-        with:
-          push: true
-          tags: ghcr.io/${{ github.repository }}:latest
-
-      - name: Sign the images with GitHub OIDC Token
-        env:
-          DIGEST: ${{ steps.push-step.outputs.digest }}
-          TAGS: ghcr.io/${{ github.repository }}
-          COSIGN_EXPERIMENTAL: ""true""
-        run: |
-          echo ""dont sign image""
-          # cosign sign --yes ""${TAGS}@${DIGEST}""
-        
-      - name: Verify the images
-        run: |
-          cosign verify ghcr.io/whoissqr/cg-test-keyless-sign \
-             --certificate-identity https://github.com/whoissqr/cg-test-keyless-sign/.github/workflows/main.yml@refs/heads/main \
-             --certificate-oidc-issuer https://token.actions.githubusercontent.com | jq
-
-      - name: Create k3s cluster
-        uses: debianmaster/actions-k3s@master
-        id: k3s
-        with:
-          version: 'latest'
-          
-      - name: Install Kyverno chart
-        run: |
-          helm repo add kyverno https://kyverno.github.io/kyverno/
-          helm repo update
-          helm install kyverno kyverno/kyverno -n kyverno --create-namespace
-
-      - name: Apply image attestation policy
-        run: |
-          kubectl apply -f ./k3s/policy-check-image-keyless.yaml
-          
-      - name: Deploy pod to k3s
-        run: |
-          set -x
-          # kubectl get nodes
-          kubectl create ns app
-          sleep 20
-          # kubectl get pods -n app
-          kubectl apply -f ./k3s/pod.yaml
-          kubectl -n app wait --for=condition=Ready pod/cg
-          kubectl get pods -n app
-          kubectl -n app describe pod cg
-          kubectl get polr -o wide
-
-      - name: Install Kyverno CLI
-        uses: kyverno/action-install-cli@v0.2.0
-        with:
-          release: 'v1.9.5'
-          
-      - name: Check policy using Kyverno CLI
-        run: |
-          kyverno version
-          kyverno apply ./k3s/policy-check-image-keyless.yaml --cluster -v 10
-
-in the GH action console
-+ kubectl apply -f ./k3s/pod.yaml
-pod/cg created
-+ kubectl -n app wait --for=condition=Ready pod/cg
-pod/cg condition met
-+ kubectl get pods -n app
-NAME   READY   STATUS    RESTARTS   AGE
-cg     1/1     Running   0          12s
-
-and the kyverno CLI output has
-I0225 10:00:31.650505    6794 common.go:424]  ""msg""=""applying policy on resource"" ""policy""=""check-image-keyless"" ""resource""=""app/Pod/cg""
-I0225 10:00:31.652646    6794 context.go:278]  ""msg""=""updated image info"" ""images""={""containers"":{""cg-test-keyless-sign"":{""registry"":""ghcr.io"",""name"":""cg-test-keyless-sign"",""path"":""whoissqr/cg-test-keyless-sign"",""tag"":""latest""}}}
-I0225 10:00:31.654017    6794 utils.go:29]  ""msg""=""applied JSON patch"" ""patch""=[{""op"":""replace"",""path"":""/spec/containers/0/image"",""value"":""ghcr.io/whoissqr/cg-test-keyless-sign:latest""}]
-I0225 10:00:31.659697    6794 mutation.go:39] EngineMutate ""msg""=""start mutate policy processing"" ""kind""=""Pod"" ""name""=""cg"" ""namespace""=""app"" ""policy""=""check-image-keyless"" ""startTime""=""2024-02-25T10:00:31.659674165Z""
-I0225 10:00:31.659737    6794 rule.go:233] autogen ""msg""=""processing rule"" ""rulename""=""check-image-keyless""
-I0225 10:00:31.659815    6794 rule.go:286] autogen ""msg""=""generating rule for cronJob"" 
-I0225 10:00:31.659834    6794 rule.go:233] autogen ""msg""=""processing rule"" ""rulename""=""check-image-keyless""
-I0225 10:00:31.659940    6794 mutation.go:379] EngineMutate ""msg""=""finished processing policy"" ""kind""=""Pod"" ""mutationRulesApplied""=0 ""name""=""cg"" ""namespace""=""app"" ""policy""=""check-image-keyless"" ""processingTime""=""249.225µs""
-I0225 10:00:31.659966    6794 rule.go:233] autogen ""msg""=""processing rule"" ""rulename""=""check-image-keyless""
-I0225 10:00:31.660040    6794 rule.go:286] autogen ""msg""=""generating rule for cronJob"" 
-I0225 10:00:31.660059    6794 rule.go:233] autogen ""msg""=""processing rule"" ""rulename""=""check-image-keyless""
-I0225 10:00:31.660153    6794 rule.go:233] autogen ""msg""=""processing rule"" ""rulename""=""check-image-keyless""
-I0225 10:00:31.660218    6794 rule.go:286] autogen ""msg""=""generating rule for cronJob"" 
-I0225 10:00:31.660236    6794 rule.go:233] autogen ""msg""=""processing rule"" ""rulename""=""check-image-keyless""
-I0225 10:00:31.660337    6794 rule.go:233] autogen ""msg""=""processing rule"" ""rulename""=""check-image-keyless""
-I0225 10:00:31.660402    6794 rule.go:286] autogen ""msg""=""generating rule for cronJob"" 
-I0225 10:00:31.660421    6794 rule.go:233] autogen ""msg""=""processing rule"" ""rulename""=""check-image-keyless""
-I0225 10:00:31.660648    6794 discovery.go:269] dynamic-client ""msg""=""matched API resource to kind"" ""apiResource""={""name"":""pods"",""singularName"":""pod"",""namespaced"":true,""version"":""v1"",""kind"":""Pod"",""verbs"":[""create"",""delete"",""deletecollection"",""get"",""list"",""patch"",""update"",""watch""],""shortNames"":[""po""],""categories"":[""all""]} ""kind""=""Pod""
-I0225 10:00:31.660729    6794 imageVerify.go:121] EngineVerifyImages ""msg""=""processing image verification rule"" ""kind""=""Pod"" ""name""=""cg"" ""namespace""=""app"" ""policy""=""check-image-keyless"" ""ruleSelector""=""All""
-I0225 10:00:31.660889    6794 discovery.go:269] dynamic-client ""msg""=""matched API resource to kind"" ""apiResource""={""name"":""daemonsets"",""singularName"":""daemonset"",""namespaced"":true,""group"":""apps"",""version"":""v1"",""kind"":""DaemonSet"",""verbs"":[""create"",""delete"",""deletecollection"",""get"",""list"",""patch"",""update"",""watch""],""shortNames"":[""ds""],""categories"":[""all""]} ""kind""=""DaemonSet""
-I0225 10:00:31.661037    6794 discovery.go:269] dynamic-client ""msg""=""matched API resource to kind"" ""apiResource""={""name"":""deployments"",""singularName"":""deployment"",""namespaced"":true,""group"":""apps"",""version"":""v1"",""kind"":""Deployment"",""verbs"":[""create"",""delete"",""deletecollection"",""get"",""list"",""patch"",""update"",""watch""],""shortNames"":[""deploy""],""categories"":[""all""]} ""kind""=""Deployment""
-I0225 10:00:31.661184    6794 discovery.go:269] dynamic-client ""msg""=""matched API resource to kind"" ""apiResource""={""name"":""jobs"",""singularName"":""job"",""namespaced"":true,""group"":""batch"",""version"":""v1"",""kind"":""Job"",""verbs"":[""create"",""delete"",""deletecollection"",""get"",""list"",""patch"",""update"",""watch""],""categories"":[""all""]} ""kind""=""Job""
-I0225 10:00:31.661327    6794 discovery.go:269] dynamic-client ""msg""=""matched API resource to kind"" ""apiResource""={""name"":""statefulsets"",""singularName"":""statefulset"",""namespaced"":true,""group"":""apps"",""version"":""v1"",""kind"":""StatefulSet"",""verbs"":[""create"",""delete"",""deletecollection"",""get"",""list"",""patch"",""update"",""watch""],""shortNames"":[""sts""],""categories"":[""all""]} ""kind""=""StatefulSet""
-I0225 10:00:31.661465    6794 discovery.go:269] dynamic-client ""msg""=""matched API resource to kind"" ""apiResource""={""name"":""replicasets"",""singularName"":""replicaset"",""namespaced"":true,""group"":""apps"",""version"":""v1"",""kind"":""ReplicaSet"",""verbs"":[""create"",""delete"",""deletecollection"",""get"",""list"",""patch"",""update"",""watch""],""shortNames"":[""rs""],""categories"":[""all""]} ""kind""=""ReplicaSet""
-I0225 10:00:31.661606    6794 discovery.go:269] dynamic-client ""msg""=""matched API resource to kind"" ""apiResource""={""name"":""replicationcontrollers"",""singularName"":""replicationcontroller"",""namespaced"":true,""version"":""v1"",""kind"":""ReplicationController"",""verbs"":[""create"",""delete"",""deletecollection"",""get"",""list"",""patch"",""update"",""watch""],""shortNames"":[""rc""],""categories"":[""all""]} ""kind""=""ReplicationController""
-I0225 10:00:31.661789    6794 validation.go:591] EngineVerifyImages ""msg""=""resource does not match rule"" ""kind""=""Pod"" ""name""=""cg"" ""namespace""=""app"" ""policy""=""check-image-keyless"" ""reason""=""rule autogen-check-image-keyless not matched:\n 1. no resource matched""
-I0225 10:00:31.661938    6794 discovery.go:269] dynamic-client ""msg""=""matched API resource to kind"" ""apiResource""={""name"":""cronjobs"",""singularName"":""cronjob"",""namespaced"":true,""group"":""batch"",""version"":""v1"",""kind"":""CronJob"",""verbs"":[""create"",""delete"",""deletecollection"",""get"",""list"",""patch"",""update"",""watch""],""shortNames"":[""cj""],""categories"":[""all""]} ""kind""=""CronJob""
-I0225 10:00:31.662056    6794 validation.go:591] EngineVerifyImages ""msg""=""resource does not match rule"" ""kind""=""Pod"" ""name""=""cg"" ""namespace""=""app"" ""policy""=""check-image-keyless"" ""reason""=""rule autogen-cronjob-check-image-keyless not matched:\n 1. no resource matched""
-I0225 10:00:31.662091    6794 imageVerify.go:83] EngineVerifyImages ""msg""=""processed image verification rules"" ""applied""=0 ""kind""=""Pod"" ""name""=""cg"" ""namespace""=""app"" ""policy""=""check-image-keyless"" ""successful""=true ""time""=""1.748335ms""
-I0225 10:00:31.662113    6794 rule.go:233] autogen ""msg""=""processing rule"" ""rulename""=""check-image-keyless""
-I0225 10:00:31.662189    6794 rule.go:286] autogen ""msg""=""generating rule for cronJob"" 
-I0225 10:00:31.662208    6794 rule.go:233] autogen ""msg""=""processing rule"" ""rulename""=""check-image-keyless""
-I0225 10:00:31.662302    6794 rule.go:233] autogen ""msg""=""processing rule"" ""rulename""=""check-image-keyless""
-I0225 10:00:31.662368    6794 rule.go:286] autogen ""msg""=""generating rule for cronJob"" 
-I0225 10:00:31.662385    6794 rule.go:233] autogen ""msg""=""processing rule"" ""rulename""=""check-image-keyless""
-I0225 10:00:31.662481    6794 rule.go:233] autogen ""msg""=""processing rule"" ""rulename""=""check-image-keyless""
-I0225 10:00:31.662544    6794 rule.go:286] autogen ""msg""=""generating rule for cronJob"" 
-I0225 10:00:31.662577    6794 rule.go:233] autogen ""msg""=""processing rule"" ""rulename""=""check-image-keyless""
-
-thanks!
-==== edit 02/25/2024 8:24 PM =====
-I noticed one small typo and added the tag 'latest' to the image reference in the policy YAML, and now the 'kyverno apply' step fails as expected:
-
-policy check-image-keyless -> resource app/Pod/cg failed: 
-1. check-image-keyless: failed to verify image ghcr.io/whoissqr/cg-test-keyless-sign:latest: .attestors[0].entries[0].keyless: no matching signatures:
-....
-pass: 0, fail: 1, warn: 0, error: 0, skip: 98 
-Error: Process completed with exit code 1.
-
-However, my 'kubectl apply -f ./k3s/pod.yaml' statement in the previous step still proceeds without error and the pod is still created and running.
-Why?
-==== 2nd edit =====
-We need to add the following to the policy:
-background: false
-
-","1. apiVersion: kyverno.io/v1
-kind: ClusterPolicy
-metadata:
-  name: check-image-keyless
-spec:
-  validationFailureAction: Enforce
-  failurePolicy: Fail
-  background: false
-  webhookTimeoutSeconds: 30
-  rules:
-    - name: check-image-keyless
-      match:
-        any:
-        - resources:
-            kinds:
-              - Pod
-      verifyImages:
-      - imageReferences:
-        - ""ghcr.io/whoissqr/cg-test-keyless-sign:latest""
-        attestors:
-        - entries:
-          - keyless:
-              subject: ""https://github.com/whoissqr/cg-test-keyless-sign/.github/workflows/main.yml@refs/heads/main""
-              issuer: ""https://token.actions.githubusercontent.com""
-              rekor:
-                url: https://rekor.sigstore.dev
-
-pod yaml
-apiVersion: v1
-kind: Pod
-metadata:
-  name: cg
-  namespace: app
-spec:
-  containers:
-    - image: ghcr.io/whoissqr/cg-test-keyless-sign:latest
-      name: cg-test-keyless-sign
-      resources: {}
-
-Github action
-name: Publish and Sign Container Image
-
-on:
-  schedule:
-    - cron: '32 11 * * *'
-  push:
-    branches: [ main ]
-    # Publish semver tags as releases.
-    tags: [ 'v*.*.*' ]
-  pull_request:
-    branches: [ main ]
-
-jobs:
-  build:
-
-    runs-on: ubuntu-latest
-    permissions:
-      contents: read
-      packages: write
-      id-token: write
-
-    steps:
-      - name: Checkout repository
-        uses: actions/checkout@v2
-
-      - name: Install cosign
-        uses: sigstore/cosign-installer@v3.2.0
-          
-      - name: Check install!
-        run: cosign version
-        
-      - name: Setup Docker buildx
-        uses: docker/setup-buildx-action@v2
-
-      - name: Log into ghcr.io
-        uses: docker/login-action@master
-        with:
-          registry: ghcr.io
-          username: ${{ github.actor }}
-          password: ${{ secrets.GITHUB_TOKEN }}
-
-      - name: Build and push container image
-        id: push-step
-        uses: docker/build-push-action@master
-        with:
-          push: true
-          tags: ghcr.io/${{ github.repository }}:latest
-
-      - name: Sign the images with GitHub OIDC Token
-        env:
-          DIGEST: ${{ steps.push-step.outputs.digest }}
-          TAGS: ghcr.io/${{ github.repository }}
-          COSIGN_EXPERIMENTAL: ""true""
-        run: |
-          echo ""dont sign image""
-          # cosign sign --yes ""${TAGS}@${DIGEST}""
-        
-      - name: (optional) Verify the images
-        run: |
-          cosign verify ghcr.io/whoissqr/cg-test-keyless-sign \
-             --certificate-identity https://github.com/whoissqr/cg-test-keyless-sign/.github/workflows/main.yml@refs/heads/main \
-             --certificate-oidc-issuer https://token.actions.githubusercontent.com | jq
-
-      - name: Create k3s cluster
-        uses: debianmaster/actions-k3s@master
-        id: k3s
-        with:
-          version: 'latest'
-          
-      - name: Install Kyverno chart
-        run: |
-          helm repo add kyverno https://kyverno.github.io/kyverno/
-          helm repo update
-          helm install --atomic kyverno kyverno/kyverno -n kyverno --create-namespace
-          sleep 10
-
-      - name: Apply image attestation policy
-        run: |
-          kubectl apply -f ./k3s/policy-check-image-keyless.yaml
-
-      - name: Deploy pod to k3s
-        if: always() 
-        run: |
-          kubectl create ns app
-          kubectl apply -f ./k3s/pod.yaml
-          kubectl -n app wait --for=condition=Ready pod/cg
-          kubectl get pods -n app
-
-      - name: (optional) Install Kyverno CLI
-        if: always() 
-        uses: kyverno/action-install-cli@v0.2.0
-        with:
-          release: 'v1.9.5'
-          
-      - name: (optional) Dry run policy using Kyverno CLI
-        if: always() 
-        run: |
-          kyverno version
-          # kyverno apply ./k3s/policy-check-image-keyless.yaml --cluster -v 10
-          kubectl get clusterpolicies -o yaml | kyverno apply - --resource ./k3s/pod.yaml -v 10
-          
-
-",Kyverno
-"I have been following the steps in the action script here closely (https://github.com/sudo-bot/action-docker-sign/blob/main/action.yml) to trust and sign a multi-platform image. The only modification required was for the extraction of the SHA256 where I extract the last SHA256 returned by the manifest-push-command (the cut command in the action script does not seem to return a valid SHA256); maybe the manifest-push result has changed. I have also tried different SHA256 values returned by the push with the same result.
-This is the script, using Docker 23.0.0 and the notary package installed with sudo apt-get install notary on Ubuntu.
-The script completes without error but there is no image tag signature in the end. What am I missing? How do you trust and sign multi-platform image tags?
-Note that buildx does not help signing multi-platform images; it just pushes unsigned images as far as I know.
-export DOCKER_CONTENT_TRUST=1
-
-# build for platforms, authentication build args omitted; needs docker 23.0.0
-docker build --platform=linux/amd64 --tag mydockerid/test-amd64:$(tag)$(tagSuffix) --file $(Folder)/Dockerfile .
-docker build --platform=linux/arm64 --tag mydockerid/test-arm64:$(tag)$(tagSuffix) --file $(Folder)/Dockerfile .
-
-export DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE='$(SignerKeyPassword)'
-docker trust key load $(signerKey.secureFilePath)
-
-export NOTARY_TARGETS_PASSPHRASE='$(TargetKeyPassword)'
-export NOTARY_SNAPSHOT_PASSPHRASE='$(SnapshotKeyPassword)'
-
-# Sign and push platform specific images - is it necessary to sign these?
-docker trust sign mydockerid/test-amd64:$(tag)$(tagSuffix)
-docker trust sign mydockerid/test-arm64:$(tag)$(tagSuffix)
-
-# Create manifest list from platform manifests
-docker manifest create mydockerid/test:$(tag)$(tagSuffix) mydockerid/test-amd64:$(tag)$(tagSuffix) mydockerid/test-arm64:$(tag)$(tagSuffix)
-
-# original action command does not extract a valid SHA
-# SHA_256=$(docker manifest push mydockerid/test:$(tag)$(tagSuffix) --purge | cut -d ':' -f 2)
-
-# Push manifest
-MANIFEST=$(docker manifest push mydockerid/test:$(tag)$(tagSuffix) --purge)
-# Extract the last sha256 returned by the push command, which is the only sha256 not corresponding to layers
-echo ""MANIFEST: ${MANIFEST}""
-SHA_256=$(echo ${MANIFEST//*:})          
-echo ""SHA_256: $SHA_256""
-
-MANIFEST_FROM_REG=""$(docker manifest inspect ""mydockerid/test:$(tag)$(tagSuffix)"" -v)"";
-echo ""MANIFEST_FROM_REG: $MANIFEST_FROM_REG""
-
-# Determine byte size as per action script
-BYTES_SIZE=""$(printf ""${MANIFEST_FROM_REG}"" | jq -r '.[].Descriptor.size' | uniq)"";
-echo ""BYTES_SIZE: $BYTES_SIZE""
-
-REF=""mydockerid/test""
-TAG=""$(tag)$(tagSuffix)""
-
-AUTH_BASIC=$(SignerAuthBasic)
-ROLE_CLI=""""
-# Check that keys are present
-notary key list -d $(DOCKER_CONFIG)/trust/
-# Encode user:pat as base 64
-export NOTARY_AUTH=""$(printf ""${AUTH_BASIC}"" | base64 -w0)"";
-TRUST_FOLDER=""$(DOCKER_CONFIG)/trust/""
-echo ""TRUST_FOLDER: $TRUST_FOLDER""
-# publish and sign
-notary -d ${TRUST_FOLDER} -s ""https://notary.docker.io"" addhash ""${REF}"" ""${TAG}"" ""${BYTES_SIZE}"" --sha256 ""${SHA_256}"" ${ROLE_CLI} --publish --verbose
-notary -s ""https://notary.docker.io"" list ""${REF}"";
-unset NOTARY_AUTH;
-
-
-The script completes without error.
-The notary ... --publish ... command returns:
-Addition of target ""1.1.1234-beta"" by sha256 hash to repository ""***/test"" staged for next publish.
-Auto-publishing changes to ***/test
-Successfully published changes for repository ***/test
-
-The last notary ... list command lists the image tag as expected:
-NAME             DIGEST            SIZE (BYTES)    ROLE
-----             ------            ------------    ----
-1.0.1234-beta    91e75e43bd....    637             targets
-
-But when inspecting trust, there is no signature:
-docker trust inspect --pretty mydockerid/test
-
-No signatures for mydockerid/test
-...
-
-","1. The fix is relatively simple. The image reference in the notary command needed docker.io in front in my case, like so:
-notary -d ${TRUST_FOLDER} -s ""https://notary.docker.io"" addhash ""docker.io/${REF}"" ""${TAG}"" ""${BYTES_SIZE}"" --sha256 ""${SHA_256}"" ${ROLE_CLI} --publish --verbose
-
-This will have notary ask for the repository key, which needs to be provided, i.e.:
-export NOTARY_TARGETS_PASSPHRASE='$(RepoKeyPassword)'
-
-The other notary keys are not required.
-After these two changes, the signatures appear in docker trust inspect as expected.
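-Putting it together, the relevant tail of the script looks roughly like this (variable names as in the original script; the inspect output shape may vary):
-export NOTARY_TARGETS_PASSPHRASE='$(RepoKeyPassword)'   # repository (targets) key passphrase
-notary -d ${TRUST_FOLDER} -s ""https://notary.docker.io"" addhash ""docker.io/${REF}"" ""${TAG}"" ""${BYTES_SIZE}"" --sha256 ""${SHA_256}"" ${ROLE_CLI} --publish --verbose
-docker trust inspect --pretty ""docker.io/${REF}""        # should now list the signed tag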
-",Notary
-"I am working on notarizing my installer using notarytool, below is my command
-submission_id=$(xcrun notarytool submit --apple-id ""${APPLE_ID}"" --team-id ""${TEAM_ID}"" --password ""${APP_PASSWORD}"" ""${dmgFile}"" 2>&1)
-
-Next I want to check the status using notarytool info
-status=$(xcrun notarytool info ""${submission_id}"" --apple-id ""$APPLE_ID"" --password ""$APP_PASSWORD"" 2>&1)
-
-However, submission_id ends up containing the whole output shown below.
-
-Conducting pre-submission checks for file_name.dmg and initiating
-connection to the Apple notary service... Submission ID received id:
-submission_id Successfully uploaded file id: submission_id path:
-file_path/file_name.dmg
-
-How can I extract the submission_id as a UUID that I can then use to check the notarization status with notarytool info?
-","1. I had the same problem when I had to work on the Notarization via CI/CD. At the moment I'm writing the notarytool's output is similar to:
-Conducting pre-submission checks for Boundle.dmg and initiating connection to the Apple notary service...
-Submission ID received
-  id: abc12-1234-xxxx-yyyy-123456e4c1c8
-Upload progress: 100,00% (116 MB of 116 MB)    
-Successfully uploaded file
-  id: abc12-1234-xxxx-yyyy-123456e4c1c8
-  path: /Users/Boundle.dmg
-Waiting for processing to complete.
-Current status: Accepted...........
-Processing complete
-  id: abc12-1234-xxxx-yyyy-123456e4c1c8
-  status: Accepted
-
-You can use the awk command to clean and retrieve only the id from the notary output:
-submission_id=$(echo ""$submission_id"" | awk '/id: / { print $2;exit; }')
-
-In my case the result was:
-NOTARY_SUBMIT_OUTPUT=$(xcrun notarytool submit ""${dmgFile}"" --wait --apple-id ""${APPLE_ID}"" --password ""${APP_PASSWORD}"" --team-id ""${TEAM_ID}"")
-xcrun notarytool log $(echo ""$NOTARY_SUBMIT_OUTPUT"" | awk '/id: / { print $2;exit; }') --apple-id ""${APPLE_ID}"" --password ""${APP_PASSWORD}"" --team-id ""${TEAM_ID}""
-
-N.B.: When using echo, make sure to wrap your variable in double quotes to preserve line breaks:
-echo ""$NOTARY_SUBMIT_OUTPUT""
-",Notary
-"When the initial trust on docker content trust with notary on tuf is initialized I understand how TUF, Notary and Content Trust works. 
-But what is not clear to me is, how the initial trust is setup. 
-How do I know, that the first pull is not a compromised one and the initial root.json is trustworthy?
-So for example if I do docker pull with content-trust enabled, I will only get signed images. But how do I verify, that this image is signed by the right person?
-","1. Notary creator and maintainer here. Justin has already given a good account but I'll speak to trust initialization in TUF and Notary more broadly.
-Unless you communicate the root of trust through some out of band method, there will always be a point of download that you implicitly trust to deliver the root of trust. Some general case examples: we do this when we download an OS (i.e. any Linux distro), or grab somebody's GPG public key from a public key directory. Assuming the resources are delivered over a TLS connection and we believe that the publisher has secured their server, we trust we're receiving legitimate data, and use this to bootstrap trust on all future interactions. This is called Trust On First Use, or TOFU.
-The claim here is that people do keep their servers secure, and that it's difficult to perform a Man-in-the-middle (MiTM) attack, especially against a TLS secured connection. Therefore we can trust this initial download.
-Notary has a few ways one can initialize trust. The first is this TOFU mechanism. TUF has a defined update flow that ensures trust over all subsequent content after the initial download. Notary implements this update flow and ensures the publisher is consistent after the initial download.
-If you want to additionally ensure the publisher is a specific entity, Notary provides three different ways to bootstrap that trust. They are:
-
-Manually place the root.json, acquired out of band, in the correct location in the notary cache.
-Configure trust pinning to trust a specific root key ID for a Notary Globally Unique Name (GUN).
-Configure trust pinning to trust a CA for a specific Notary GUN or GUN prefix.
-
-More information on trust pinning can be found in our docs. Note all 3 options require an out of band communication in which you acquire either a root.json, the ID of the root key, or the CA certificate that was used to issue the root key.
-Implementing trust pinning under the docker trust command is in our TODO list, it's not there yet. However you can still use option 1 with docker trust. The cache is located at ~/.docker/trust/tuf/<GUN>/metadata/
-Additional context on option 3: Notary implements a feature that allows one to configure CAs for GUNs or GUN prefixes. The requirement in this instance is that the public root key is included in the root.json as an x509 certificate that chains to the configured CA. While CAs can be a controversial topic, nobody is forced to use this feature and in most attacker models it's strictly better than TOFU. Additionally TUF explicitly does not address how trust is bootstrapped.
-
-2. You can pin the keys by some out of band means, or do something like ssh which shows you a key to check on first use. These methods are not predefined, but you have the flexibility to build them yourself depending how you are using Notary. For LinuxKit we are planning to have an option to put the key hashes in the config file you use for building, that lists which images to pull. Or you could publish the root key id elsewhere.
-
-3. tl;dr
-You can execute the following to pin the public root key for debian:
-sudo su -
-mkdir -p /root/.docker/trust/tuf/docker.io/library/debian/metadata
-chown -R root:root /root/.docker
-chmod -R 0700 /root/.docker
-echo '{""signed"":{""_type"":""Root"",""consistent_snapshot"":false,""expires"":""2025-08-07T20:55:22.677722315-07:00"",""keys"":{""5717dcd81d9fb5b73aa15f2d887a6a0de543829ab9b2d411acce9219c2f8ba3a"":{""keytype"":""ecdsa"",""keyval"":{""private"":null,""public"":""MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEsslGF2xHOYztrocb2OsRF2zth16v170QiLAyKdce1nQgOJ34FOk679ClPL9/RNnJukf2JfQXSlVV/qcsvxV2dQ==""}},""575d013f89e3cbbb19e0fb06aa33566c22718318e0c9ffb1ab5cc4291e07bf84"":{""keytype"":""ecdsa-x509"",""keyval"":{""private"":null,""public"":""LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlVENDQVIrZ0F3SUJBZ0lRWExkUFFHTGJaOE84UXFlTzVuZlBRekFLQmdncWhrak9QUVFEQWpBak1TRXcKSHdZRFZRUURFeGhrYjJOclpYSXVhVzh2YkdsaWNtRnllUzlrWldKcFlXNHdIaGNOTVRVd09ERXhNRE0xTlRJeQpXaGNOTWpVd09EQTRNRE0xTlRJeVdqQWpNU0V3SHdZRFZRUURFeGhrYjJOclpYSXVhVzh2YkdsaWNtRnllUzlrClpXSnBZVzR3V1RBVEJnY3Foa2pPUFFJQkJnZ3Foa2pPUFFNQkJ3TkNBQVE1ZGkxcmxPQjBMQmRNS2N0VFQxYmwKUGd6aXYxOUJDdW9tNEFNL3BUdURtdjBnS0E5S1ptNUVjLy9VQmhSODVCYmR0cTk0cXhQM3IwUjhRc3FQV1Y4SQpvelV3TXpBT0JnTlZIUThCQWY4RUJBTUNBS0F3RXdZRFZSMGxCQXd3Q2dZSUt3WUJCUVVIQXdNd0RBWURWUjBUCkFRSC9CQUl3QURBS0JnZ3Foa2pPUFFRREFnTklBREJGQWlBOUFOZ3dPN2tBdUVIK3U2N25XNlFLWmlMdWd5UVcKaEQ3Vys5WjIza01mTndJaEFJa3RTaW1TdFdRQkFoOG9WOXhjaWNVWWVUN0pyUG82a0RqeHU1YitGZ3MxCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K""}},""728c96ff5e9f48d4e66d5a0c3ecabfdd90bee2b5f9f80b950ed9c668db264a70"":{""keytype"":""ecdsa"",""keyval"":{""private"":null,""public"":""MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAENtpBkDJ2oYaAAVdOkP0A6J0XwUkYGuFRk+q8N4WCPu2VnNIuBJkatPCWdEtHfQ9nNYLeanWgG62/UmJnx3E2Yg==""}},""d48327d85f0490827db7c931eedb58d293e1da5fc425ea0cde3e6c13b397ad69"":{""keytype"":""ecdsa"",""keyval"":{""private"":null,""public"":""MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEwfs26T/cpjvNTXVJpK7Wv8oDOnNKL78AT3Y1QD356OIAggwPupX2LQjZU6CVzCjm+pkJIO4clu9Q2n540gKuzQ==""}}},""roles"":{""root"":{""keyids"":[""575d013f89e3cbbb19e0fb06aa33566c22718318e0c9ffb1ab5cc4291e07bf84""],""threshold"":1},""snapshot"":{""keyids"":[""d48327d85f0490827db7c931eedb58d293e1da5fc425ea0cde3e6c13b397ad69""],""threshold"":1},""targets"":{""keyids"":[""5717dcd81d9fb5b73aa15f2d887a6a0de543829ab9b2d411acce9219c2f8ba3a""],""threshold"":1},""timestamp"":{""keyids"":[""728c96ff5e9f48d4e66d5a0c3ecabfdd90bee2b5f9f80b950ed9c668db264a70""],""threshold"":1}},""version"":1},""signatures"":[{""keyid"":""575d013f89e3cbbb19e0fb06aa33566c22718318e0c9f
-fb1ab5cc4291e07bf84"",""method"":""ecdsa"",""sig"":""3WbX1VXN9E8LRmSG+E4SQlBUNqBNchhwAStWnRWLLyAOoFNBq5xmIgSO3UYYuKyJvL7kbMoONRbn5Vk2p2Wqrg==""}]}' > /root/.docker/trust/tuf/docker.io/library/debian/metadata/root.json
-
-export DOCKER_CONTENT_TRUST=1
-docker pull debian:stable-slim
-
-Long Answer
-
-⚠ Disclaimer: I am not a docker developer. As of 2021, it appears that DCT is broken out-of-the-box and is far from being useful. My answer here is a best-guess, but I have not confirmed with the docker team if this is the ""correct"" way to pre-load and pin a given publisher's root key into the DCT keyring.
-Be advised, proceed with caution, and your comments are very welcome.
-
-It doesn't appear to be documented anywhere, but per this question it's clear that docker puts its DCT metadata (including root public keys) in the following location:
-$HOME/.docker/trust/tuf/docker.io/library
-
-Inside this library dir exists one dir per publisher. For the purposes of this answer, I'll use debian as our example publisher.
-You can see the list of the debian docker images published to Docker Hub here:
-
-https://hub.docker.com/_/debian/
-
-Solution
-Let's say we want to download the stable-slim image from the debian publisher on Docker Hub. In this example, we'll also use a fresh install of Debian 10 as the docker host.
-##
-# first, install docker
-## 
-root@disp2716:~# apt-get install docker.io
-...
-root@disp2716:~#
-
-##
-# confirm that there is no docker config dir yet
-##
-
-root@disp2716:~# ls -lah /root/.docker
-ls: cannot access '/root/.docker': No such file or directory
-root@disp2716:~# 
-
-##
-# add the debian publisher's root DCT key
-##
-
-root@disp2716:~# mkdir -p /root/.docker/trust/tuf/docker.io/library/debian/metadata
-root@disp2716:~# chown -R root:root /root/.docker
-root@disp2716:~# chmod -R 0700 /root/.docker
-root@disp2716:~# echo '{""signed"":{""_type"":""Root"",""consistent_snapshot"":false,""expires"":""2025-08-07T20:55:22.677722315-07:00"",""keys"":{""5717dcd81d9fb5b73aa15f2d887a6a0de543829ab9b2d411acce9219c2f8ba3a"":{""keytype"":""ecdsa"",""keyval"":{""private"":null,""public"":""MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEsslGF2xHOYztrocb2OsRF2zth16v170QiLAyKdce1nQgOJ34FOk679ClPL9/RNnJukf2JfQXSlVV/qcsvxV2dQ==""}},""575d013f89e3cbbb19e0fb06aa33566c22718318e0c9ffb1ab5cc4291e07bf84"":{""keytype"":""ecdsa-x509"",""keyval"":{""private"":null,""public"":""LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlVENDQVIrZ0F3SUJBZ0lRWExkUFFHTGJaOE84UXFlTzVuZlBRekFLQmdncWhrak9QUVFEQWpBak1TRXcKSHdZRFZRUURFeGhrYjJOclpYSXVhVzh2YkdsaWNtRnllUzlrWldKcFlXNHdIaGNOTVRVd09ERXhNRE0xTlRJeQpXaGNOTWpVd09EQTRNRE0xTlRJeVdqQWpNU0V3SHdZRFZRUURFeGhrYjJOclpYSXVhVzh2YkdsaWNtRnllUzlrClpXSnBZVzR3V1RBVEJnY3Foa2pPUFFJQkJnZ3Foa2pPUFFNQkJ3TkNBQVE1ZGkxcmxPQjBMQmRNS2N0VFQxYmwKUGd6aXYxOUJDdW9tNEFNL3BUdURtdjBnS0E5S1ptNUVjLy9VQmhSODVCYmR0cTk0cXhQM3IwUjhRc3FQV1Y4SQpvelV3TXpBT0JnTlZIUThCQWY4RUJBTUNBS0F3RXdZRFZSMGxCQXd3Q2dZSUt3WUJCUVVIQXdNd0RBWURWUjBUCkFRSC9CQUl3QURBS0JnZ3Foa2pPUFFRREFnTklBREJGQWlBOUFOZ3dPN2tBdUVIK3U2N25XNlFLWmlMdWd5UVcKaEQ3Vys5WjIza01mTndJaEFJa3RTaW1TdFdRQkFoOG9WOXhjaWNVWWVUN0pyUG82a0RqeHU1YitGZ3MxCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K""}},""728c96ff5e9f48d4e66d5a0c3ecabfdd90bee2b5f9f80b950ed9c668db264a70"":{""keytype"":""ecdsa"",""keyval"":{""private"":null,""public"":""MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAENtpBkDJ2oYaAAVdOkP0A6J0XwUkYGuFRk+q8N4WCPu2VnNIuBJkatPCWdEtHfQ9nNYLeanWgG62/UmJnx3E2Yg==""}},""d48327d85f0490827db7c931eedb58d293e1da5fc425ea0cde3e6c13b397ad69"":{""keytype"":""ecdsa"",""keyval"":{""private"":null,""public"":""MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEwfs26T/cpjvNTXVJpK7Wv8oDOnNKL78AT3Y1QD356OIAggwPupX2LQjZU6CVzCjm+pkJIO4clu9Q2n540gKuzQ==""}}},""roles"":{""root"":{""keyids"":[""575d013f89e3cbbb19e0fb06aa33566c22718318e0c9ffb1ab5cc4291e07bf84""],""threshold"":1},""snapshot"":{""keyids"":[""d48327d85f0490827db7c931eedb58d293e1da5fc425ea0cde3e6c13b397ad69""],""threshold"":1},""targets"":{""keyids"":[""5717dcd81d9fb5b73aa15f2d887a6a0de543829ab9b2d411acce9219c2f8ba3a""],""threshold"":1},""timestamp"":{""keyids"":[""728c96ff5e9f48d4e66d5a0c3ecabfdd90bee2b5f9f80b950ed9c668db264a70""],""threshold"":1}},""version"":1},""signatures"":[{""keyid"":""575d013f89e3cbbb19e0fb06aa33566c22718318e0c9ffb1ab5cc4291e07bf84"",""method"":""ecdsa"",""sig"":""3WbX1VXN9E8LRmSG+E4SQlBUNqBNchhwAStWnRWLLyAOoFNBq5xmIgSO3UYYuKyJvL7kbMoONRbn5Vk2p2Wqrg==""}]}' > /root/.docker/trust/tuf/docker.io/library/debian/metadata/root.json
-root@disp2716:~# 
-root@disp2716:~# chown root:root /root/.docker/trust/tuf/docker.io/library/debian/metadata/root.json
-root@disp2716:~# chmod 0600 /root/.docker/trust/tuf/docker.io/library/debian/metadata/root.json
-root@disp2716:~# 
-
-##
-# pull the docker image with DCT verification
-##
-
-root@disp2716:~# export DOCKER_CONTENT_TRUST=1
-root@disp2716:~# docker pull debian:stable-slim
-Pull (1 of 1): debian:stable-slim@sha256:850a7ee21c49c99b0e5e06df21f898a0e64335ae84eb37d6f71abc1bf28f5632
-sha256:850a7ee21c49c99b0e5e06df21f898a0e64335ae84eb37d6f71abc1bf28f5632: Pulling from library/debian
-6e640006d1cd: Pull complete 
-Digest: sha256:850a7ee21c49c99b0e5e06df21f898a0e64335ae84eb37d6f71abc1bf28f5632
-Status: Downloaded newer image for debian@sha256:850a7ee21c49c99b0e5e06df21f898a0e64335ae84eb37d6f71abc1bf28f5632
-Tagging debian@sha256:850a7ee21c49c99b0e5e06df21f898a0e64335ae84eb37d6f71abc1bf28f5632 as debian:stable-slim
-root@disp2716:~# 
-
-Proof
-While there's no way to tell docker to fail on TOFU, we can confirm that the above key pinning works by changing the public key to something else:
-##
-# first, move the docker config dir out of the way
-## 
-
-mv /root/.docker /root/.docker.bak
-
-##
-# add the debian publisher's root DCT key (note I just overwrote the first 8
-# characters of the actual key with ""INVALID/"")
-##
-
-root@disp2716:~# mkdir -p /root/.docker/trust/tuf/docker.io/library/debian/metadata
-root@disp2716:~# chown -R root:root /root/.docker
-root@disp2716:~# chmod -R 0700 /root/.docker
-root@disp2716:~# echo '{""signed"":{""_type"":""Root"",""consistent_snapshot"":false,""expires"":""2025-08-07T20:55:22.677722315-07:00"",""keys"":{""5717dcd81d9fb5b73aa15f2d887a6a0de543829ab9b2d411acce9219c2f8ba3a"":{""keytype"":""ecdsa"",""keyval"":{""private"":null,""public"":""INVALID/KoZIzj0CAQYIKoZIzj0DAQcDQgAEsslGF2xHOYztrocb2OsRF2zth16v170QiLAyKdce1nQgOJ34FOk679ClPL9/RNnJukf2JfQXSlVV/qcsvxV2dQ==""}},""575d013f89e3cbbb19e0fb06aa33566c22718318e0c9ffb1ab5cc4291e07bf84"":{""keytype"":""ecdsa-x509"",""keyval"":{""private"":null,""public"":""INVALID/RUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlVENDQVIrZ0F3SUJBZ0lRWExkUFFHTGJaOE84UXFlTzVuZlBRekFLQmdncWhrak9QUVFEQWpBak1TRXcKSHdZRFZRUURFeGhrYjJOclpYSXVhVzh2YkdsaWNtRnllUzlrWldKcFlXNHdIaGNOTVRVd09ERXhNRE0xTlRJeQpXaGNOTWpVd09EQTRNRE0xTlRJeVdqQWpNU0V3SHdZRFZRUURFeGhrYjJOclpYSXVhVzh2YkdsaWNtRnllUzlrClpXSnBZVzR3V1RBVEJnY3Foa2pPUFFJQkJnZ3Foa2pPUFFNQkJ3TkNBQVE1ZGkxcmxPQjBMQmRNS2N0VFQxYmwKUGd6aXYxOUJDdW9tNEFNL3BUdURtdjBnS0E5S1ptNUVjLy9VQmhSODVCYmR0cTk0cXhQM3IwUjhRc3FQV1Y4SQpvelV3TXpBT0JnTlZIUThCQWY4RUJBTUNBS0F3RXdZRFZSMGxCQXd3Q2dZSUt3WUJCUVVIQXdNd0RBWURWUjBUCkFRSC9CQUl3QURBS0JnZ3Foa2pPUFFRREFnTklBREJGQWlBOUFOZ3dPN2tBdUVIK3U2N25XNlFLWmlMdWd5UVcKaEQ3Vys5WjIza01mTndJaEFJa3RTaW1TdFdRQkFoOG9WOXhjaWNVWWVUN0pyUG82a0RqeHU1YitGZ3MxCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K""}},""728c96ff5e9f48d4e66d5a0c3ecabfdd90bee2b5f9f80b950ed9c668db264a70"":{""keytype"":""ecdsa"",""keyval"":{""private"":null,""public"":""INVALID/KoZIzj0CAQYIKoZIzj0DAQcDQgAENtpBkDJ2oYaAAVdOkP0A6J0XwUkYGuFRk+q8N4WCPu2VnNIuBJkatPCWdEtHfQ9nNYLeanWgG62/UmJnx3E2Yg==""}},""d48327d85f0490827db7c931eedb58d293e1da5fc425ea0cde3e6c13b397ad69"":{""keytype"":""ecdsa"",""keyval"":{""private"":null,""public"":""INVALID/KoZIzj0CAQYIKoZIzj0DAQcDQgAEwfs26T/cpjvNTXVJpK7Wv8oDOnNKL78AT3Y1QD356OIAggwPupX2LQjZU6CVzCjm+pkJIO4clu9Q2n540gKuzQ==""}}},""roles"":{""root"":{""keyids"":[""575d013f89e3cbbb19e0fb06aa33566c22718318e0c9ffb1ab5cc4291e07bf84""],""threshold"":1},""snapshot"":{""keyids"":[""d48327d85f0490827db7c931eedb58d293e1da5fc425ea0cde3e6c13b397ad69""],""threshold"":1},""targets"":{""keyids"":[""5717dcd81d9fb5b73aa15f2d887a6a0de543829ab9b2d411acce9219c2f8ba3a""],""threshold"":1},""timestamp"":{""keyids"":[""728c96ff5e9f48d4e66d5a0c3ecabfdd90bee2b5f9f80b950ed9c668db264a70""],""threshold"":1}},""version"":1},""signatures"":[{""keyid"":""575d013f89e3cbbb19e0fb06aa33566c22718318e0c9ffb1ab5cc4291e07bf84"",""method"":""ecdsa"",""sig"":""3WbX1VXN9E8LRmSG+E4SQlBUNqBNchhwAStWnRWLLyAOoFNBq5xmIgSO3UYYuKyJvL7kbMoONRbn5Vk2p2Wqrg==""}]}' > /root/.docker/trust/tuf/docker.io/library/debian/metadata/root.json
-root@disp2716:~# 
-root@disp2716:~# chown root:root /root/.docker/trust/tuf/docker.io/library/debian/metadata/root.json
-root@disp2716:~# chmod 0600 /root/.docker/trust/tuf/docker.io/library/debian/metadata/root.json
-root@disp2716:~# 
-
-##
-# pull the docker image with DCT verification
-##
-
-root@disp2716:~# export DOCKER_CONTENT_TRUST=1
-root@disp2716:~# docker pull debian:stable-slim
-could not validate the path to a trusted root: unable to retrieve valid leaf certificates
-root@disp2716:~# 
-root@disp2716:~# echo $?
-1
-root@disp2716:~# 
-
-Note that docker exits 1 with an error, refusing to pull the debian:stable-slim docker image from Docker Hub because it cannot trust its signature.
-",Notary
-"Does anyone have a good solution for a generic container signature verification?
-From what I've seen (please correct any mistakes)
-
-Docker Hub uses signatures based on ""Notary"", that needs docker
-RedHat use their own signing mechanism, that needs podman
-
-As I can't install both podman and docker (containerd.io and runc have a conflict in RHEL, maybe a different host would allow it?) there seems to be no way to validate signatures that works for both sources.
-Even if I could install them both I'd need to parse the dockerfile, work out where the source image was, do a docker/podman pull on the images and then do the build if no pulls fail. (Which feels likely to fail!)
-For example : a build stage used a container from docker hub (eg maven) and run stage from redhat (eg registry.access.redhat.com/ubi8).
-I really want a generic ""validate the container signature at this URL"" function that I can drop into a CICD tool. Some teams like using the RH registry, some Docker Hub, some mix and match.
-Any good ideas? Obvious solutions I missed?
-","1. look at cosign
-https://github.com/sigstore/cosign
-$ cosign verify --key cosign.pub dlorenc/demo
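-For keyless signatures (e.g. images signed from CI via an OIDC identity) the check works the same way; the image, identity and issuer below are placeholders:
-$ cosign verify ghcr.io/some-org/some-image:latest \
-    --certificate-identity https://github.com/some-org/some-repo/.github/workflows/main.yml@refs/heads/main \
-    --certificate-oidc-issuer https://token.actions.githubusercontent.com
-Either way cosign only needs registry access, so it works regardless of whether the images are later pulled with docker or podman.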
-",Notary
-"I am new to the mechanism of Docker Content Trust (DCT) and a bit confused about the root key. The first time I add a signer to a new repository I am asked to enter passphrases for the root and repository key. After that a key file with the root key ID is generated in the directory ~/.docker/trust/private. So far so good, but when I execute docker trust inspect <repo name>, I get a different root key ID under the administrative keys section.
-Can you please explain this to me?
-","1. There are several keys:
-
-Signer key
-Repository key
-Root key
-
-You can open files in ~/.docker/trust/private to see the role of each key. Or you can run notary -d ~/.docker/trust key list
-Pretty option is also cool for this:
-docker trust inspect --pretty <repo_name> to get the following result
-Signatures for repo_name
-
-SIGNED TAG   DIGEST                                                             SIGNERS
-latest       def822f9851ca422481ec6fee59a9966f12b351c62ccb9aca841526ffaa9f748   test
-
-List of signers and their keys for repo_name
-
-SIGNER    KEYS
-test       c990796d79a9
-
-Administrative keys for repo_name
-
-  Repository Key:   06362021113fed73dc5e08e6b5edbe04cf4316193b362b0d8335fab3285fc98b
-  Root Key: 317f83b55c99e2b8f9d341a3c9a3fc4b1d65d97f52a553020a65cdee85940cf3
-
-
-2. TLDR; :
-One root key is for the signer and another one is for the repository.
-When I try to load a key to add the signer, it will ask me a passphrase to encrypt the private key (root).
-$ docker trust key load --name arif key.pem
-Loading key from ""key.pem""...
-Enter passphrase for new arif key with ID 2817c38: 
-Repeat passphrase for new arif key with ID 2817c38: 
-Successfully imported key from key.pem
-
-
-You can find the encrypted root key in .docker/trust/private, like the following:
-$ cat ../.docker/trust/private/2817c387b869ede57bd209e40a3dfce967b70eca1eb3739bc58afba44665aaef.key 
------BEGIN ENCRYPTED PRIVATE KEY-----
-role: arif
-
-MIHuMEkGCSqGSIb3DQEFDTA8MBsGCSqGSIb3DQEFDDAOBAh/6HbWl/T/SAICCAAw
-HQYJYIZIAWUDBAEqBBAZpJBc+C9ABYY6UbMT3YSRBIGgiNT5fX9QqCOrGJ3lb3qw
-7JkC/4D0dtp75MYWaMbfYXvNm+muJXmVUpp5vh91onUW8Y8q+ymQTgDq3mN8+HLu
-4iRp46wXxilEKUxmXsYln/mxQI+jU7UwTTiLiy6LpR1vpBKdO8hhd/WObW25P+ah
-YjslB1P8fe9VeSsorAKM5zDnuaiVhHh7BjgVAiepDvmy/7zO3W7Rso4Kgg0UZkJn
-SA==
------END ENCRYPTED PRIVATE KEY-----
-
-Then I try to add the signer to a repository and it asks for 2 things:
-
-A new passphrase to encrypt the root key for the repository I want to sign.
-A new passphrase to encrypt the repository key for that exact repository.
-
-$ docker trust signer add --key cert.pem arif ec2-3-67-179-58.eu-central-1.compute.amazonaws.com/docker/haproxy 
-Adding signer ""arif"" to ec2-3-67-179-58.eu-central-1.compute.amazonaws.com/docker/haproxy...
-Initializing signed repository for ec2-3-67-179-58.eu-central-1.compute.amazonaws.com/docker/haproxy...
-You are about to create a new root signing key passphrase. This passphrase
-will be used to protect the most sensitive key in your signing system. Please
-choose a long, complex passphrase and be careful to keep the password and the
-key file itself secure and backed up. It is highly recommended that you use a
-password manager to generate the passphrase and keep it safe. There will be no
-way to recover this key. You can find the key in your config directory.
-Enter passphrase for new root key with ID 06665b8: 
-Repeat passphrase for new root key with ID 06665b8: 
-Enter passphrase for new repository key with ID b040c66: 
-Repeat passphrase for new repository key with ID b040c66: 
-Successfully initialized ""ec2-3-67-179-58.eu-central-1.compute.amazonaws.com/docker/haproxy""
-Successfully added signer: arif to ec2-3-67-179-58.eu-central-1.compute.amazonaws.com/docker/haproxy
-
-
-At the output above we can see the id for the two keys are 06665b8 and b040c66.
-If I have look at my trust directory I will see two keys starting with these two ids. One for the root keys of the repository and another one for the target key.
-$ grep role .docker/trust/private/06665b8*.key
-role: root
-
-$ grep role .docker/trust/private/b040c66*.key
-role: targets
-
-Now, if I inspect the repository, I can see the following:
-$ docker trust inspect ec2-3-67-179-58.eu-central-1.compute.amazonaws.com/docker/haproxy
-[
-    {
-        ""Name"": ""ec2-3-67-179-58.eu-central-1.compute.amazonaws.com/docker/haproxy"",
-        ""SignedTags"": [],
-        ""Signers"": [
-            {
-                ""Name"": ""arif"",
-                ""Keys"": [
-                    {
-                        ""ID"": ""2817c387b869ede57bd209e40a3dfce967b70eca1eb3739bc58afba44665aaef""
-                    }
-                ]
-            }
-        ],
-        ""AdministrativeKeys"": [
-            {
-                ""Name"": ""Root"",
-                ""Keys"": [
-                    {
-                        ""ID"": ""5ed03b461b330c6d722c319bdfaa87e3d8b289a1213569248bdaa616a1a399c6""
-                    }
-                ]
-            },
-            {
-                ""Name"": ""Repository"",
-                ""Keys"": [
-                    {
-                        ""ID"": ""b040c663463612c99130eca98ec827ef32a3bab73d2976403888443ce87899c6""
-                    }
-                ]
-            }
-        ]
-    }
-]
-
-
-So now we have 3 keys: one is the signer's root key, another is the repository's root key, and the last one is the targets key.
-$ ls .docker/trust/private/ -1 | wc -l
-3
-
-You can find all the metadata about these keys in the tuf directory:
-$ cd .docker/trust/tuf/ec2-3-67-179-58.eu-central-1.compute.amazonaws.com/docker/haproxy/metadata/
-
-$ ls 
-root.json  snapshot.json  targets.json  timestamp.json
-
-I hope it makes sense now.
-
-3. User-Signed images
-There are two options for trust pinning user-signed images:
-
-Notary Canonical Root Key ID (DCT Root Key) is an ID that describes just the root key used to sign a repository (or rather its respective keys). This is the root key on the host that originally signed the repository (i.e. your workstation). This can be retrieved from the workstation that signed the repository through $ grep -r ""root"" ~/.docker/trust/private/ (Assuming your trust data is at ~/.docker/trust/*). It is expected that this canonical ID has initiated multiple image repositories (mydtr/user1/image1 and mydtr/user1/image2).
-
-# Retrieving Root ID
-$ grep -r ""root"" ~/.docker/trust/private
-/home/ubuntu/.docker/trust/private/0b6101527b2ac766702e4b40aa2391805b70e5031c04714c748f914e89014403.key:role: root
-
-# Using a Canonical ID that has signed 2 repos (mydtr/user1/repo1 and mydtr/user1/repo2). Note you can use a Wildcard.
-
-{
-  ""content-trust"": {
-    ""trust-pinning"": {
-      ""root-keys"": {
-         ""mydtr/user1/*"": [
-           ""0b6101527b2ac766702e4b40aa2391805b70e5031c04714c748f914e89014403""
-         ]
-      }
-    },
-    ""mode"": ""enforced""
-  }
-}
-
-
-Notary Root key ID (DCT Certificate ID) is an ID that describes the same, but the ID is unique per repository. For example, mydtr/user1/image1 and mydtr/usr1/image2 will have unique certificate IDs. A certificate ID can be retrieved through a $ docker trust inspect command and is labelled as a root-key (referring back to the Notary key name). This is designed for when different users are signing their own repositories, for example, when there is no central signing server. As a cert-id is more granular, it would take priority if a conflict occurs over a root ID.
-
-# Retrieving Cert ID
-$ docker trust inspect mydtr/user1/repo1 | jq -r '.[].AdministrativeKeys[] | select(.Name==""Root"") | .Keys[].ID'
-9430d6e31e3b3e240957a1b62bbc2d436aafa33726d0fcb50addbf7e2dfa2168
-
-# Using Cert Ids, by specifying 2 repositories by their DCT root ID. Example for using this may be different DTRs or maybe because the repository was initiated on different hosts, therefore having different canonical IDs.
-
-{
-  ""content-trust"": {
-    ""trust-pinning"": {
-      ""cert-ids"": {
-         ""mydtr/user1/repo1"": [
-           ""9430d6e31e3b3e240957a1b62bbc2d436aafa33726d0fcb50addbf7e2dfa2168""
-         ],
-         ""mydtr/user2/repo1"": [
-           ""544cf09f294860f9d5bc953ad80b386063357fd206b37b541bb2c54166f38d08""
-         ]
-      }
-    },
-    ""mode"": ""enforced""
-  }
-}
-
-http://www.myclass5.cn/engine/security/trust/content_trust/
-",Notary
-"I'm trying to write a simple parser in Haskell using Parsec but my input ""Hello World"" is never correctly parsed.
-My code looks like this:
-parser = p1 <|> p2
-
-p1 = string ""Hello""
-p2 = string ""Hello World""
-
-If I run it, I get the error unexpected whitespace.
-","1. p1 already consumes the tokens ""Hello"" and therefore p2 instantly fails because the next token is whitespace.
-You could use something like try to reset the consumed tokens.
-parser = try p1 <|> p2
-
-p1 = string ""Hello""
-p2 = string ""Hello World""
-
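-A related gotcha if you actually want the longer match to win: with <|> the first alternative that succeeds is kept, so ""Hello World"" will never be chosen while p1 is listed first. A small sketch of that variant, simply ordering the longer parser first:
-parser = try p2 <|> p1
-
-p1 = string ""Hello""
-p2 = string ""Hello World""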
-",Parsec
-"I've been experimenting with Parsec for the first time in my life and I found a task that turned out surprisingly difficult.
-I want to parse two lists of numbers separated by | character.
-Here's an example:
-Data 1: 43 76 123 98 32 | 32 88 43 123 43
-Here's the code I've come up with so far:
-data Data = Data Int [Int] [Int]
-    deriving Show
-
-toInt :: String -> Int
-toInt = read
-
-parseRow :: Parser Data
-parseRow = do
-    _ <- Parsec.string ""Data""
-    _ <- Parsec.spaces
-    cid <- toInt <$> many1 Parsec.digit
-    firstList <- map toInt <$> between (Parsec.string "": "") (Parsec.char '|') (many1 (many1 Parsec.digit <* Parsec.space))
-    secondList <- map toInt <$> sepBy1 (many1 Parsec.digit) (Parsec.char ' ')
-    return $ Data cid firstList secondList
-
-It gets confused while parsing firstList. I guess I messed up parsing the spaces separating the numbers, but can't see an obvious mistake.
-Going forward, what's the most beginner-friendly introduction to Parsec? I found a few tutorials, but I'm happy to hear recommendations.
-","1. Replace Parsec.char '|' with Parsec.string ""| "" in firstList.  Otherwise, secondList has to deal with an extra space at the beginning of the input that it doesn't expect.
-",Parsec
-"I have the following piece of code:
-import Text.ParserCombinators.Parsec
-import Control.Applicative hiding ((<|>))
-import Control.Monad
-
-data Test = Test Integer Integer deriving Show
-
-integer :: Parser Integer
-integer = rd <$> many1 digit
-    where rd = read :: String -> Integer
-
-testParser :: Parser Test
-testParser = do
-  a <- integer
-  char ','
-  b <- integer
-  eol
-  return $ Test a b
-
-eol :: Parser Char
-eol = char '\n'
-
-main = forever $ do putStrLn ""Enter the value you need to parse: ""
-                    input <- getLine
-                    parseTest testParser input
-
-But when I actually try to parse my value in ghci, it doesn't work.
-ghci > main
-Enter the value you need to parse: 
-34,343\n
-parse error at (line 1, column 7):
-unexpected ""\\""
-expecting digit or ""\n""
-
-Any ideas on what I'm missing here?
-","1. The problem seems to be that you're expecting a newline, but your text doesn't contain one.  Change eol to
-import Control.Monad (void)
-
-eol :: Parser ()
-eol = void (char '\n') <|> eof
-
-and it'll work.
-
-2. ""\n"" is an escape code used in Haskell (and C, etc.) string and character literals to represent ASCII 0x0A, the character that is used to indicate end-of-line on UNIX and UNIX-like platforms.  You don't (normally) use the <\> or <n> keys on your keyboard to put this character in a file (e.g.) instead you use the <Enter> key.
-On PC-DOS and DOS-like systems, ASCII 0x0D followed by ASCII 0x0A is used for end-of-line and ""\r"" is the escape code used for ASCII 0x0D.
-getLine reads until it finds end-of-line and returns a string containing everything but the end-of-line character.  So, in your example, your parser will fail to match.  You might fix this by matching end-of-line optionally.
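-A minimal self-contained sketch combining both points: parsec's endOfLine accepts either LF or CRLF, and the eof branch covers input coming from getLine, which strips the newline:
-import Control.Monad (void)
-import Text.Parsec
-import Text.Parsec.String (Parser)
-
--- accept LF, CRLF, or plain end of input
-eol :: Parser ()
-eol = void endOfLine <|> eof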
-",Parsec
-"WorkerGroup.h:88:5: error: looser throw specifier for 'virtual threads::WorkerGroup::~WorkerGroup() throw (threads::CondException, threads::MutexException)'
-   88 |     ~WorkerGroup();
-      |     ^
-
-
-I tried compiling the PARSEC benchmark and this error still keeps showing.
-","1. Add a -std=g++98 in CXXFLAGS in gcc.bldconf
-",Parsec
-"I'm trying to parse just comments from a String and I'm close but not quite there.
-import Text.ParserCombinators.Parsec
-
-parseSingleLineComment :: Parser String
-parseSingleLineComment = do 
-    string ""//"" 
-    x <- manyTill anyChar newline
-    spaces 
-    return x
-parseMultilineComment :: Parser String
-parseMultilineComment = do
-    string ""/*"" 
-    x <- manyTill anyChar (string ""*/"")
-    spaces
-    return x
-parseEndOfFile :: Parser String
-parseEndOfFile = do 
-  x <- eof
-  return """"
-
-parseComment :: Parser String
-parseComment = try parseSingleLineComment <|> try parseMultilineComment
-    
-parseNotComment :: Parser String
-parseNotComment = manyTill anyChar (lookAhead (try parseComment <|> parseEndOfFile))
-
-extractComments :: Parser [String]
-extractComments = do
-  manyTill anyChar (lookAhead (parseComment <|> parseEndOfFile))
-  xs <- try $ sepEndBy1 parseComment parseNotComment
-  eof
-  return $ xs
-
-
-printHelperF :: String -> IO ()
-printHelperF s = do
-  print s
-  print $ parse extractComments ""Test Parser"" s
-  print ""-------------------""
-
--- main
-main :: IO ()
-main = do 
-  let sample0 = ""No comments here""
-  let sample1 = ""//Hello there!\n//General Kenobi""
-  let sample2 = ""/* What's the deal with airline food?\nIt keeps getting worse and worse\nI can't take it anymore!*/""
-  let sample3 = "" //Global Variable\nlet x = 5;\n/*TODO:\n\t// Add the number of cats as a variable\n\t//Shouldn't take too long\n*/\nlet c = 500;""
-  let sample4 = ""//First\n//Second//NotThird\n//Third""
-  let samples = [sample0, sample1, sample2, sample3, sample4]
-  mapM_ printHelperF samples
-
-
--- > runhaskell test.hs
--- ""No comments here""
--- Left ""Test Parser"" (line 1, column 17):
--- unexpected end of input
--- expecting ""//"" or ""/*"" <---------- fails because no comment in string
--- ""-------------------""
--- ""//Hello there!\n//General Kenobi""
--- Right [""Hello there!""] <---------- fails to extract the last comment
--- ""-------------------""
--- ""/* What's the deal with airline food?\nIt keeps getting worse and worse\nI can't take it anymore!*/""
--- Right ["" What's the deal with airline food?\nIt keeps getting worse and worse\nI can't take it anymore!""] <- correct
--- ""-------------------""
--- "" //Global Variable\nlet x = 5;\n/*TODO:\n\t// Add the number of cats as a variable\n\t//Shouldn't take too long\n*/\nlet c = 500;""
--- Right [""Global Variable"",""TODO:\n\t// Add the number of cats as a variable\n\t//Shouldn't take too long\n""] <- correct
--- ""-------------------""
--- ""//First\n//Second//NotThird\n//Third""
--- Right [""First"",""Second//NotThird""] <- again fails to extract the last comment
--- ""-------------------""
-
-","1. If you replace sepEndBy1 with sepEndBy, that should take care of the problem with the ""no comments"" case failing.
-To handle the case of a final single-line comment with no terminating newline, try using:
-parseSingleLineComment :: Parser String
-parseSingleLineComment = do
-    string ""//""
-    many (noneOf ""\n"")
-
-After making these changes, there are several other test cases you should consider.  Asterisks in multiline comments cause the comment to be ignored.
-λ> printHelperF ""x = 3*4 /* not 3*5 */""
-""x = 3*4 /* not 3*5 */""
-Right []
-""-------------------""
-
-To fix this, you'll need something like:
-parseMultilineComment :: Parser String
-parseMultilineComment = do
-    string ""/*""
-    manyTill anyChar (try (string ""*/""))
-
-Also, unterminated multiline comments are treated as code:
-> printHelperF ""/* unterminated comment""
-""/* unterminated comment""
-Right []
-""-------------------""
-
-This should probably be a parse error instead.  Fixing this involves moving around some try logic.  Take the try calls out of parseComment:
-parseComment :: Parser String
-parseComment = parseSingleLineComment <|> parseMultilineComment
-
-and move them into the sub-functions:
-parseSingleLineComment :: Parser String
-parseSingleLineComment = do
-    try (string ""//"")
-    many (noneOf ""\n"")
-
-parseMultilineComment :: Parser String
-parseMultilineComment = do
-    try (string ""/*"")
-    manyTill anyChar (try (string ""*/""))
-
-The way this version of parseMultilineComment works is that a lone / character will cause the first parser to fail, but the try will ensure that no input is consumed (i.e., no comment was found).  On the other hand, if string ""/*"" succeeds, then manyTill will search for the terminating string ""*/"".  If it isn't found, the parser will fail, but only after consuming input (namely, the string ""/*"").  This will result in a parse error instead.
-For this to work correctly, we need to get rid of the try in parseNotComment:
-parseNotComment :: Parser String
-parseNotComment = manyTill anyChar (lookAhead (parseComment <|> parseEndOfFile))
-
-and we can also simplify extractComments, since its first line is now identical to parseNotComment, and the other try is redundant:
-extractComments :: Parser [String]
-extractComments = do
-  parseNotComment
-  xs <- sepEndBy parseComment parseNotComment
-  eof
-  return $ xs
-
-The final result should pass your tests, plus a few more:
-module Comments where
-
-import Text.ParserCombinators.Parsec
-
-parseSingleLineComment :: Parser String
-parseSingleLineComment = do
-    try (string ""//"")
-    many (noneOf ""\n"")
-
-parseMultilineComment :: Parser String
-parseMultilineComment = do
-    try (string ""/*"")
-    manyTill anyChar (try (string ""*/""))
-
-parseEndOfFile :: Parser String
-parseEndOfFile = do
-    x <- eof
-    return """"
-
-parseComment :: Parser String
-parseComment = parseSingleLineComment <|> parseMultilineComment
-
-parseNotComment :: Parser String
-parseNotComment = manyTill anyChar (lookAhead (parseComment <|> parseEndOfFile))
-
-extractComments :: Parser [String]
-extractComments = do
-  parseNotComment
-  xs <- sepEndBy parseComment parseNotComment
-  eof
-  return $ xs
-
-
-printHelperF :: String -> IO ()
-printHelperF s = do
-  print s
-  print $ parse extractComments ""Test Parser"" s
-  print ""-------------------""
-
--- main
-main :: IO ()
-main = do
-  let sample0 = ""No comments here""
-  let sample1 = ""//Hello there!\n//General Kenobi""
-  let sample2 = ""/* What's the deal with airline food?\nIt keeps getting worse and worse\nI can't take it anymore!*/""
-  let sample3 = "" //Global Variable\nlet x = 5;\n/*TODO:\n\t// Add the number of cats as a variable\n\t//Shouldn't take too long\n*/\nlet c = 500;""
-  let sample4 = ""//First\n//Second//NotThird\n//Third""
-  let sample5 = ""x = 3*4 /* not 3*5 */""
-  let sample6 = ""/* unterminated comment""
-  let sample6 = ""/* foo */ /* unterminated comment""
-  let sample7 = """"
-  let samples = [sample0, sample1, sample2, sample3, sample4, sample5, sample6, sample7]
-  mapM_ printHelperF samples
-
-giving output:
-""No comments here""
-Right []
-""-------------------""
-""//Hello there!\n//General Kenobi""
-Right [""Hello there!"",""General Kenobi""]
-""-------------------""
-""/* What's the deal with airline food?\nIt keeps getting worse and worse\nI can't take it anymore!*/""
-Right ["" What's the deal with airline food?\nIt keeps getting worse and worse\nI can't take it anymore!""]
-""-------------------""
-"" //Global Variable\nlet x = 5;\n/*TODO:\n\t// Add the number of cats as a variable\n\t//Shouldn't take too long\n*/\nlet c = 500;""
-Right [""Global Variable"",""TODO:\n\t// Add the number of cats as a variable\n\t//Shouldn't take too long\n""]
-""-------------------""
-""//First\n//Second//NotThird\n//Third""
-Right [""First"",""Second//NotThird"",""Third""]
-""-------------------""
-""x = 3*4 /* not 3*5 */""
-Right ["" not 3*5 ""]
-""-------------------""
-""/* foo */ /* unterminated comment""
-Left ""Test Parser"" (line 1, column 34):
-unexpected end of input
-expecting ""*/""
-""-------------------""
-""""
-Right []
-""-------------------""
-
-",Parsec
-"I am trying to summarize our detection data in a way that I can easily see when an animal moves from one pool to another. Here is an example of one animal that I track
-    tibble [22 x 13] (S3: tbl_df/tbl/data.frame)
-     $ Receiver     : chr [1:22] ""VR2Tx-480679"" ""VR2Tx-480690"" ""VR2Tx-480690"" ""VR2Tx-480690"" ...
-     $ Transmitter  : chr [1:22] ""A69-9001-12418"" ""A69-9001-12418"" ""A69-9001-12418"" ""A69-9001-12418"" ...
-     $ Species      : chr [1:22] ""PDFH"" ""PDFH"" ""PDFH"" ""PDFH"" ...
-     $ LocalDATETIME: POSIXct[1:22], format: ""2021-05-28 07:16:52"" ...
-     $ StationName  : chr [1:22] ""1405U"" ""1406U"" ""1406U"" ""1406U"" ...
-     $ LengthValue  : num [1:22] 805 805 805 805 805 805 805 805 805 805 ...
-     $ WeightValue  : num [1:22] 8.04 8.04 8.04 8.04 8.04 8.04 8.04 8.04 8.04 8.04 ...
-     $ Sex          : chr [1:22] ""NA"" ""NA"" ""NA"" ""NA"" ...
-     $ Translocated : num [1:22] 0 0 0 0 0 0 0 0 0 0 ...
-     $ Pool         : num [1:22] 16 16 16 16 16 16 16 16 16 16 ...
-     $ DeployDate   : POSIXct[1:22], format: ""2018-06-05"" ...
-     $ Latitude     : num [1:22] 41.6 41.6 41.6 41.6 41.6 ...
-     $ Longitude    : num [1:22] -90.4 -90.4 -90.4 -90.4 -90.4 ...
-
-I want to add columns that would let me summarize these data so that I have the start date for when an animal entered a pool and, once the animal moves to a different pool, the end date for when it exited.
-Ex: Enters Pool 19 on 1/1/22, next detected in Pool 20 on 1/2/22, so there would be columns that say fish entered and exited Pool 19 on 1/1/22 and 1/2/22. I have shared an Excel file example of what I am trying to do. I would like to code upstream movement with a 1 and downstream movement with 0.
-I have millions of detections and hundreds of animals that I monitor so I am trying to find a way to look at passages for each animal. Thank you!
-Here is my dataset using dput:
-structure(list(Receiver = c(""VR2Tx-480679"", ""VR2Tx-480690"", ""VR2Tx-480690"", 
-""VR2Tx-480690"", ""VR2Tx-480690"", ""VR2Tx-480690"", ""VR2Tx-480690"", 
-""VR2Tx-480690"", ""VR2Tx-480690"", ""VR2Tx-480690"", ""VR2Tx-480690"", 
-""VR2Tx-480690"", ""VR2Tx-480690"", ""VR2Tx-480692"", ""VR2Tx-480695"", 
-""VR2Tx-480695"", ""VR2Tx-480713"", ""VR2Tx-480713"", ""VR2Tx-480702"", 
-""VR100"", ""VR100"", ""VR100""), Transmitter = c(""A69-9001-12418"", 
-""A69-9001-12418"", ""A69-9001-12418"", ""A69-9001-12418"", ""A69-9001-12418"", 
-""A69-9001-12418"", ""A69-9001-12418"", ""A69-9001-12418"", ""A69-9001-12418"", 
-""A69-9001-12418"", ""A69-9001-12418"", ""A69-9001-12418"", ""A69-9001-12418"", 
-""A69-9001-12418"", ""A69-9001-12418"", ""A69-9001-12418"", ""A69-9001-12418"", 
-""A69-9001-12418"", ""A69-9001-12418"", ""A69-9001-12418"", ""A69-9001-12418"", 
-""A69-9001-12418""), Species = c(""PDFH"", ""PDFH"", ""PDFH"", ""PDFH"", 
-""PDFH"", ""PDFH"", ""PDFH"", ""PDFH"", ""PDFH"", ""PDFH"", ""PDFH"", ""PDFH"", 
-""PDFH"", ""PDFH"", ""PDFH"", ""PDFH"", ""PDFH"", ""PDFH"", ""PDFH"", ""PDFH"", 
-""PDFH"", ""PDFH""), LocalDATETIME = structure(c(1622186212, 1622381700, 
-1622384575, 1622184711, 1622381515, 1622381618, 1622381751, 1622381924, 
-1622382679, 1622383493, 1622384038, 1622384612, 1622183957, 1622381515, 
-1626905954, 1626905688, 1622971975, 1622970684, 1626929618, 1624616880, 
-1626084540, 1626954660), tzone = ""UTC"", class = c(""POSIXct"", 
-""POSIXt"")), StationName = c(""1405U"", ""1406U"", ""1406U"", ""1406U"", 
-""1406U"", ""1406U"", ""1406U"", ""1406U"", ""1406U"", ""1406U"", ""1406U"", 
-""1406U"", ""1406U"", ""1404L"", ""1401D"", ""1401D"", ""14Aux2"", ""14Aux2"", 
-""15.Mid.Wall"", ""man_loc"", ""man_loc"", ""man_loc""), LengthValue = c(805, 
-805, 805, 805, 805, 805, 805, 805, 805, 805, 805, 805, 805, 805, 
-805, 805, 805, 805, 805, 805, 805, 805), WeightValue = c(8.04, 
-8.04, 8.04, 8.04, 8.04, 8.04, 8.04, 8.04, 8.04, 8.04, 8.04, 8.04, 
-8.04, 8.04, 8.04, 8.04, 8.04, 8.04, 8.04, 8.04, 8.04, 8.04), 
-    Sex = c(""NA"", ""NA"", ""NA"", ""NA"", ""NA"", ""NA"", ""NA"", ""NA"", ""NA"", 
-    ""NA"", ""NA"", ""NA"", ""NA"", ""NA"", ""NA"", ""NA"", ""NA"", ""NA"", ""NA"", 
-    ""NA"", ""NA"", ""NA""), Translocated = c(0, 0, 0, 0, 0, 0, 0, 
-    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), Pool = c(16, 
-    16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 
-    16, 16, 16, 14, 14, 16), DeployDate = structure(c(1528156800, 
-    1528156800, 1528156800, 1528156800, 1528156800, 1528156800, 
-    1528156800, 1528156800, 1528156800, 1528156800, 1528156800, 
-    1528156800, 1528156800, 1528156800, 1528156800, 1528156800, 
-    1528156800, 1528156800, 1528156800, 1528156800, 1528156800, 
-    1528156800), tzone = ""UTC"", class = c(""POSIXct"", ""POSIXt""
-    )), Latitude = c(41.57471, 41.5758, 41.5758, 41.5758, 41.5758, 
-    41.5758, 41.5758, 41.5758, 41.5758, 41.5758, 41.5758, 41.5758, 
-    41.5758, 41.57463, 41.5731, 41.5731, 41.57469, 41.57469, 
-    41.57469, 41.57469, 41.57469, 41.57469), Longitude = c(-90.39944, 
-    -90.39793, -90.39793, -90.39793, -90.39793, -90.39793, -90.39793, 
-    -90.39793, -90.39793, -90.39793, -90.39793, -90.39793, -90.39793, 
-    -90.39984, -90.40391, -90.40391, -90.40462, -90.40462, -90.40462, 
-    -90.40462, -90.40462, -90.40462)), row.names = c(NA, -22L
-), class = c(""tbl_df"", ""tbl"", ""data.frame""))
-
-","1. Here is one possibility for getting the beginning date for entering a pool and ending date for leaving a pool. First, I group by Species (could also add additional grouping variables to distinguish between specimens) and arrange by the time. Then, I look for any changes to the Pool using cumsum. Then, I pull the first date recorded for the pool as the the date that they entered the pool. Then, I do some grouping and ungrouping to grab the date from the next group (i.e., the date the species left the pool) and then copy that date for the whole group. For determining upstream/downstream, we can use case_when inside of mutate. I'm also assuming that you want this to match the date, so I have filled in the values for each group with the movement for pool change.
-library(tidyverse)
-  
-df_dates <- df %>%
-  group_by(Species, Transmitter) %>%
-  arrange(Species, Transmitter, LocalDATETIME) %>%
-  mutate(changeGroup = cumsum(Pool != lag(Pool, default = -1))) %>%
-  group_by(Species, Transmitter, changeGroup) %>%
-  mutate(EnterPool = first(format(as.Date(LocalDATETIME), ""%m/%d/%Y""))) %>%
-  ungroup(changeGroup) %>%
-  mutate(LeftPool = lead(EnterPool)) %>%
-  group_by(Species, Transmitter, changeGroup) %>%
-  mutate(LeftPool = last(LeftPool)) %>% 
-  ungroup(changeGroup) %>% 
-  mutate(stream = case_when((Pool - lag(Pool)) > 0 ~ 0,
-                            (Pool - lag(Pool)) < 0 ~ 1)) %>% 
-  fill(stream, .direction = ""down"")
-
-Output
-print(as_tibble(df_dates[1:24, c(1:5, 10:17)]), n=24)
-
-# A tibble: 24 × 13
-   Receiver     Transmitter    Species LocalDATETIME       StationName  Pool DeployDate          Latitude Longitude changeGroup EnterPool  LeftPool   stream
-   <chr>        <chr>          <chr>   <dttm>              <chr>       <dbl> <dttm>                 <dbl>     <dbl>       <int> <chr>      <chr>       <dbl>
- 1 VR2Tx-480690 A69-9001-12418 PDFH    2021-05-28 06:39:17 1406U          16 2018-06-05 00:00:00     41.6     -90.4           1 05/28/2021 06/25/2021     NA
- 2 VR2Tx-480690 A69-9001-12418 PDFH    2021-05-28 06:51:51 1406U          16 2018-06-05 00:00:00     41.6     -90.4           1 05/28/2021 06/25/2021     NA
- 3 VR2Tx-480679 A69-9001-12418 PDFH    2021-05-28 07:16:52 1405U          16 2018-06-05 00:00:00     41.6     -90.4           1 05/28/2021 06/25/2021     NA
- 4 VR2Tx-480690 A69-9001-12418 PDFH    2021-05-30 13:31:55 1406U          16 2018-06-05 00:00:00     41.6     -90.4           1 05/28/2021 06/25/2021     NA
- 5 VR2Tx-480692 A69-9001-12418 PDFH    2021-05-30 13:31:55 1404L          16 2018-06-05 00:00:00     41.6     -90.4           1 05/28/2021 06/25/2021     NA
- 6 VR2Tx-480690 A69-9001-12418 PDFH    2021-05-30 13:33:38 1406U          16 2018-06-05 00:00:00     41.6     -90.4           1 05/28/2021 06/25/2021     NA
- 7 VR2Tx-480690 A69-9001-12418 PDFH    2021-05-30 13:35:00 1406U          16 2018-06-05 00:00:00     41.6     -90.4           1 05/28/2021 06/25/2021     NA
- 8 VR2Tx-480690 A69-9001-12418 PDFH    2021-05-30 13:35:51 1406U          16 2018-06-05 00:00:00     41.6     -90.4           1 05/28/2021 06/25/2021     NA
- 9 VR2Tx-480690 A69-9001-12418 PDFH    2021-05-30 13:38:44 1406U          16 2018-06-05 00:00:00     41.6     -90.4           1 05/28/2021 06/25/2021     NA
-10 VR2Tx-480690 A69-9001-12418 PDFH    2021-05-30 13:51:19 1406U          16 2018-06-05 00:00:00     41.6     -90.4           1 05/28/2021 06/25/2021     NA
-11 VR2Tx-480690 A69-9001-12418 PDFH    2021-05-30 14:04:53 1406U          16 2018-06-05 00:00:00     41.6     -90.4           1 05/28/2021 06/25/2021     NA
-12 VR2Tx-480690 A69-9001-12418 PDFH    2021-05-30 14:13:58 1406U          16 2018-06-05 00:00:00     41.6     -90.4           1 05/28/2021 06/25/2021     NA
-13 VR2Tx-480690 A69-9001-12418 PDFH    2021-05-30 14:22:55 1406U          16 2018-06-05 00:00:00     41.6     -90.4           1 05/28/2021 06/25/2021     NA
-14 VR2Tx-480690 A69-9001-12418 PDFH    2021-05-30 14:23:32 1406U          16 2018-06-05 00:00:00     41.6     -90.4           1 05/28/2021 06/25/2021     NA
-15 VR2Tx-480713 A69-9001-12418 PDFH    2021-06-06 09:11:24 14Aux2         16 2018-06-05 00:00:00     41.6     -90.4           1 05/28/2021 06/25/2021     NA
-16 VR2Tx-480713 A69-9001-12418 PDFH    2021-06-06 09:32:55 14Aux2         16 2018-06-05 00:00:00     41.6     -90.4           1 05/28/2021 06/25/2021     NA
-17 VR100        A69-9001-12418 PDFH    2021-06-25 10:28:00 man_loc        14 2018-06-05 00:00:00     41.6     -90.4           2 06/25/2021 07/21/2021      1
-18 VR100        A69-9001-12418 PDFH    2021-07-12 10:09:00 man_loc        14 2018-06-05 00:00:00     41.6     -90.4           2 06/25/2021 07/21/2021      1
-19 VR2Tx-480695 A69-9001-12418 PDFH    2021-07-21 22:14:48 1401D          16 2018-06-05 00:00:00     41.6     -90.4           3 07/21/2021 NA              0
-20 VR2Tx-480695 A69-9001-12418 PDFH    2021-07-21 22:19:14 1401D          16 2018-06-05 00:00:00     41.6     -90.4           3 07/21/2021 NA              0
-21 VR2Tx-480702 A69-9001-12418 PDFH    2021-07-22 04:53:38 15.Mid.Wall    16 2018-06-05 00:00:00     41.6     -90.4           3 07/21/2021 NA              0
-22 VR100        A69-9001-12418 PDFH    2021-07-22 11:51:00 man_loc        16 2018-06-05 00:00:00     41.6     -90.4           3 07/21/2021 NA              0
-23 AR100        B80-9001-12420 PDFH    2021-07-22 11:51:00 man_loc        19 2018-06-05 00:00:00     42.6     -90.4           1 07/22/2021 07/22/2021     NA
-24 AR100        B80-9001-12420 PDFH    2021-07-22 11:51:01 man_loc        18 2018-06-05 00:00:00     42.6     -90.4           2 07/22/2021 NA              1
-
-Data
-df <- structure(list(Receiver = c(""VR2Tx-480679"", ""VR2Tx-480690"", ""VR2Tx-480690"", 
-                            ""VR2Tx-480690"", ""VR2Tx-480690"", ""VR2Tx-480690"", ""VR2Tx-480690"", 
-                            ""VR2Tx-480690"", ""VR2Tx-480690"", ""VR2Tx-480690"", ""VR2Tx-480690"", 
-                            ""VR2Tx-480690"", ""VR2Tx-480690"", ""VR2Tx-480692"", ""VR2Tx-480695"", 
-                            ""VR2Tx-480695"", ""VR2Tx-480713"", ""VR2Tx-480713"", ""VR2Tx-480702"", 
-                            ""VR100"", ""VR100"", ""VR100"", ""AR100"", ""AR100""), Transmitter = c(""A69-9001-12418"", 
-                                                                        ""A69-9001-12418"", ""A69-9001-12418"", ""A69-9001-12418"", ""A69-9001-12418"", 
-                                                                        ""A69-9001-12418"", ""A69-9001-12418"", ""A69-9001-12418"", ""A69-9001-12418"", 
-                                                                        ""A69-9001-12418"", ""A69-9001-12418"", ""A69-9001-12418"", ""A69-9001-12418"", 
-                                                                        ""A69-9001-12418"", ""A69-9001-12418"", ""A69-9001-12418"", ""A69-9001-12418"", 
-                                                                        ""A69-9001-12418"", ""A69-9001-12418"", ""A69-9001-12418"", ""A69-9001-12418"", 
-                                                                        ""A69-9001-12418"", ""B80-9001-12420"", ""B80-9001-12420""), Species = c(""PDFH"", ""PDFH"", ""PDFH"", ""PDFH"", 
-                                                                                                       ""PDFH"", ""PDFH"", ""PDFH"", ""PDFH"", ""PDFH"", ""PDFH"", ""PDFH"", ""PDFH"", 
-                                                                                                       ""PDFH"", ""PDFH"", ""PDFH"", ""PDFH"", ""PDFH"", ""PDFH"", ""PDFH"", ""PDFH"", 
-                                                                                                       ""PDFH"", ""PDFH"", ""PDFH"", ""PDFH""), LocalDATETIME = structure(c(1622186212, 1622381700, 
-                                                                                                                                                    1622384575, 1622184711, 1622381515, 1622381618, 1622381751, 1622381924, 
-                                                                                                                                                    1622382679, 1622383493, 1622384038, 1622384612, 1622183957, 1622381515, 
-                                                                                                                                                    1626905954, 1626905688, 1622971975, 1622970684, 1626929618, 1624616880, 
-                                                                                                                                                    1626084540, 1626954660, 1626954661, 1626954660), class = c(""POSIXct"", ""POSIXt""), tzone = ""UTC""), 
-               StationName = c(""1405U"", ""1406U"", ""1406U"", ""1406U"", ""1406U"", 
-                               ""1406U"", ""1406U"", ""1406U"", ""1406U"", ""1406U"", ""1406U"", ""1406U"", 
-                               ""1406U"", ""1404L"", ""1401D"", ""1401D"", ""14Aux2"", ""14Aux2"", ""15.Mid.Wall"", 
-                               ""man_loc"", ""man_loc"", ""man_loc"", ""man_loc"", ""man_loc""), LengthValue = c(805, 805, 
-                                                                                 805, 805, 805, 805, 805, 805, 805, 805, 805, 805, 805, 805, 
-                                                                                 805, 805, 805, 805, 805, 805, 805, 805, 805, 805), WeightValue = c(8.04, 
-                                                                                                                                          8.04, 8.04, 8.04, 8.04, 8.04, 8.04, 8.04, 8.04, 8.04, 8.04, 
-                                                                                                                                          8.04, 8.04, 8.04, 8.04, 8.04, 8.04, 8.04, 8.04, 8.04, 8.04, 
-                                                                                                                                          8.04, 8.04, 8.04), Sex = c(""NA"", ""NA"", ""NA"", ""NA"", ""NA"", ""NA"", ""NA"", 
-                                                                                                                                                         ""NA"", ""NA"", ""NA"", ""NA"", ""NA"", ""NA"", ""NA"", ""NA"", ""NA"", ""NA"", 
-                                                                                                                                                         ""NA"", ""NA"", ""NA"", ""NA"", ""NA"", ""NA"", ""NA""), Translocated = c(0, 0, 0, 
-                                                                                                                                                                                                         0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), 
-               Pool = c(16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 
-                        16, 16, 16, 16, 16, 16, 16, 14, 14, 16, 18, 19), DeployDate = structure(c(1528156800, 
-                                                                                          1528156800, 1528156800, 1528156800, 1528156800, 1528156800, 
-                                                                                          1528156800, 1528156800, 1528156800, 1528156800, 1528156800, 
-                                                                                          1528156800, 1528156800, 1528156800, 1528156800, 1528156800, 
-                                                                                          1528156800, 1528156800, 1528156800, 1528156800, 1528156800, 
-                                                                                          1528156800, 1528156800, 1528156800), class = c(""POSIXct"", ""POSIXt""), tzone = ""UTC""), 
-               Latitude = c(41.57471, 41.5758, 41.5758, 41.5758, 41.5758, 
-                            41.5758, 41.5758, 41.5758, 41.5758, 41.5758, 41.5758, 41.5758, 
-                            41.5758, 41.57463, 41.5731, 41.5731, 41.57469, 41.57469, 
-                            41.57469, 41.57469, 41.57469, 41.57469, 42.57469, 42.57469), Longitude = c(-90.39944, 
-                                                                                   -90.39793, -90.39793, -90.39793, -90.39793, -90.39793, -90.39793, 
-                                                                                   -90.39793, -90.39793, -90.39793, -90.39793, -90.39793, -90.39793, 
-                                                                                   -90.39984, -90.40391, -90.40391, -90.40462, -90.40462, -90.40462, 
-                                                                                   -90.40462, -90.40462, -90.40462, -90.40470, -90.40470)), class = c(""tbl_df"", ""tbl"", 
-                                                                                                                                ""data.frame""), row.names = c(NA, -24L))
-
-",Passage
-"I have application in which i uploads multiple images using zip file.
-It is working fine in my local system and also working fine on server if I upload only 2 images but when I am trying to upload more than 20 images in a single zip it gives me following error :
-Failed to load resource: the server responded with a status of 504 (GATEWAY_TIMEOUT)
-I am using centos + apache + passenger on server.
-environment
-OS: Centos7
-ruby: 2.2.3 installed with rvm
-passenger 5.5.0
-rails: 4
-PostgreSQL: latest version
-apache installed with passenger
-If you have any solution please answer.
-","1. Normally, server need continues connection for file or image upload, but when file size is large and if you don't need to do any processing on file then we need to set background job for same.
-So, server upload that file gradually and not throw any kind of error like 504 GATEWAY_TIMEOUT.
-Hope, this will help you. 
-",Passage
-"I would like to instantiate the project.toml that's build in in a Pluto notebook with the native package manager. How do I read it from the notebook?
-Say, I have a notebook, e.g.,
-nb_source = ""https://raw.githubusercontent.com/fonsp/Pluto.jl/main/sample/Interactivity.jl""
-
-How can I create a temporary environment, and get the packages for the project of this notebook? In particular, how do I complete the following code?
-cd(mktempdir()) 
-import Pkg; Pkg.activate(""."") 
-import Pluto, Pkg 
-
-nb = download(nb_source, ""."") 
-
-### Some code using Pluto's build in package manager 
-### to read the Project.toml from nb --> nb_project_toml 
-
-cp(nb_project_toml, ""./Project.toml"", force=true) 
-Pkg.instantiate(""."")
-
-","1. So, first of all, the notebook you are looking at is a Pluto 0.17.0 notebook, which does not have the internal package manager. I think it was added in Pluto 0.19.0.
-This is what the very last few cells look like in a notebook using the internal pluto packages:
-# ╔═╡ 00000000-0000-0000-0000-000000000001
-PLUTO_PROJECT_TOML_CONTENTS = """"""
-[deps]
-Plots = ""91a5bcdd-55d7-5caf-9e0b-520d859cae80""
-PlutoUI = ""7f904dfe-b85e-4ff6-b463-dae2292396a8""
-PyCall = ""438e738f-606a-5dbb-bf0a-cddfbfd45ab0""
-Statistics = ""10745b16-79ce-11e8-11f9-7d13ad32a3b2""
-
-[compat]
-Plots = ""~1.32.0""
-PlutoUI = ""~0.7.40""
-PyCall = ""~1.94.1""
-""""""
-
-# ╔═╡ 00000000-0000-0000-0000-000000000002
-PLUTO_MANIFEST_TOML_CONTENTS = """"""
-# This file is machine-generated - editing it directly is not advised
-
-julia_version = ""1.8.0""
-...
-
-so you could add something like:
-include(nb)
-write(""./Project.toml"", PLUTO_PROJECT_TOML_CONTENTS)
-
-This has the drawback of running all the code in your notebook, which might take a while.
-Alternatively, you could read the notebook file until you find the # ╔═╡ 00000000-0000-0000-0000-000000000001 line and then either parse the following string yourself or eval everything after that (something like eval(Meta.parse(string_stuff_after_comment)) should do it...)
-I hope that helps a little bit.
-
-2. The Pluto.load_notebook_nobackup() reads the information of a notebook. This gives a dictionary of deps in the field .nbpkg_ctx.env.project.deps
-import Pluto, Pkg 
-Pkg.activate(;temp=true)  
-nb_source = ""https://raw.githubusercontent.com/fonsp/Pluto.jl/main/sample/PlutoUI.jl.jl"" 
-nb = download(nb_source)
-nb_info = Pluto.load_notebook_nobackup(nb)
-deps = nb_info.nbpkg_ctx.env.project.deps 
-Pkg.add([Pkg.PackageSpec(name=p, uuid=u) for (p, u) in deps])
-
-",Pluto
-"I would like to use the package HDF5
-In my Pluto.jl, I have the line
-using HDF5
-
-When I try to evaluate this cell, I get the error message
-""ERROR: LoadError: HDF5 is not properly installed. Please run Pkg.build(""HDF5"") and restart Julia.""
-I would like to do this, but when I go to the terminal, I can't do this while I have Pluto open.
-I've tried running Pluto in the background with a command like
-Pluto.run() &
-
-But this code is completely wrong.
-I've also heard that there sometimes appears a cloud icon above the cell, which would allow me to download HDF5 directly.
-In any case, it seems to me like any time this happens, I will have to write down which package I need to install, and then kill my Pluto notebook, go to Julia, install, and restart Julia. Surely there is a better way? Can anyone help me find it?
-","1. When the package is correctly installed or could be installed without problems, using HDF5 in Pluto itself is sufficient. The built-in Pluto package manager takes care about the installation.
-There are edge cases where due to issues with external packages installation does not work out-of-the-box. In this case, it could help to install the package in a temp environment before starting Pluto:
-] activate --temp
-] add HDF5
-
-followed by whatever steps are required to get the package working in Julia itself, like re-building it.
-This should really be a workaround and should be fixed in the corresponding package - consider creating an Issue there if it does not exist already.
-",Pluto
-"Shopify recently released their new @shopify/app-bridge, but it is unclear to me how it should be used alongside @shopify/polaris.
-For example, I have tried to make a React component that will use the app-bridge and polaris to display a toast.
-import React, { Component } from ""react"";
-import * as PropTypes from ""prop-types"";
-import { Toast } from ""@shopify/app-bridge/actions"";
-import { Page } from ""@shopify/polaris"";
-
-class Start extends Component {
-  static contextTypes = {
-    polaris: PropTypes.object
-  };
-
-  showToast() {
-    console.log(""SHOW TOAST"");
-    console.log(this.context.polaris.appBridge);
-    const toastNotice = Toast.create(this.context.polaris.appBridge, {
-      message: ""Test Toast"",
-      duration: 5000
-    });
-    toastNotice.dispatch(Toast.Action.SHOW);
-  }
-
-  render() {
-    this.showToast();
-    return (
-      <Page title=""Do you see toast?"">
-        <p>I do not see toast.</p>
-      </Page>
-    );
-  }
-}
-
-export default Start;
-
-But it does not seem to dispatch the action. Any ideas on why not? Note that my app is wrapped in the AppProvider and app-bridge is initialized.
-ReactDOM.render(
-  <AppProvider
-    apiKey={process.env.REACT_APP_SHOPIFY_API_KEY}
-    shopOrigin={queryString.parse(window.location.search).shop}
-  >
-    <Start />
-  </AppProvider>,
-  document.getElementById(""root"")
-);
-
-Any suggestions?
-","1. So after a lot of debugging, I found out from Shopify that inside App Bridge, before taking any action, they check that the localOrigin matches the appURL (one that's entered in the partners dashboard). In my case, I have a backend (node.js on heroku used for authentication) and a frontend (react bundle on firebase) my app starts by hitting the backend, and then if authentication checks out, it redirects to the front end. And hence the localOrigin does not match... hmmm, I'm very glad to have figured this out since I lost a lot of sleep over it. Now the question is what to do about it... maybe this is something that could be updated with AppBridge? Or is there a better design I should be considering?
-
-2. There is now @shopify/app-bridge-react,
-https://www.npmjs.com/package/@shopify/app-bridge-react
-Shopify supposedly doesn't have docs for it yet though... But, someone can update my answer when they come out with them. :)
-
-NOTE: 
-Be sure to have static contextType = Context; to get access to this.context for dispatching actions/etc. in your components.
-(Hopefully this saves you days of suffering, haha. I'm not a React developer, so, yeah... this was not marked as ""crucial"" or anything in the examples.)
-
-I also wanted to address @SomethingOn's comment, but I don't have enough reputation to comment...
-You actually can debug an iframe. In chrome dev tools, on top where it says ""top"", you can actually select a frame that you want to debug.
-https://stackoverflow.com/a/8581276/10076085
-Once you select the Shopify App iframe, type in ""window.location"" or whatever you want!
-Shopify's docs and examples are limited and I'm running into a bunch of issues myself working on a Shopify App, so I just want to spread help as much as possible!
-
-3. To use Shopify App Bridge with Shopify Polaris, you first need to make sure your app is embedded in the Shopify admin. Secondly, the App Bridge package @shopify/app-bridge-react is the one that is compatible with @shopify/polaris-react, not @shopify/app-bridge.
-App Bridge is used to let your app communicate with the Shopify admin or Shopify APIs without you having to handle API authentication yourself.
-For more information, refer to the link.
-",Polaris
-"I'm using shopify polaris to create my shopify app and I'm trying to use the Layout component along with the Card component to properly structure the page. So right now, I have two cards inside of a <Layout> but the problem is, since the two cards don't have the same amount of data in them, they are of different height (Polaris adjusts the size on its own), and it makes it look bad. Here is an example of what I'm talking about: https://codesandbox.io/s/y37on11x9j .
-If you open the sandbox result on a separate browser, to see it in full screen, you'll see that there are two cards that have different heights. I just want them to be the same height. Can I get any help in doing this? Thanks!
-","1. Add a Box element inside of the Card element and use the minHeight property: https://polaris.shopify.com/components/layout-and-structure/box
-Your layout section should look something like this. If you have three of these Layout.Sections next to each other, they would all have the same height unless one extends past the minHeight.
-<Layout.Section variant=""oneThird"">
-  <Card>
-    <Box minHeight=""460px"">
-      <BlockStack gap=""500"">
-        <Text as=""h2"" variant=""headingMd"">
-          Small Header
-        </Text>
-        <Text as=""h3"" variant=""headingXl"">
-          Big Header
-        </Text>
-        <Text variant=""bodyMd"">
-          Body Text
-        </Text>
-      </BlockStack>
-    </Box>
-  </Card>
-</Layout.Section>
-
-",Polaris
-"I'm encountering an issue with my Shopify app development environment where CSS styles are not reloading properly during hot reload. Whenever I make changes to CSS files and save them in my code editor, the styles disappear and I need to manually reload the app to see the changes. This behavior is causing inconvenience and slowing down the development process.
-I'm looking for guidance on how to troubleshoot and fix this issue with CSS not reloading and styles disappearing during hot reload in my Shopify app development environment. Any insights, suggestions, or known solutions would be greatly appreciated.
-","1. If you're using Chrome, try opening your app in an Incognito window to see if the styles reload with HMR.
-I had the same problem, and it turned out I had a Chrome Extension installed that was interfering with the Remix hydration.
-",Polaris
-"In case we pass environment variable for values of a gradle's project property
-say in build.gradle say, we use:
-url ""https://${nexusDomain}/repository/gvhvid-maven-public""
-
-then running snyk test for a gradle project fails with following error:
-ERROR: Gradle Error (short):
-> Could not get unknown property 'nexusDomain' for root project
-'gvhvid-service' of type org.gradle.api.Project.
-
-
-because snyk test implicitly runs following command:
-/builds/gvhvid/gradlew' snykResolvedDepsJson -q --build-file /builds/gvhvid/build.gradle 
--Dorg.gradle.parallel= -Dorg.gradle.console=plain -I /tmp/tmp-580-SoPWUNy9S4bm--init.gradle 
---no-configuration-cache
-
-A possible solution is to pass the value for the project property nexusDomain via the Snyk CLI.
-So the fix I tried was the Snyk CLI option --configuration-attributes, but that does not seem to be working!
-","1. Fix that I found is to pass the value for project property as an environment variables visible to gradle using special prefix ORG_GRADLE_PROJECT_ to the build
-say:
-ORG_GRADLE_PROJECT_nexusDomain=""gvh.vid.com"" 
-
-and then run snyk test worked! ☺️
-",Snyk
-"I added my github projects to snyk.io portal to check vulnerabilities. Sadly, snyk is only checking files ending with the .json, .yml, .txt etc. It's not checking vulnerabilities in typescript, js, java, python files. I tried this couple times, same result, no change. Any suggestion?
-
-","1. I assume you're referring to SAST scan in your own code, not SCA / open source dependencies, right? Then it should be under ""Code Analysis"" (second item in your screenshot); that's where the SAST results appear. Everything else in above screenshot are results from SCA scans.
-Can you open the ""Code Analysis"" and see what's in the report / which file types are shown there?
-",Snyk
-"Essentially, I want an Iam role from AccountA to be able to manage a dynamodb table in AccountB, but the deployment that I am using does not support sts:AssumeRole (not my choice).  I faced this same issue with an S3, but I was able to add an S3 bucket policy that allowed the Iam role from AccountB to access it (see below).  Is there anything similar for dynamodb tables?
-Thanks all :D
-{
-    ""Version"": ""2012-10-17"",
-    ""Statement"": [
-        {
-            ""Effect"": ""Allow"",
-            ""Principal"": {
-                ""AWS"": ""arn:aws:iam::AccountB:role/iam-role-name""
-            },
-            ""Action"": ""*"",
-            ""Resource"": [
-                ""arn:aws:s3:::bucket-name"",
-                ""arn:aws:s3:::bucket-name/*""
-            ]
-        }
-    ]
-}
-
-","1. The only way that you can manage a table in another account is by assuming a role.
-Unlike S3, DynamoDB does not support resource based access control. Unfortunately there are no simple workarounds as IAM is a security feature.
-
-2. Amazon DynamoDB now supports resource-based policies https://aws.amazon.com/about-aws/whats-new/2024/03/amazon-dynamodb-resource-based-policies/
-You can follow the instructions here to specify a resource policy for a DynamoDB table:
-https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/rbac-create-table.html
-
-Open the DynamoDB console at https://console.aws.amazon.com/dynamodb/
-On the dashboard, choose Create table or choose an already created table (in which case skip the next step and go to ""Permissions"" tab to create your resource policy)
-In Table settings, choose Customize settings.
-In Resource-based policy, add a policy to define the access
-permissions for the table and its indexes. In this policy, you specify
-who has access to these resources, and the actions they are allowed to
-perform on each resource.
-
-For example you would add the following resource policy to your table in Account A, to grant this IAM role in account B to write and read the table.
-{
-  ""Version"": ""2012-10-17"",
-  ""Statement"": [
-    {
-      ""Effect"": ""Allow"",
-      ""Principal"": {
-        ""AWS"": ""arn:aws:iam::AccountB:role/iam-role-name""
-      },
-      ""Action"": [
-        ""dynamodb:GetItem"",
-        ""dynamodb:PutItem""
-      ],
-      ""Resource"": ""arn:aws:dynamodb:Region:AccountA:table/myTable""
-    }
-  ]
-}
-
-",Teleport
-"for a CI/CD pipeline, i need an image for connecting to my teleport cluster to use a bot, which i will a create. Therefore i have installed gravitational/teleport:12.4.11 (following this link ) with all required tools. The Log-in using tsh login --proxy=myteleport.registry.com works fine, but the following tctl get usersor tctl get roles  --format=text throws ERROR: access denied to perform action ""list"" on ""role"", access denied to perform action ""read"" on ""role"".
-I highly appreciate any tips or suggestions you may give to resolve this.
-","1. It seems that the user who logged in using the tsh login command does not have the necessary privileges to view a list of users or roles with tctl.
-You can try adding a role that grants the required permissions. Here's an example of a role configuration manage-users-and-roles.yaml:
-kind: role
-metadata:
-  description: role to manage users & roles 
-  name: manage-users-and-roles
-spec:
-  allow:
-    rules:
-    - resources:
-      - user
-      - role
-      verbs:
-      - list
-      - create
-      - read
-      - update
-      - delete
-  deny: {}
-version: v4
-
-Add this role to Teleport:
-tctl create -f manage-users-and-roles.yaml
-
-And then link this role with your user:
-tctl users update <your-username> --set-roles <existing-roles>,manage-users-and-roles
-
-
-Note that you should be connected to your Teleport server as the admin user
-You can find more information about managing roles on teleport in their docs :
-
-https://goteleport.com/docs/access-controls/guides/role-templates/
-
-",Teleport
-"I'm trying to generate a HTML report from Trivy.  On the example page, they provide trivy image --format template --template ""@contrib/html.tpl"" -o report.html golang:1.12-alpine.  When I run this, I get the following error,
-FATAL  report error: unable to write results: failed to initialize template writer: error retrieving template from path: open contrib/html.tpl: no such file or directory 
-Based on the documentation, it looks like this is a default template, so I'm assuming it's included. My logic here is that there is no ""/"" following the ""@"" in the template path.
-I'm currently on version 0.41.0
-","1. I did not install trivy from RPM and had no files other than a solid trivy binary of a few 10 MBs. Downloaded the html.tpl template from their github repo https://github.com/aquasecurity/trivy/blob/main/contrib/html.tpl and placed it in /usr/bin/html.tpl and used the command line: trivy image --format template --template ""@/usr/bin/html.tpl"" -o report.html image-name
-
-2. Not sure what operating system you're using, but on Kali Linux I do the following:
-trivy fs FOLDER_PATH --format template --template 
-      ""@/usr/share/trivy/templates/html.tpl"" --output NAME_OF_FOLDER.html
-
-I like customizing the .tpl file to generate a html page to my liking, so I generally copy the file somewhere else and edit it.
---template ""@/home/user/Desktop/html.tpl"" (or wherever you wanna put it)
-The example you link is from old documentation. Here's something newer that might help: https://aquasecurity.github.io/trivy/v0.41/docs/configuration/reporting/#default-templates
-
-3. According to the documentation for 0.41 you need to find out the path where the templates are saved on your system. I also use the standard template and need to set ""@/usr/local/share/trivy/templates/html.tpl"" as path. I guess it depends on your operating system and the way you've installed trivy. I run it on a centOS 7 System
-Information from the Documentation:
-
-Example command from documentation: $ trivy image --format template --template ""@/path/to/template"" golang:1.12-alpine -o your_output.html 
-",Trivy
-"I'm interested in developing an alternative authentication method for authorizing an agent with Spire, one that involves authenticating the device based on a specific pattern(like the time it takes to type the password or something like that).
-I've experimented with the NodeAuthenticator in the Spire-server configuration, but I believe I might be making changes to the wrong component (specifically, I made an edit to the generateKey method). I'm curious if I'm heading in the right direction and whether the file I'm currently editing is the appropriate one for this task.
-","1. I believe you are heading in the wrong direction, and your example just happens to be a poor example.
-SPIRE provides identities for workloads (basically a service or set of services).  As a result, delays in typing passwords wouldn't be possible, as SPIRE doesn't provide user identities (items that identify users).  Workloads don't ""type in passwords"" so they can't have delays in typing them.
-Also, SPIRE doesn't use passwords.  Authentication, in this case, is more about proving one has a private key.  The ""client side"" sends a small generated document to the ""other side"" which encrypts the document with the private key.  The encrypted response is then sent back to the client.  The client decrypts the document with a public key matching the private key, and if the documents match, then one can know (by the virtue of cryptographic verification) that they are talking to the correct service, the one identified by the public certificate containing the key used to decrypt the message.  In mTLS, both sides act as clients validating the other side's private keys (each different) against the other side's public certificates (again different, but matching the private keys)
-Now to handle your question about NodeAttestor, that's not part of the above authentication process.  The NodeAttestor is part of the rotating certificate delivery infrastructure.  Basically a ""node"" or ""agent"" is located on the machine where certificates are distributed to the local processes.  The NodeAttestor is the way the node can prove / attest it should be part of the SPIRE server's installation.  This is a little like the workload attestation (proving a process should get a SVID, a private key / certificate bundle), but is scoped only for SPIRE maintaining its own deployment.
-If you want to change how a process would prove itself to the system (maybe add a special field), look into the workload attestor plugins.
-",SPIFFE
-"I am running a jar on spark slave with version spark-2.5.6-bin-hadoop where i am getting this error on submitting the jar
-Exception occurred while create new JwtSource
-java.lang.NoClassDefFoundError: Could not initialize class io.grpc.netty.shaded.io.grpc.netty.NettyChannelBuilder
-    at io.spiffe.workloadapi.internal.GrpcManagedChannelFactory.createNativeSocketChannel(GrpcManagedChannelFactory.java:55)
-    at io.spiffe.workloadapi.internal.GrpcManagedChannelFactory.newChannel(GrpcManagedChannelFactory.java:41)
-    at io.spiffe.workloadapi.DefaultWorkloadApiClient.newClient(DefaultWorkloadApiClient.java:133)
-    at io.spiffe.workloadapi.DefaultJwtSource.createClient(DefaultJwtSource.java:221)
-    at io.spiffe.workloadapi.DefaultJwtSource.newSource(DefaultJwtSource.java:90)
-
-The spiffe dependency i am using is this
-{
-    ""bagAttributes"":
-    {
-        ""entityId"": ""e51810ec.0372.2342.8d1e.140a06aad6baWCmFcmhRHh"",
-        ""created"": 1627654402157,
-        ""updated"": 1694592585758,
-        ""version"": 1,
-        ""elements"":
-        {
-            ""U.S. Polo Assn.##Casual Shoes##Men"":
-            {
-                ""created"": 1694527195590,
-                ""updated"": 1694527195590,
-                ""lastPrice"": 1799.0,
-                ""lastPriceTs"": 1694527178000,
-                ""latestStyleId"": ""10339033"",
-                ""viewCount"": 1,
-                ""latestEventTs"": 1694527178000
-            }
-        }
-    }
-}
-
-From whatever solutions I could find online, it seems the issue is a Guava version conflict: the jar I am deploying is built with guava 29.0-jre, while the Spark slave is picking up the guava-14.0 jar from /opt/spark-2.4.6-bin-hadoop2.7/jars.
-Please show me how to resolve these dependency conflict issues.
-","1. The general approach is to take the older library and update it, possibly updating any of the other items that required the older library to the newer versions that require the newer library.
-In your case, you should probably update spark, guava, and possibly a bit more.
-",SPIFFE
-"Can SPIFFE/SPIRE Server be installed on GKE's any node? If yes, one node out of other nodes in cluster will have server and agents both installed. Is it required to have agent running on that node also who is running SPIRE Server?
-Please explain.
-","1. As per the comment received on SPIRE Slack
-On GKE (and other hosted k8s) you only get worker nodes, so there's no way to deploy to the master anyway. But, In the end, there's pluses (potential security) and minuses (scalability) to running SPIRE server on the master. In practice it's probably less likely than likely, but it's a fair debate.
-Typically, you would deploy SPIRE server as a StatefulSet to some number of nodes consistent with scalability and availability goals, and deploy SPIRE agent as a DaemonSet where it's going to run on every node in the cluster.
-Unless you are doing some very specific targeted deployments via the k8s scheduler, such as separate node pools or subsets of nodes scheduled via label selectors for very specific use-cases (where you won't run any SPIFFE workloads), that's the way I'd approach it - put SPIRE agent on all nodes so it's available for all workloads.
-
-2. There is no need to run the SPIRE server in the Kubernetes management plane, or on the Kubernetes management nodes.
-Run your SPIRE Server(s) on the worker nodes, ensuring you have a sufficient number of Servers to meet your fault tolerance needs. Use a Kubernetes Service object to distribute your SPIRE agent's connections across your server pool.
-",SPIFFE
-"I am testing Spire.PDF with example code from this site: https://www.nuget.org/packages/FreeSpire.PDF
-    //Create a pdf document.
-    PdfDocument doc = new PdfDocument();
-
-    PdfPageSettings setting = new PdfPageSettings();
-
-    setting.Size = new SizeF(1000,1000);
-    setting.Margins = new Spire.Pdf.Graphics.PdfMargins(20);
-
-    PdfHtmlLayoutFormat htmlLayoutFormat = new PdfHtmlLayoutFormat();
-    htmlLayoutFormat.IsWaiting = true;
-    
-    String url = ""https://www.wikipedia.org/"";
- 
-    Thread thread = new Thread(() =>
-    { doc.LoadFromHTML(url, false, false, false, setting,htmlLayoutFormat); });
-    thread.SetApartmentState(ApartmentState.STA);
-    thread.Start();
-    thread.Join();
-
-    //Save pdf file.
-    doc.SaveToFile(""output-wiki.pdf"");
-
-
-I have imported the nuget package. It manages to find several of the types but not PdfHtmlLayoutFormat.
-","1. You need to add the following namespace:
-using Spire.Pdf.HtmlConverter;
-
-For more information, you can visit this link: https://www.e-iceblue.com/Tutorials/Spire.PDF/Spire.PDF-Program-Guide/Convert-HTML-to-PDF-Customize-HTML-to-PDF-Conversion-by-Yourself.html
-
-2. The LoadFromHtml method is now removed.
-",SPIRE
-"iam useing Spire Doc library to create word templete SO,
-in the below example , 1st table in page-1 has been chosen to make find and replace operation ,,,
-i created word documents like these but i need to choose 1st table in Page-2
-,,, Thanks in advance for your support
-Output Word
-[1]: https://i.sstatic.net/UCoY0.jpg
-import com.spire.doc.*;  
-import com.spire.doc.documents.Paragraph;  
-import com.spire.doc.documents.TextSelection;  
-import com.spire.doc.fields.DocPicture;  
-import com.spire.doc.fields.TextRange;  
-import java.util.HashMap;  
-import java.util.Map;  
-  
-public class CreateByReplacingPlaceholderText {  
-    public static void main(String []args){  
-        //Load the template document  
-        Document document = new Document(""PlaceholderTextTemplate.docx"");  
-        //Get the first section  // What does Section mean in Word, and how can I select a section in Word?
-        Section section = document.getSections().get(0);  
-        //Get the first table in the section  
-        Table table = section.getTables().get(0);  
-  
-        //Create a map of values for the template  
-        Map<String, String> map = new HashMap<String, String>();  
-        map.put(""firstName"",""Alex"");  
-        map.put(""lastName"",""Anderson"");  
-        map.put(""gender"",""Male"");  
-        map.put(""mobilePhone"",""+0044 85430000"");  
-        map.put(""email"",""alex.anderson@myemail.com"");  
-        map.put(""homeAddress"",""123 High Street"");  
-        map.put(""dateOfBirth"",""6th June, 1986"");  
-        map.put(""education"",""University of South Florida, September 2013 - June 2017"");  
-        map.put(""employmentHistory"",""Automation Inc. November 2013 - Present"");  
-  
-        //Call the replaceTextinTable method to replace text in table  
-        replaceTextinTable(map, table);  
-        // Call the replaceTextWithImage method to replace text with image  
-        replaceTextWithImage(document, ""photo"", ""Avatar.jpg"");  
-  
-        //Save the result document  
-        document.saveToFile(""CreateByReplacingPlaceholder.docx"", FileFormat.Docx_2013);  
-    }  
-  
-    //Replace text in table  
-    static void replaceTextinTable(Map<String, String> map, Table table){  
-        for(TableRow row:(Iterable<TableRow>)table.getRows()){  
-            for(TableCell cell : (Iterable<TableCell>)row.getCells()){  
-                for(Paragraph para : (Iterable<Paragraph>)cell.getParagraphs()){  
-                    for (Map.Entry<String, String> entry : map.entrySet()) {  
-                        para.replace(""${"" + entry.getKey() + ""}"", entry.getValue(), false, true);  
-                    }  
-                }  
-            }  
-        }  
-    }  
-  
-    //Replace text with image  
-    static  void replaceTextWithImage(Document document, String stringToReplace, String imagePath){  
-        TextSelection[] selections = document.findAllString(""${"" + stringToReplace + ""}"", false, true);  
-        int index = 0;  
-        TextRange range = null;  
-        for (Object obj : selections) {  
-            TextSelection textSelection = (TextSelection)obj;  
-            DocPicture pic = new DocPicture(document);  
-            pic.loadImage(imagePath);  
-            range = textSelection.getAsOneRange();  
-            index = range.getOwnerParagraph().getChildObjects().indexOf(range);  
-            range.getOwnerParagraph().getChildObjects().insert(index,pic);  
-            range.getOwnerParagraph().getChildObjects().remove(range);  
-        }  
-    }  
-  
-    //Replace text in document body  
-    static void replaceTextinDocumentBody(Map<String, String> map, Document document){  
-        for(Section section : (Iterable<Section>)document.getSections()) {  
-            for (Paragraph para : (Iterable<Paragraph>) section.getParagraphs()) {  
-                for (Map.Entry<String, String> entry : map.entrySet()) {  
-                    para.replace(""${"" + entry.getKey() + ""}"", entry.getValue(), false, true);  
-                }  
-            }  
-        }  
-    }  
-  
-    //Replace text in header or footer  
-    static  void replaceTextinHeaderorFooter(Map<String, String> map, HeaderFooter headerFooter){  
-        for(Paragraph para : (Iterable<Paragraph>)headerFooter.getParagraphs()){  
-            for (Map.Entry<String, String> entry : map.entrySet()) {  
-                para.replace(""${"" + entry.getKey() + ""}"", entry.getValue(), false, true);  
-            }  
-        }  
-    }  
-}  
-
-","1. There is no ""page"" definition in Spire.Doc because MS Word documents are actually ""flow"" documents. But there might be a way to achieve what you want, that is to set a title for each table in the Word template (right-click on the table->Table properties->Alt text->Title), then loop through the tables in the section, and find the desired table by its title using Table.getTitle() method.
-",SPIRE
-"I have an excel template that has preset formulas, then have my wpf application fills in data in other sheets then the preset formulas takes the data from the other sheet and shows it on the main page.
-The problem is when I automatically PDF the excel most formulas go through but other's give me a System.Object[][] or some other errors. But when I access the excel file with the dataset it works.
-The difference between the formulas that go through and the ones that don't are the ones that have an if() statement to remove all non zeroes in a range like this.
-=TEXTJOIN(""
-"", TRUE, TEXT(IF(Details!O:O>0,Details!O:O,""""), ""HH:MM""))
-
-Functions like this works:
-=TEXTJOIN(""
-"",TRUE,Details!D:D)
-
-How do I get Spire.xls to render the PDF in the right format?
-","1. As you tested the Excel file and found it working fine, the problem is likely located inside the Spire rendering.
-You can of course report a bug over there but that might not get resolved instantly.
-You may want to try applying a number format to the cells for zero values instead of using that if part in the formula.
-Something like this could do when writing with ClosedXML if implemented correctly by Spire:
-worksheet.Cell(row, column).Style.NumberFormat.Format = ""#,##0;[Red]-#,##0;\""\"";@"";
-
-Formatting possibilities are somewhat documented by Microsoft; take a look over there.
-",SPIRE
-"I was trying to deploy an application with helm on argocd , and this is my case .
-I want to deploy vault using helm and
-i use hashicorp's vault chart as base chart and overriding the values using sub-chart
-And the base chart has conditions on creating services, PVC , etc..
-The values are override on the argocd still the service exists even the condition is made false by boolean
-Chart.yml
-apiVersion: v2
-name: keycloak
-type: application
-version: 1.0.0
-dependencies:
-  - name: keycloak
-    version: ""9.7.3""
-    repository: ""https://charts.bitnami.com/bitnami""
-
-Argocd.yml
-apiVersion: argoproj.io/v1alpha1
-kind: Application
-metadata:
-  name: vault
-  namespace: vault
-spec:
-  project: default
-  source:
-    chart: vault
-    repoURL: https://github.com/myrepo.git
-    targetRevision: HEAD
-  destination:
-    server: ""https://kubernetes.default.svc""
-    namespace: kubeseal
-
-","1. Depends how you are overriding values in your chart and This is more of helm rather than ArgoCD.
-Considering the Chart.yaml as below and chart name being infra which also has keycloak as dependency subchart:
-apiVersion: v2
-name: infra
-type: application
-version: 1.0.0
-dependencies:
-  - name: keycloak
-    version: ""9.7.3""
-    repository: ""https://charts.bitnami.com/bitnami""
-
-Create a values file in the same directory as your Chart.yaml with the following contents:
-keycloak:
-  fullnameOverride: keycloak-1
-
-Here the keycloak: key in the values file sets the values for the subchart named keycloak.
-You can have multiple subchart value overrides like the above in one values file.
-",Vault
-"
-I have a local setup of Trino, Hive Metastore, and Minio storage. I have enabled and configured Alluxio caching and disk spilling on Trino. The number of requests made to the object storage is higher than expected, given that I am only testing on a few megabytes of Parquet files.
-What could be the problem, and what is the solution?
-Here are my configurations in /etc/trino/config.properties.
-coordinator=true
-node-scheduler.include-coordinator=true
-http-server.http.port=8080
-discovery.uri=http://localhost:8080
-catalog.management=${ENV:CATALOG_MANAGEMENT}
-query.max-memory=2GB
-query.max-memory-per-node=700MB
-exchange.http-client.max-requests-queued-per-destination=999999
-scheduler.http-client.max-requests-queued-per-destination=999999
-exchange.http-client.request-timeout=30s
-task.info-update-interval=2s
-spill-enabled=true
-spiller-spill-path=/tmp/spill
-spiller-max-used-space-threshold=0.7
-spiller-threads= 16
-max-spill-per-node=100GB
-query-max-spill-per-node=100GB
-aggregation-operator-unspill-memory-limit=15MB
-spill-compression-codec=LZ4
-spill-encryption-enabled=false
-
-Here are my catalog configurations in /etc/trino/catalog/hive.properties
-connector.name=hive
-hive.metastore=thrift
-hive.metastore.uri=thrift://hive-metastore:9083
-hive.s3.path-style-access=true
-hive.s3.endpoint=http://minio:9000
-hive.s3.aws-access-key=XXX
-hive.s3.aws-secret-key=XXX
-hive.non-managed-table-writes-enabled=true
-hive.s3.ssl.enabled=false
-hive.s3.max-connections=1000
-hive.metastore.thrift.client.read-timeout=3000s
-hive.timestamp-precision=MILLISECONDS
-hive.collect-column-statistics-on-write=false
-hive.storage-format=PARQUET
-hive.security=allow-all
-fs.cache.enabled=true
-fs.cache.directories=/tmp/cache
-fs.cache.max-disk-usage-percentages=70
-fs.cache.ttl=32d
-fs.cache.preferred-hosts-count=5
-fs.cache.page-size=15MB
-
-Thanks in advance.
-
--- Edit: Test with both cache and spilling disabled
-
-Disabling caching and spilling has affected latency and throughput, and caused far more GetObject requests:
-
-
--- Edit: Cache Tracing --
-
-I have enabled cache tracing, and the cache is being hit.
-
-
-
-
-Instructions to enable tracing were found at: File System Cache - Monitoring
-Setup in Docker Compose is simply:
-
-
-  jaeger:
-    image: jaegertracing/all-in-one:latest
-    hostname: jaeger
-    ports:
-      - ""16686:16686""
-      - ""4317:4317""
-    environment:
-      - COLLECTOR_OTLP_ENABLED=true
-
-
-And /etc/trino/config.properties
-################################# Tracing
-tracing.enabled=true
-tracing.exporter.endpoint=http://jaeger:4317
-
-
--- Edit: Provide more context Part II--
-
-Based on Slack chat, the following has been confirmed:
-
-Alluxio is used internally in Trino, so no need to follow further tutorials on setting Alluxio standalone/edge etc.
-Caching is supported only on workers, but still my original problem happens when I am using one master and three workers, and I revised the configurations.
-I have disabled spilling, and the Minio traffic is still high.
-
-On Trino startup, I get:
-024-05-08T09:14:37.105Z INFO    main    Bootstrap   hive.s3.security-mapping.refresh-period              ----        ----                  How often to refresh the security mapping configuration
-2024-05-08T09:14:37.106119355Z 2024-05-08T09:14:37.105Z INFO    main    Bootstrap   hive.s3.security-mapping.iam-role-credential-name    ----        ----                  Name of the extra credential used to provide IAM role
-2024-05-08T09:14:37.106120682Z 2024-05-08T09:14:37.105Z INFO    main    Bootstrap   jmx.base-name                                        ----        ----
-2024-05-08T09:14:37.237666777Z 2024-05-08T09:14:37.237Z INFO    main    alluxio.client.file.cache.PageStore Opening PageStore with option=alluxio.client.file.cache.store.PageStoreOptions@780e214b
-2024-05-08T09:14:37.256328387Z 2024-05-08T09:14:37.256Z INFO    pool-51-thread-1    alluxio.client.file.cache.LocalCacheManager Restoring PageStoreDir (/tmp/alluxio_cache/LOCAL)
-2024-05-08T09:14:37.257456265Z 2024-05-08T09:14:37.257Z INFO    pool-51-thread-1    alluxio.client.file.cache.LocalCacheManager PageStore (/tmp/alluxio_cache/LOCAL) restored with 0 pages (0 bytes), discarded 0 pages (0 bytes)
-2024-05-08T09:14:37.257498782Z 2024-05-08T09:14:37.257Z INFO    pool-51-thread-1    alluxio.client.file.cache.LocalCacheManager Cache is in READ_WRITE.
-2024-05-08T09:14:37.552235866Z 2024-05-08T09:14:37.551Z INFO    main    org.ishugaliy.allgood.consistent.hash.HashRing  Ring [hash_ring_8605] created: hasher [METRO_HASH], partitionRate [1000]
-2024-05-08T09:14:37.566095594Z 2024-05-08T09:14:37.565Z INFO    main    org.ishugaliy.allgood.consistent.hash.HashRing  Ring [hash_ring_8605]: node [TrinoNode[nodeIdentifier=5b97f235d043, hostAndPort=172.23.0.12:8080]] added
-2024-05-08T09:14:37.721904967Z 2024-05-08T09:14:37.721Z DEBUG   main    io.trino.connector.CoordinatorDynamicCatalogManager -- Added catalog hive using connector hive --
-2024-05-08T09:14:37.724651195Z 2024-05-08T09:14:37.724Z INFO    main    io.trino.security.AccessControlManager  Using system access control: default
-2024-05-08T09:14:37.735207694Z 2024-05-08T09:14:37.734Z INFO    main    io.trino.server.Server  Server startup completed in 14.36s
-2024-05-08T09:14:37.735276670Z 2024-05-08T09:14:37.734Z INFO    main    io.trino.server.Server  ======== SERVER STARTED ========
-
-
--- Edit: Provide more context Part I--
-
-The test simply runs a few simple queries in a loop, so regardless of partitioning or query join cardinalities, the caching should have been triggered.
-
-By ""expected"" I meant, I would have expected a 10-megabyte dataset not to cause 60 GB of traffic, as in the screenshot below.
-
-I assume that the Alluxio cache is bypassed altogether, so I doubt that it is only a matter of configuration, as one might infer from:
-Trino: File System Cache
-
-By consulting more references, it's clear that more steps should be included:
-
-Alluxio: Running Trino with Alluxio - Edge
-Alluxio: Running Trino with Alluxio - Stable
-Trino: Alluxio Tutorial 
-Alluxio: Integrate Alluxio
-
-
-Besides, there may be compatibility issues, since I am on Trino 445 and the Alluxio docs mention that the steps were only tested against an earlier version of Trino:
-Deploy Trino. This guide is tested with Trino-352.
-
-
-Here is a distribution of the requests Trino makes to Minio, bypassing any caching. I include it in case it hints the Trino team towards an optimization idea; HEAD requests in particular are triggered very often:
-
-
-Finally, after following the steps in the above mentioned tutorials, the following error occurs:
-trino> CREATE SCHEMA hive.lakehouse
-    ->     WITH (location = 'alluxio://alluxio-leader:19998/lakehouse');
-Query 20240508_071745_00002_vyeug failed: Invalid location URI: alluxio://alluxio-leader:19998/lakehouse
-
-
-Logs:
-2024-05-08 09:17:45 2024-05-08T07:17:45.210Z    DEBUG   dispatcher-query-4      io.trino.security.AccessControl Invocation of checkCanSetUser(principal=Optional[trino], userName='trino') succeeded in 27.47us
-2024-05-08 09:17:45 2024-05-08T07:17:45.210Z    DEBUG   dispatcher-query-4      io.trino.security.AccessControl Invocation of checkCanExecuteQuery(identity=Identity{user='trino', principal=trino}, queryId=20240508_071745_00002_vyeug) succeeded in 13.49us
-2024-05-08 09:17:45 2024-05-08T07:17:45.211Z    DEBUG   dispatcher-query-5      io.trino.execution.QueryStateMachine    Query 20240508_071745_00002_vyeug is QUEUED
-2024-05-08 09:17:45 2024-05-08T07:17:45.211Z    DEBUG   dispatcher-query-1      io.trino.execution.QueryStateMachine    Query 20240508_071745_00002_vyeug is WAITING_FOR_RESOURCES
-2024-05-08 09:17:45 2024-05-08T07:17:45.212Z    DEBUG   dispatcher-query-1      io.trino.execution.QueryStateMachine    Query 20240508_071745_00002_vyeug is DISPATCHING
-2024-05-08 09:17:45 2024-05-08T07:17:45.212Z    DEBUG   dispatcher-query-1      io.trino.execution.QueryStateMachine    Query 20240508_071745_00002_vyeug is RUNNING
-2024-05-08 09:17:45 2024-05-08T07:17:45.213Z    DEBUG   Query-20240508_071745_00002_vyeug-172   io.trino.security.AccessControl Invocation of checkCanCreateSchema(context=SecurityContext{identity=Identity{user='trino', principal=trino}, queryId=20240508_071745_00002_vyeug}, schemaName=hive.lakehouse, properties={location=alluxio://alluxio-leader:19998/lakehouse}) succeeded in 68.04us
-2024-05-08 09:17:45 2024-05-08T07:17:45.223Z    DEBUG   Query-20240508_071745_00002_vyeug-172   io.trino.plugin.hive.metastore.thrift.ThriftHiveMetastoreClient Invocation of getDatabase(name='lakehouse') took 7.32ms and failed with NoSuchObjectException(message:database hive.lakehouse)
-2024-05-08 09:17:45 2024-05-08T07:17:45.224Z    DEBUG   dispatcher-query-1      io.trino.execution.QueryStateMachine    Query 20240508_071745_00002_vyeug is FAILED
-2024-05-08 09:17:45 2024-05-08T07:17:45.224Z    DEBUG   Query-20240508_071745_00002_vyeug-172   io.trino.execution.QueryStateMachine    Query 20240508_071745_00002_vyeug failed
-2024-05-08 09:17:45 io.trino.spi.TrinoException: Invalid location URI: alluxio://alluxio-leader:19998/lakehouse
-2024-05-08 09:17:45     at io.trino.plugin.hive.HiveMetadata.lambda$createSchema$22(HiveMetadata.java:954)
-2024-05-08 09:17:45     at java.base/java.util.Optional.map(Optional.java:260)
-2024-05-08 09:17:45     at io.trino.plugin.hive.HiveMetadata.createSchema(HiveMetadata.java:949)
-2024-05-08 09:17:45     at io.trino.plugin.base.classloader.ClassLoaderSafeConnectorMetadata.createSchema(ClassLoaderSafeConnectorMetadata.java:417)
-2024-05-08 09:17:45     at io.trino.tracing.TracingConnectorMetadata.createSchema(TracingConnectorMetadata.java:348)
-2024-05-08 09:17:45     at io.trino.metadata.MetadataManager.createSchema(MetadataManager.java:769)
-2024-05-08 09:17:45     at io.trino.tracing.TracingMetadata.createSchema(TracingMetadata.java:373)
-2024-05-08 09:17:45     at io.trino.execution.CreateSchemaTask.internalExecute(CreateSchemaTask.java:128)
-2024-05-08 09:17:45     at io.trino.execution.CreateSchemaTask.execute(CreateSchemaTask.java:82)
-2024-05-08 09:17:45     at io.trino.execution.CreateSchemaTask.execute(CreateSchemaTask.java:54)
-2024-05-08 09:17:45     at io.trino.execution.DataDefinitionExecution.start(DataDefinitionExecution.java:146)
-2024-05-08 09:17:45     at io.trino.execution.SqlQueryManager.createQuery(SqlQueryManager.java:272)
-2024-05-08 09:17:45     at io.trino.dispatcher.LocalDispatchQuery.startExecution(LocalDispatchQuery.java:145)
-2024-05-08 09:17:45     at io.trino.dispatcher.LocalDispatchQuery.lambda$waitForMinimumWorkers$2(LocalDispatchQuery.java:129)
-2024-05-08 09:17:45     at io.airlift.concurrent.MoreFutures.lambda$addSuccessCallback$12(MoreFutures.java:570)
-2024-05-08 09:17:45     at io.airlift.concurrent.MoreFutures$3.onSuccess(MoreFutures.java:545)
-2024-05-08 09:17:45     at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1137)
-2024-05-08 09:17:45     at io.trino.$gen.Trino_445____20240508_071639_2.run(Unknown Source)
-2024-05-08 09:17:45     at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
-2024-05-08 09:17:45     at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
-2024-05-08 09:17:45     at java.base/java.lang.Thread.run(Thread.java:1570)
-2024-05-08 09:17:45 Caused by: org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme ""alluxio""
-2024-05-08 09:17:45     at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3553)
-2024-05-08 09:17:45     at io.trino.hdfs.TrinoFileSystemCache.createFileSystem(TrinoFileSystemCache.java:155)
-2024-05-08 09:17:45     at io.trino.hdfs.TrinoFileSystemCache$FileSystemHolder.createFileSystemOnce(TrinoFileSystemCache.java:298)
-2024-05-08 09:17:45     at io.trino.hdfs.TrinoFileSystemCache.getInternal(TrinoFileSystemCache.java:140)
-2024-05-08 09:17:45     at io.trino.hdfs.TrinoFileSystemCache.get(TrinoFileSystemCache.java:91)
-2024-05-08 09:17:45     at org.apache.hadoop.fs.ForwardingFileSystemCache.get(ForwardingFileSystemCache.java:39)
-2024-05-08 09:17:45     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:557)
-2024-05-08 09:17:45     at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
-2024-05-08 09:17:45     at io.trino.hdfs.HdfsEnvironment.lambda$getFileSystem$0(HdfsEnvironment.java:110)
-2024-05-08 09:17:45     at io.trino.hdfs.authentication.NoHdfsAuthentication.doAs(NoHdfsAuthentication.java:25)
-2024-05-08 09:17:45     at io.trino.hdfs.HdfsEnvironment.getFileSystem(HdfsEnvironment.java:109)
-2024-05-08 09:17:45     at io.trino.hdfs.HdfsEnvironment.getFileSystem(HdfsEnvironment.java:102)
-2024-05-08 09:17:45     at io.trino.filesystem.hdfs.HdfsFileSystem.directoryExists(HdfsFileSystem.java:254)
-2024-05-08 09:17:45     at io.trino.filesystem.manager.SwitchingFileSystem.directoryExists(SwitchingFileSystem.java:117)
-2024-05-08 09:17:45     at io.trino.filesystem.cache.CacheFileSystem.directoryExists(CacheFileSystem.java:104)
-2024-05-08 09:17:45     at io.trino.filesystem.tracing.TracingFileSystem.lambda$directoryExists$5(TracingFileSystem.java:119)
-2024-05-08 09:17:45     at io.trino.filesystem.tracing.Tracing.withTracing(Tracing.java:47)
-2024-05-08 09:17:45     at io.trino.filesystem.tracing.TracingFileSystem.directoryExists(TracingFileSystem.java:119)
-2024-05-08 09:17:45     at io.trino.plugin.hive.HiveMetadata.lambda$createSchema$22(HiveMetadata.java:951)
-2024-05-08 09:17:45     ... 20 more
-2024-05-08 09:17:45 
-2024-05-08 09:17:45 
-2024-05-08 09:17:45 2024-05-08T07:17:45.225Z    INFO    dispatcher-query-1      io.trino.event.QueryMonitor     TIMELINE: Query 20240508_071745_00002_vyeug :: FAILED (INVALID_SCHEMA_PROPERTY) :: elapsed 12ms :: planning 12ms :: waiting 0ms :: scheduling 0ms :: running 0ms :: finishing 0ms :: begin 2024-05-08T07:17:45.211Z :: end 2024-05-08T07:17:45.223Z
-2024-05-08 09:17:45 2024-05-08T07:17:45.430Z    DEBUG   http-client-node-manager-63     io.trino.connector.CatalogPruneTask     Pruned catalogs on server: http://172.21.0.14:8080/v1/task/pruneCatalogs
-
-The Alluxio jars (I have even added more, one by one, by trial and error) are included in both the Trino and the Hive Metastore containers:
-[trino@4515602f2e82 hive]$ pwd
-/usr/lib/trino/plugin/hive
-
-[trino@4515602f2e82 hive]$ ls -ll | grep allu
--rwxrwxrwx  1 root  root         0 May  8 06:48 alluxio-2.9.3-client.jar
--rwxrwxrwx  1 root  root  90338926 Mar 24  2023 alluxio-2.9.3-hadoop2-client.jar
--rw-r--r--  4 trino trino   519152 Jan  1  1980 alluxio-core-client-fs-312.jar
--rw-r--r--  4 trino trino  1499627 Jan  1  1980 alluxio-core-common-312.jar
--rw-r--r--  4 trino trino  6097283 Jan  1  1980 alluxio-core-transport-312.jar
--rwxrwxrwx  1 root  root  90338925 May  8 07:06 alluxio-shaded-client-2.9.3.jar
--rw-r--r--  4 trino trino    34723 Jan  1  1980 trino-filesystem-cache-alluxio-445.jar
-
-
-And
-","1. I would suggest to NOT mix spilling and file system caching for starters, they are not designed to work together. Beyond that I would say that it completely depends on what your queries are, what data they have to access, how your files and partitions are structured and how you define ""expected"". So I really cant answer with any more detail at this stage.
-",Alluxio
-"We are using Alluxio(alluxio-2.8.1), and very curious to see and understand what version of log4j used in it. Please suggest where we can get that information.
-","1. 
-According to this URL https://github.com/Alluxio/alluxio/blob/master/pom.xml, the log4j version may be 2.17.1.
-Secondly, in the archive you can find the assembly directory; extract some-thing-server.jar and look for the log4j classes.
-Thirdly, you may be able to extract it from the running log, or set the log level to DEBUG.
-
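-For example, one quick way to look for the bundled log4j classes is to inspect the jar with a small Python script (a minimal sketch; the jar name below is only an assumed example, point it at the server/assembly jar shipped with your Alluxio 2.8.1 install):
-
-import zipfile
-
-# Hypothetical jar path; adjust it to the assembly/server jar in your Alluxio distribution.
-jar_path = 'alluxio-assembly-server-2.8.1-jar-with-dependencies.jar'
-with zipfile.ZipFile(jar_path) as jar:
-    # Print every entry whose name mentions log4j (classes, metadata, shaded copies).
-    for name in sorted(n for n in jar.namelist() if 'log4j' in n.lower()):
-        print(name)
-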
-",Alluxio
-"is it possible to mount HDFS data on Alluxio and have Alluxio copy/presist data onto s3 bucket?? or use Alluxio to copy data between HDFS and S3 (without storing data in Alluxio cache)?
-","1. So you can mount multiple ""under stores"" to Alluxio (one hdfs and one S3)  and have data move between the two under stores either through explicit actions, or in some cases using some rules and automation to instigate the transfer of data (if you are using Alluxio enterprise).  It would end up storing the data in the Alluxio cache as it transfers, but you can certainly have data move.
-",Alluxio
-"When mounted an s3 bucket under alluxio://s3/, the bucket already has objects. However, when I get the directory list (either by alluxio fs ls or ls the  fuse-mounted directory or on the web ui) i see no files. When I write a new file or read an already existing object via Alluxio, it appears in the dir list. Is there a way I can have Alluxio show all the not-yet-accessed files in the directory? (rather than only showing files after writing or accessing them)
-","1. a simple way is to run bin/alluxio fs loadMetadata /s3 to force refresh the Alluxio directory. There are other ways to trigger it, checkout “How to Trigger Metadata Sync” section in this latest blog:
-https://www.alluxio.io/blog/metadata-synchronization-in-alluxio-design-implementation-and-optimization/
-",Alluxio
-"I am new to Ceph Storage but ready to learn.
-I have been having this problem for four days now and just can't solve it.
-I followed the steps in: https://docs.ceph.com/en/latest/man/8/ceph-authtool/
-The issue is that I can't authenticate against the Ceph cluster:
-root@node3 ceph]# ceph auth import -i ceph.admin.keyring
-2022-05-10T14:34:37.998-0700 7f0637fff700 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2,1]
-2022-05-10T14:34:37.998-0700 7f0636ffd700 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2,1]
-2022-05-10T14:34:40.998-0700 7f0636ffd700 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2,1]
-2022-05-10T14:34:40.998-0700 7f0637fff700 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2,1]
-2022-05-10T14:34:43.998-0700 7f0637fff700 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2,1]
-2022-05-10T14:34:43.998-0700 7f06377fe700 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2,1]
-2022-05-10T14:34:46.998-0700 7f0637fff700 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2,1]
-2022-05-10T14:34:46.998-0700 7f06377fe700 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2,1]
-
-My directory
-[root@node3 ceph]# ls -ls
-total 40
-0 -rw-r--r--. 1 root root    0 May  5 00:15 admin.secret
-0 drwxr-xr-x. 2 root root    6 May  4 00:36 cephadm
-4 -rw-r--r--. 1 root root  144 May 10 14:33 ceph.admin.keyring
-4 -rw-------. 1 ceph ceph  131 Aug 23  2021 ceph.client.crash.keyring
-4 -rw-r--r--. 1 ceph ceph  958 May 10 12:39 ceph.conf
-4 -rw-r--r--. 1 root root  460 May  3 12:21 ceph.conf.ak
-4 -rw-rw-r--. 1 ceph ceph 1302 Aug 23  2021 ceph-dashboard.crt
-4 -rw-------. 1 ceph ceph 1704 Aug 23  2021 ceph-dashboard.key
-4 -rw-------. 1 root root   41 May  5 13:30 ceph.key
-4 -rw-r--r--. 1 root root  145 May 10 14:27 keyring
-4 -rw-------. 1 root root   56 May  2 11:31 keyring.mds.3
-4 -rw-r--r--. 1 root root   92 Aug  5  2021 rbdmap
-[root@node3 ceph]#
-
-I really do need help with this.
-","1. i had exactly the same error
-     # microceph.ceph  -n client.vasya  -s
-
-handle_auth_bad_method server allowed_methods [2] but i only support [2,1]
-
-The issue was that I had named the keyring file incorrectly,
-that is:
-wrong file name => ceph.client.vasya.keyring.conf
-
-I renamed the file:
-correct file name => ceph.client.vasya.keyring
-
-That solved the problem.
-To make sure Ceph finds the keyring file, you can specify it on the command line:
-# microceph.ceph  -n client.vasya   --keyring=/var/snap/microceph/current/conf/ceph.client.vasya.keyring -s
-
-
-2. I solved the same error just by specifying the keyring in the commmand line:
-ceph auth get-key client.admin --keyring=ceph.client.admin.keyring
-
-The .keyring file was in the /etc/ceph directory and I was executing the command in /etc/ceph, but it still required the --keyring option.
-",Ceph
-"I tried to drain the host by running
-sudo ceph orch host drain node-three
-
-But it stuck at removing osd with the below status
-node-one@node-one:~$ sudo ceph orch osd rm status
-OSD  HOST        STATE     PGS  REPLACE  FORCE  ZAP    DRAIN STARTED AT            
-2    node-three  draining    1  False    False  False  2024-04-20 20:30:34.689946 
-
-It's a test set-up and I don't have anything written to the OSD.
-Here is my ceph status
-node-one@node-one:~$ sudo ceph status
-  cluster:
-    id:     f5ac585a-fe8e-11ee-9452-79c779548dac
-    health: HEALTH_OK
- 
-  services:
-    mon: 2 daemons, quorum node-one,node-two (age 21m)
-    mgr: node-two.zphgll(active, since 9h), standbys: node-one.ovegfw
-    osd: 3 osds: 3 up (since 42m), 3 in (since 42m); 1 remapped pgs
- 
-  data:
-    pools:   1 pools, 1 pgs
-    objects: 2 objects, 577 KiB
-    usage:   81 MiB used, 30 GiB / 30 GiB avail
-    pgs:     2/6 objects misplaced (33.333%)
-             1 active+clean+remapped
-
-Is it normal for an orch osd rm drain to take so long?
-","1. With only 3 OSDs and a default crush rule with replicated size 3 there's no target where to drain the OSD to. If this is just a test cluster you could to reduce min_size to 1 and size to 2. But please don't ever do that in production.
-",Ceph
-"I am planning to use CEPH as storage of video files. RHEL CEPH provides options to store directly using librados or using RGW. I am curious to know which implementation is used more in the industry. Specifically if I do GET/PUT/DELETE operation from springboot microservice.
-","1. RGW exposes an S3 interface, whereas RADOS exposes its own object protocol.
-While coding to RADOS directly has some advantages, coding to S3 benefits from a large ecosystem where you can choose any language and pick between literally hundreds of tools and libraries.
-99% of Ceph users write/deploy applications that interact with RGW when using object stores. Only 1-2% write custom applications via RADOS.
-The advice here is to use RGW’s S3 interface, and only consider RADOS for specialized, high-performance, tailored applications.
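-Since the question mentions GET/PUT/DELETE from application code, here is a minimal sketch of those three calls against an RGW S3 endpoint, shown with Python's boto3 for brevity (the endpoint, credentials, bucket, and key are placeholders, not values from the question):
-
-import boto3
-
-# Placeholder endpoint and credentials for an assumed RGW setup.
-s3 = boto3.client(
-    's3',
-    endpoint_url='http://rgw.example.com:8080',
-    aws_access_key_id='ACCESS_KEY',
-    aws_secret_access_key='SECRET_KEY',
-)
-
-s3.put_object(Bucket='videos', Key='clip.mp4', Body=b'...video bytes...')  # PUT
-obj = s3.get_object(Bucket='videos', Key='clip.mp4')                       # GET
-s3.delete_object(Bucket='videos', Key='clip.mp4')                          # DELETE
-
-The same pattern maps directly onto the AWS SDK in a Spring Boot service, since RGW speaks the standard S3 API.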
-",Ceph
-"I am studying about Replication in Ceph.
-I can't find about max number of copies in replication.
-I guess maximum value depends on size of object, but I want to know about max value approximately.
-I just figured out minimum and default(also recommended) value of replication is 2 and 3 in several documents
-","1. The smallest entity for replication is the OSD (where the placement groups are stored), so in theory one could set the pool size to the number of OSDs (ignoring the fact that it doesn't make any sense). It doesn't depend on the object size.
-And just to avoid misinterpretation regarding the minimal size: what you refer to is the recommended pool min_size, which is something like a safety switch to prevent data loss while the cluster is degraded (e.g. if not all OSDs are up). The recommendation for replicated pools is min_size = 2 and size = 3. It means that each object is stored three times, and as long as at least two of those replicas are available, you can read/write from/to the cluster.
-You can reduce a pool's size (let's say to 2) for experimental reasons or if you only store not important data. But if you value your data, don't use a pool size of 2. There are plenty of explanations and discussions about it.
-Erasure-coded pools have a default min_size = k + 1; this is also a sort of safety switch. The size is k + m, where k is the number of data chunks and m is the number of coding chunks; they are determined by the profile you create. For example, with k = 4 and m = 2 the pool size is 6 and the default min_size is 5.
-",Ceph
-"Having the following information:
-
-Origin point: Point(lat_origin, long_origin)
-End point: Point(lat_end, long_end)
-Center point: Point(lat_center, long_center)
-Distance: 100
-Bearing: 90º
-
-from shapely.geometry import Point
-origin_point = Point(...,...)
-end_point = Point(...,...)
-center_point = Point(...,...)
-distance = 100
-bearing = 90
-
-I would like to generate an approximation of the arc that is as close as possible while using as few points as possible, obtaining the coordinates of this approximation.
-A useful feature would be to be able to control the error tolerance and to dynamically adjust the number of points used to approximate the arc.
-We must keep in mind that we are working with geographic coordinates, so we cannot ignore the curvature of the surface.
-The expected output would be a function that obtains as inputs, the origin point, the end point, center point, distance, bearing and optionally the error tolerance and returns as output a series of point coordinates from the original point to the end point that approximately form the required arc.
-Related links:
-https://gis.stackexchange.com/questions/326871/generate-arc-from-projection-coordinates
-Any help would be greatly appreciated.
-","1. https://www.igismap.com/formula-to-find-bearing-or-heading-angle-between-two-points-latitude-longitude/
-import math
-import numpy as np
-from shapely.geometry import Point, LineString
-
-def get_bearing(center_point, end_point):
-    
-    lat3 = math.radians(end_point[0])
-    long3 = math.radians(end_point[1])
-    lat1 = math.radians(center_point[0])
-    long1 = math.radians(center_point[1])
-    
-    dLon = long3 - long1
-    
-    X = math.cos(lat3) * math.sin(dLon)
-    Y = math.cos(lat1) * math.sin(lat3) - math.sin(lat1) * math.cos(lat3) * math.cos(dLon)
-    
-    end_brng = math.atan2(X, Y)
-    
-    return end_brng
-
-def get_arc_coordinates(center_point, origin_point, end_point, brng_init, distance):
-    '''
-    center_point: (center_latitude, center_long) 
-    origin_point: (origin_latitude, origin_long) 
-    end_point: (end_latitude, end_long)
-    brng_init: degrees
-    distance: nautical miles
-    '''
-    
-    brng_init = math.radians(brng_init) #Bearing in degrees converted to radians.
-    d = distance * 1.852 #Distance in km
-    
-    R = 6378.1 #Radius of the Earth
-    brng = get_bearing(center_point,end_point) #Bearing from the center point to the end point, in radians
-    
-    list_bearings = np.arange(brng, brng_init, 0.1) # 0.1 step; adjust it to trade accuracy (tolerance) against the number of points
-    
-    coordinates = []
-    
-    for i in list_bearings:
-        lat1 = math.radians(center_point[0]) #Center lat point converted to radians
-        lon1 = math.radians(center_point[1]) #Center long point converted to radians
-        brng = i
-        lat2 = math.asin( math.sin(lat1)*math.cos(d/R) +
-             math.cos(lat1)*math.sin(d/R)*math.cos(brng))
-        
-        lon2 = lon1 + math.atan2(math.sin(brng)*math.sin(d/R)*math.cos(lat1),
-                     math.cos(d/R)-math.sin(lat1)*math.sin(lat2))
-        
-        lat2 = math.degrees(lat2)
-        lon2 = math.degrees(lon2)
-        
-        coordinates.append(Point(lat2, lon2))
-
-    return LineString(coordinates)
-
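-A small usage example for the function above (the coordinate values are arbitrary samples chosen only to show the call; note that the helpers index the points, so plain (lat, lon) tuples are passed rather than shapely Points):
-
-origin_point = (48.8566, 2.3522)
-center_point = (48.8600, 2.3600)
-end_point = (48.8650, 2.3700)
-arc = get_arc_coordinates(center_point, origin_point, end_point, brng_init=90, distance=10)
-print(list(arc.coords))  # list of (lat, lon) tuples approximating the arc
-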
-",Curve
-"I am trying to get a decay curve fitted to this data to get the values for other distances in R.
-
-I have tried a few self-starting models (SSmicem, SSsymp), but these don't seem to appreciate that I want this to be a decay curve, and I end up with relatively large values towards the tail end.
-I would like the model to approach an asymptote close to 0, but all models I have produced give large values at the tail end (larger than the observed values).
-Is there a way I can set this asymptote?
-","1. Did you draw the points in order to observe the shape of an expected curve ? Drawing it in log-log coordinates makes the inspection easier.
-By inspection one see that there is only one point in the range 0.03<y<0.5 which is a wide range (Thus very badly defined ). As a consequence one cannot expect a reliable fitting.
-Certainly they are many kind of functions which could be fitted. Without clue about the kind of function expected from physical consideration a chosen function will have no physical signifiance. For example the function below is probably without practical interest even with a not too bad fitting.
-
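-To force the asymptote to 0, the usual trick is to fit a model with no additive offset term at all, for example y = a*exp(-k*x). The question is about R (where the same model can be fitted with nls), but here is a minimal sketch of the idea using Python and scipy; the data values are made up purely for illustration:
-
-import numpy as np
-from scipy.optimize import curve_fit
-
-# Made-up sample data standing in for the observed distance/response values.
-x = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
-y = np.array([0.90, 0.55, 0.20, 0.06, 0.02, 0.005])
-
-def decay(x, a, k):
-    # No offset term, so the fitted curve is forced to approach 0 as x grows.
-    return a * np.exp(-k * x)
-
-(a_hat, k_hat), _ = curve_fit(decay, x, y, p0=(1.0, 0.1))
-print(a_hat, k_hat)
-print(decay(100.0, a_hat, k_hat))  # prediction at an unobserved distance, tending towards 0
-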
-",Curve
-"I have a task in which I would like to implement the graphical visualisation of a quadratic equation in the interval x =[-10,10] and - if any exist - the corresponding zeros, using CanvasRenderingContext2D methods.
-To convert the coordinates into pixel coordinates within the canvas I would use following functions:
-var toCanvasX = function(x) { 
-
-return (x +(max-min)/2)*canvas.width/(max-min);
-
-} 
-
-var toCanvasY = function(y) { 
-
-return canvas.height-(y+(max-min)/2)*canvas.height/(max-min)
-
-} 
-
-The graph should look like this:
-pic1
-pic2
-How could I solve it?
-Code:
-<!DOCTYPE html> 
-
-<html> 
-
-<head> 
-
-    <meta charset=""UTF-8""> 
-
-    <h1>Solver of Quadratic Equations</h1> 
-
-    <script> 
-
-        var a, b, c; 
-
-        var output; 
-
-         
-        function check() {      
-
-            a = document.forms[""input_form""][""anumber""].value; 
-
-            b = document.forms[""input_form""][""bnumber""].value; 
-
-            c = document.forms[""input_form""][""cnumber""].value; 
-
-
-            if (a == 0) { 
-
-                output = ""a cannot equal zero!""; 
-
-            } else if (isNaN(a)) { 
-
-                output = ""a has to be a number!""; 
-
-            } else if (isNaN(b)) { 
-
-                output = ""b has to be a number!""; 
-
-            } else if (isNaN(c)) { 
-
-                output = ""c has to be a number!""; 
-
-            } else { 
-
-                var x1 = (-b - Math.sqrt(Math.pow(b, 2) - 4 * a * c)) / (2 * a); 
-
-                var x2 = (-b + Math.sqrt(Math.pow(b, 2) - 4 * a * c)) / (2 * a); 
-
-                output = ""The polynomial <strong>"" + (a == 1 ? """" : a) + ""x\u00B2 + "" + (b == 1 ? """" : b) + ""x + "" + c + "" = 0</strong> has two zeros x1="" + x1 + "","" + "" "" + ""x2="" + x2; 
-
-            } 
-
-            document.getElementById(""output"").innerHTML = output; 
-
-        } 
-
-    </script> 
-
-</head> 
-
-<body>
- 
-
-This programme calculates zeros of quadratic polynomials of the form ax² + bx + c and graphically displays the solution in the interval x ∈ [-10,10].
-<br><br> 
-
-<form name=""input_form"" action=""javascript:check();""> 
-
-    a: <input type=""text"" name=""anumber"" required> 
-
-    b: <input type=""text"" name=""bnumber"" required> 
-
-    c: <input type=""text"" name=""cnumber"" required> 
-
-    <br><br> 
-
-    <input type=""submit"" value=""Calculate zeros""> 
-
-</form> 
-
-<p id=""output""/> 
-
-","1. I don't know where your two functions toCanvasX(x) and toCanvasY(y) get it's x and y respectively parameters from - nor where min and max are computed anyway but to draw a graph from a quadratic equation you need to evaluate it at several different values of ‍x. The result of this calculation is the y coordinate of the graph for a specific x value.
-As you're interested in an interval of [-10,10] it's as simple as looping from -10 to 10 in relatively small steps, e.g. 0.1:
-for (let x = -10; x < 10; x += 0.1) {
-}
-
-and inside this for-loop use the x variable to calculate y. Both variables are then used with the CanvasRenderingContext2D's lineTo(x,y) method to draw the actual graph.
-Here's an example:
-
-
-let canvas = document.createElement(""canvas"");
-document.body.appendChild(canvas);
-let context = canvas.getContext(""2d"");
-canvas.width = 400;
-canvas.height = 400;
-let a = 1;
-let b = 3;
-let c = 2;
-let y;
-context.strokeStyle = ""#000000"";
-context.beginPath();
-context.moveTo(canvas.width / 2, 0);
-context.lineTo(canvas.width / 2, canvas.height);
-context.moveTo(0, canvas.height / 2);
-context.lineTo(canvas.width, canvas.height / 2);
-context.stroke();
-context.closePath();
-context.strokeStyle = ""#5ead5e"";
-context.beginPath();
-let scale = 30;
-for (let x = -10; x < 10; x += 0.1) {
-  y = (-1) * ((a * (x * x)) + (b * x) + c);
-  context.lineTo(canvas.width / 2 + x * scale, canvas.height / 2 + y * scale);
-}
-context.stroke();
-context.closePath();
-let x1 = (-b - Math.sqrt(Math.pow(b, 2) - 4 * a * c)) / (2 * a);
-let x2 = (-b + Math.sqrt(Math.pow(b, 2) - 4 * a * c)) / (2 * a);
-
-context.beginPath();
-context.arc(canvas.width / 2 + x1 * scale, canvas.height / 2, 4, 0, 2 * Math.PI);
-context.fill();
-context.stroke();
-context.closePath();
-context.beginPath();
-context.arc(canvas.width / 2 + x2 * scale, canvas.height / 2, 4, 0, 2 * Math.PI);
-context.fill();
-context.stroke();
-context.closePath();
-
-
-
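-One detail worth adding: the task says the zeros should only be drawn ""if any exist"", so the discriminant should be checked before taking the square root (the snippet above simply assumes two real zeros exist). A minimal sketch of that check, written here in Python just to keep it compact:
-
-import math
-
-def quadratic_zeros(a, b, c):
-    # Real zeros of a*x^2 + b*x + c, or an empty list if there are none.
-    d = b * b - 4 * a * c
-    if d < 0:
-        return []
-    if d == 0:
-        return [-b / (2 * a)]
-    r = math.sqrt(d)
-    return [(-b - r) / (2 * a), (-b + r) / (2 * a)]
-
-print(quadratic_zeros(1, 3, 2))  # [-2.0, -1.0]
-print(quadratic_zeros(1, 0, 1))  # []
-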
-",Curve
-"I am not really sure, if this is a prometheus issue, or just Longhorn, or maybe a combination of the two.
-Setup:
-
-Kubernetes K3s v1.21.9+k3s1
-Rancher Longhorn Storage Provider 1.2.2
-Prometheus Helm Chart 32.2.1 and image: quay.io/prometheus/prometheus:v2.33.1
-
-Problem:
-Infinitely growing PV in Longhorn, even over the defined max size. Currently using 75G on a 50G volume.
-Description:
-I have a really small 3-node cluster with not too many deployments running. Currently there is only one ""real"" application, and the rest is just Kubernetes system stuff so far.
-Apart from etcd, I am using all the default scraping rules.
-The PV is filling up a bit more than 1 GB per day, which seems fine to me.
-The problem is that, for whatever reason, the data used inside Longhorn is growing indefinitely. I have configured retention rules for the Helm chart with retention: 7d and retentionSize: 25GB, so the retentionSize should never be reached anyway.
-When I log into the containers shell and do a du -sh in /prometheus, it shows ~8.7GB being used, which looks good to me as well.
-The problem is that when I look at the Longhorn UI, the used space is growing all the time. The PV has existed for ~20 days now and is currently using almost 75GB of a defined max of 50GB. When I take a look at the Kubernetes node itself and inspect the folder which Longhorn uses to store its PV data, I see the same values of space being used as in the Longhorn UI, while inside the Prometheus container everything looks good to me.
-I hope someone has an idea what the problem could be. I have not experienced this issue with any other deployment so far, all others are good and really decrease in size used, when something inside the container gets deleted.
-","1. I had the same problem recently and it was because Longhorn does not automatically reclaim blocks that are freed by your application, i.e. Prometheus. This causes the volume's size to grow indefinitely, beyond the configured size of the PVC. This is explained in the Longhorn Volume Actual Size documentation. You can trigger Longhorn to reclaim these blocks by using the Trim Filesystem feature, which should bring the size down to what you can see is used within the Container. You can set this up to run on a schedule as well to maintain it over time.
-Late response, but hopefully it helps anyone else faced with the same issue in the future.
-
-2. Can the snapshots be the reason for the increasing size?
-As I understand it, Longhorn takes snapshots and they are added to the total actual size used on the node if the data in the snapshot differs from the current data in the volume, which happens in your case because old metrics are deleted and new ones are received.
-See this comment and this one.
-I know I'm answering late, but I came across the same issue and maybe this helps someone.
-",Longhorn
-"I'm running Longhorn v1.2.3 on RKE1 cluster (provisioned by rancher), this cluster has 5 nodes with dedicated 20GiB disks mounted on /var/lib/longhorn, with ext4 filesystem and 0% reserved blocks for root user/group.
-In the dashboard, I see the following stats:
-
-Type         Size
-Schedulable  33.5 Gi
-Reserved     58.1 Gi
-Used         6.18 Gi
-Disabled     0 Bi
-Total        97.8 Gi
-
-I changed Storage Minimal Available Percentage in the settings to 5 (from 25, as I recall), but that hasn't changed anything. When I open the ""Nodes"" tab, I see the following in the ""Size"" column:
-7.86 Gi
-+11.7 Gi Reserved
-
-The exact size varies between nodes, but it's around 8 Gi.
-These dedicated disks were added after provisioning Longhorn in the cluster, and the system disks are 40 GiB in size, so possibly the reason for this overuse is that the reserved size was calculated at the time when Longhorn was deployed alongside the operating system and wasn't adjusted when I mounted this folder to the new disk.
-Why do I have more than half of my space ""reserved""? What can I do to get more usable space from Longhorn? Thanks!
-","1. After digging deeper and finding that it was one day possible to adjust these values from UI (i wasn't able to find it), i've searched for longhorn CRDs, and came across nodes.longhorn.io. And inside definition i've found exactly what i searched for:
-spec:
-  allowScheduling: true
-  disks:
-    default-disk-fd0000000000:
-      allowScheduling: true
-      evictionRequested: false
-      path: /var/lib/longhorn/
-      storageReserved: 536870912
-      tags: null
-
-Here I changed storageReserved to 536870912 (512 MiB) on all nodes, just in case, and Longhorn applied this change immediately. This is OK in my case because those disks are dedicated, and, per the docs:
-
-Reserved: The space reserved for other applications and system.
-
-Now I have my storage back; hope it helps.
-Edit: I've found the reason why I wasn't able to find the GUI setting: due to the sidebar in my browser, it was hidden behind a not-so-obvious horizontal scroll bar.
-
-2. In the UI, go to Node -> (Operation) Edit node and disks -> Storage Reserved
-(do this for all nodes)
-
-3. https://github.com/longhorn/longhorn/issues/4773 added a storage-reserved-percentage-for-default-disk setting for this situation.
-",Longhorn
-"Volume metrics are not being exposed on /metrics endpoint on the longhorn manager
-Longhorn version:1.1.2 or 1.1.1
-Kubernetes version: 1.19.9-gke.1900
-
-Node config
-OS type and version: Ubuntu with Docker
-Disk type : Standard persistent disk 100GB
-Underlying Infrastructure : (GKE)
-
-I have a standard GKE cluster with ubuntu and gke version 1.19.9-gke.1900
-I have installed longhorn using kubectl
-kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.1.1/deploy/longhorn.yaml
-
-I tried 1.1.2 earlier and had the same problem.
-If I log onto the instance manager pod and run curl against the /metrics endpoint:
-kubectl -n longhorn-system exec -it longhorn-manager-9d797 -- curl longhorn-manager-9d797:9500/metrics
-
-I get this prom output
-# HELP longhorn_disk_capacity_bytes The storage capacity of this disk
-# TYPE longhorn_disk_capacity_bytes gauge
-longhorn_disk_capacity_bytes{disk=""default-disk-4cd3831f07717134"",node=""gke-longhorn-2-default-pool-277a6687-tjgl""} 1.0388023296e+11
-# HELP longhorn_disk_reservation_bytes The reserved storage for other applications and system on this disk
-# TYPE longhorn_disk_reservation_bytes gauge
-longhorn_disk_reservation_bytes{disk=""default-disk-4cd3831f07717134"",node=""gke-longhorn-2-default-pool-277a6687-tjgl""} 3.1164069888e+10
-# HELP longhorn_disk_usage_bytes The used storage of this disk
-# TYPE longhorn_disk_usage_bytes gauge
-longhorn_disk_usage_bytes{disk=""default-disk-4cd3831f07717134"",node=""gke-longhorn-2-default-pool-277a6687-tjgl""} 5.855387648e+09
-# HELP longhorn_instance_manager_cpu_requests_millicpu Requested CPU resources in kubernetes of this Longhorn instance manager
-# TYPE longhorn_instance_manager_cpu_requests_millicpu gauge
-longhorn_instance_manager_cpu_requests_millicpu{instance_manager=""instance-manager-e-523d6b01"",instance_manager_type=""engine"",node=""gke-longhorn-2-default-pool-277a6687-tjgl""} 113
-longhorn_instance_manager_cpu_requests_millicpu{instance_manager=""instance-manager-r-9d8f7ae9"",instance_manager_type=""replica"",node=""gke-longhorn-2-default-pool-277a6687-tjgl""} 113
-# HELP longhorn_instance_manager_cpu_usage_millicpu The cpu usage of this longhorn instance manager
-# TYPE longhorn_instance_manager_cpu_usage_millicpu gauge
-longhorn_instance_manager_cpu_usage_millicpu{instance_manager=""instance-manager-e-523d6b01"",instance_manager_type=""engine"",node=""gke-longhorn-2-default-pool-277a6687-tjgl""} 4
-longhorn_instance_manager_cpu_usage_millicpu{instance_manager=""instance-manager-r-9d8f7ae9"",instance_manager_type=""replica"",node=""gke-longhorn-2-default-pool-277a6687-tjgl""} 4
-# HELP longhorn_instance_manager_memory_requests_bytes Requested memory in Kubernetes of this longhorn instance manager
-# TYPE longhorn_instance_manager_memory_requests_bytes gauge
-longhorn_instance_manager_memory_requests_bytes{instance_manager=""instance-manager-e-523d6b01"",instance_manager_type=""engine"",node=""gke-longhorn-2-default-pool-277a6687-tjgl""} 0
-longhorn_instance_manager_memory_requests_bytes{instance_manager=""instance-manager-r-9d8f7ae9"",instance_manager_type=""replica"",node=""gke-longhorn-2-default-pool-277a6687-tjgl""} 0
-# HELP longhorn_instance_manager_memory_usage_bytes The memory usage of this longhorn instance manager
-# TYPE longhorn_instance_manager_memory_usage_bytes gauge
-longhorn_instance_manager_memory_usage_bytes{instance_manager=""instance-manager-e-523d6b01"",instance_manager_type=""engine"",node=""gke-longhorn-2-default-pool-277a6687-tjgl""} 7.29088e+06
-longhorn_instance_manager_memory_usage_bytes{instance_manager=""instance-manager-r-9d8f7ae9"",instance_manager_type=""replica"",node=""gke-longhorn-2-default-pool-277a6687-tjgl""} 1.480704e+07
-# HELP longhorn_manager_cpu_usage_millicpu The cpu usage of this longhorn manager
-# TYPE longhorn_manager_cpu_usage_millicpu gauge
-longhorn_manager_cpu_usage_millicpu{manager=""longhorn-manager-9d797"",node=""gke-longhorn-2-default-pool-277a6687-tjgl""} 13
-# HELP longhorn_manager_memory_usage_bytes The memory usage of this longhorn manager
-# TYPE longhorn_manager_memory_usage_bytes gauge
-longhorn_manager_memory_usage_bytes{manager=""longhorn-manager-9d797"",node=""gke-longhorn-2-default-pool-277a6687-tjgl""} 2.9876224e+07
-# HELP longhorn_node_count_total Total number of nodes
-# TYPE longhorn_node_count_total gauge
-longhorn_node_count_total 3
-# HELP longhorn_node_cpu_capacity_millicpu The maximum allocatable cpu on this node
-# TYPE longhorn_node_cpu_capacity_millicpu gauge
-longhorn_node_cpu_capacity_millicpu{node=""gke-longhorn-2-default-pool-277a6687-tjgl""} 940
-# HELP longhorn_node_cpu_usage_millicpu The cpu usage on this node
-# TYPE longhorn_node_cpu_usage_millicpu gauge
-longhorn_node_cpu_usage_millicpu{node=""gke-longhorn-2-default-pool-277a6687-tjgl""} 256
-# HELP longhorn_node_memory_capacity_bytes The maximum allocatable memory on this node
-# TYPE longhorn_node_memory_capacity_bytes gauge
-longhorn_node_memory_capacity_bytes{node=""gke-longhorn-2-default-pool-277a6687-tjgl""} 2.950684672e+09
-# HELP longhorn_node_memory_usage_bytes The memory usage on this node
-# TYPE longhorn_node_memory_usage_bytes gauge
-longhorn_node_memory_usage_bytes{node=""gke-longhorn-2-default-pool-277a6687-tjgl""} 1.22036224e+09
-# HELP longhorn_node_status Status of this node
-# TYPE longhorn_node_status gauge
-longhorn_node_status{condition=""allowScheduling"",condition_reason="""",node=""gke-longhorn-2-default-pool-277a6687-tjgl""} 1
-longhorn_node_status{condition=""mountpropagation"",condition_reason="""",node=""gke-longhorn-2-default-pool-277a6687-tjgl""} 1
-longhorn_node_status{condition=""ready"",condition_reason="""",node=""gke-longhorn-2-default-pool-277a6687-tjgl""} 1
-longhorn_node_status{condition=""schedulable"",condition_reason="""",node=""gke-longhorn-2-default-pool-277a6687-tjgl""} 1
-# HELP longhorn_node_storage_capacity_bytes The storage capacity of this node
-# TYPE longhorn_node_storage_capacity_bytes gauge
-longhorn_node_storage_capacity_bytes{node=""gke-longhorn-2-default-pool-277a6687-tjgl""} 1.0388023296e+11
-# HELP longhorn_node_storage_reservation_bytes The reserved storage for other applications and system on this node
-# TYPE longhorn_node_storage_reservation_bytes gauge
-longhorn_node_storage_reservation_bytes{node=""gke-longhorn-2-default-pool-277a6687-tjgl""} 3.1164069888e+10
-# HELP longhorn_node_storage_usage_bytes The used storage of this node
-# TYPE longhorn_node_storage_usage_bytes gauge
-longhorn_node_storage_usage_bytes{node=""gke-longhorn-2-default-pool-277a6687-tjgl""} 5.855387648e+09
-
-I have created a sample MySQL pod with a PV, and I can see it being provisioned and managed by Longhorn with replicas on all 3 nodes in the cluster. However, I don't see these metrics:
-https://longhorn.io/docs/1.1.0/monitoring/metrics/#volume
-What am I missing here? Any help is appreciated.
-","1. For anyone finding this issue via google:
-Each longhorn-manager pod exposes only volume metrics about the volumes running on the same node. Therefore you need to configure your prometheus scrape_configs so that all longhorn-manager pods are scanned.
-The prometheus-operator should take care of that but for manual scraping you can use something like
-      - job_name: 'longhorn'
-        kubernetes_sd_configs:
-        - role: pod
-        relabel_configs:
-        - source_labels: [__meta_kubernetes_pod_container_name, __meta_kubernetes_pod_container_port_number]
-          action: keep
-          regex: 'longhorn-manager;9500'
-
-
-
-2. I was able to figure this out. Apparently the metrics are only exposed from one manager instance, not all of them.
-",Longhorn
-"I'm looking for a hacky way to create temporary URLs with Minio
-I see on the Laravel docs it says: Generating temporary storage URLs via the temporaryUrl method is not supported when using MinIO.
-However from some digging I noticed that I can upload images successfully using:
-AWS_ENDPOINT=http://minio:9000
-I can't view them because the temporary URL is on http://minio:9000/xxx
-If I change the AWS endpoint to
-AWS_ENDPOINT=http://localhost:9000
-The temporary URL is on http://localhost:9000/xxx, the signature is validated, and the file can be viewed.
-The issue exists in this call to make the command. The $command needs to have the host changed but I don't know if I can do that by just passing in an option.
-        $command = $this->client->getCommand('GetObject', array_merge([
-            'Bucket' => $this->config['bucket'],
-            'Key' => $this->prefixer->prefixPath($path),
-        ], $options));
-
-There is also the option to just change the baseUrl by providing a temporary_url in the filesystem config. However, because the URL has changed, the signature is invalid.
-Is there a way I can update the S3Client to use a different host either by passing an option to the getCommand function or by passing a new S3Client to the AWS adapter to use the correct host?
-","1. A very hacky solution I've found is to re-create the AwsS3Adatapter:
-      if (is_development()) {
-        $manager = app()->make(FilesystemManager::class);
-        $adapter = $manager->createS3Driver([
-          ...config(""filesystems.disks.s3_private""),
-          ""endpoint"" => ""http://localhost:9000"",
-        ]);
-
-        return $adapter->temporaryUrl(
-          $this->getPathRelativeToRoot(),
-          now()->addMinutes(30)
-        );
-      }
-
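-For reference, the underlying reason the host swap is needed is that a presigned URL is signed against the endpoint the client was constructed with, so it only validates on that exact host. The same behavior can be seen outside Laravel with the MinIO Python SDK (the endpoint, credentials, bucket, and object name below are placeholders):
-
-from datetime import timedelta
-from minio import Minio
-
-# Placeholder endpoint and credentials; sign against the host the browser will actually use.
-client = Minio('localhost:9000', access_key='ACCESS_KEY', secret_key='SECRET_KEY', secure=False)
-url = client.presigned_get_object('my-bucket', 'path/to/file.jpg', expires=timedelta(minutes=30))
-print(url)  # valid only when requested via localhost:9000; rewriting the host later breaks the signature
-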
-",MinIO
-"I am trying to redirect a example.com/minio location to minio console, which is run behind a nginx proxy both run by a docker compose file. My problem is that, when I'm trying to reverse proxy the minio endpoint to a path, like /minio it does not work, but when I run the minio reverse proxy on root path in the nginx reverse proxy, it works. I seriously cannot findout what the problem might be.
-This is my compose file:
-services:
-  nginx:
-    container_name: nginx
-    image: nginx
-    restart: unless-stopped
-    ports:
-      - 80:80
-      - 443:443
-    volumes:
-      - ./nginx.conf:/etc/nginx/conf.d/default.conf
-      - ./log/nginx:/var/log/nginx/
-  minio:
-    image: minio/minio
-    container_name: minio
-    volumes:
-      - ./data/minio/:/data
-    command: server /data --address ':9000' --console-address ':9001'
-    environment:
-      MINIO_ROOT_USER: minio_admin
-      MINIO_ROOT_PASSWORD: minio_123456
-    ports:
-      - 9000
-      - 9001
-    restart: always
-    logging:
-      driver: ""json-file""
-      options:
-        max-file: ""10""
-        max-size: 20m
-    healthcheck:
-      test: [""CMD"", ""curl"", ""-f"", ""http://127.0.0.1:9000/minio/health/live""]
-      interval: 30s
-      timeout: 20s
-      retries: 3
-
-My nginx configuration is like this:
-server {
-    listen 80;
-    server_name example.com;
-
-    # To allow special characters in headers
-    ignore_invalid_headers off;
-    # Allow any size file to be uploaded.
-    # Set to a value such as 1000m; to restrict file size to a specific value
-    client_max_body_size 0;
-    # To disable buffering
-    proxy_buffering off;
-
-
-    access_log /var/log/nginx/service-access.log;
-    error_log /var/log/nginx/service-error.log debug;
-
-    location / {
-        return 200 ""salam"";
-        default_type text/plain;
-    }
-    location /minio {
-        proxy_set_header X-Real-IP $remote_addr;
-        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-        proxy_set_header X-Forwarded-Proto $scheme;
-        proxy_set_header Host $http_host;
-
-        proxy_connect_timeout 300;
-        # Default is HTTP/1, keepalive is only enabled in HTTP/1.1
-        proxy_http_version 1.1;
-        proxy_set_header Connection """";
-        chunked_transfer_encoding off;
-
-        proxy_pass http://minio:9001;
-    }
-}
-
-The picture I'm seeing of minio console at the domain is this:
-
-And the response of curling the endpoint ($ curl -k http://example.com/minio):
-<null>
-    <html lang=""en"">
-        <head>
-            <meta charset=""utf-8"" />
-            <base href=""/"" />
-            <meta content=""width=device-width,initial-scale=1"" name=""viewport"" />
-            <meta content=""#081C42"" media=""(prefers-color-scheme: light)"" name=""theme-color"" />
-            <meta content=""#081C42"" media=""(prefers-color-scheme: dark)"" name=""theme-color"" />
-            <meta content=""MinIO Console"" name=""description"" />
-            <link href=""./styles/root-styles.css"" rel=""stylesheet"" />
-            <link href=""./apple-icon-180x180.png"" rel=""apple-touch-icon"" sizes=""180x180"" />
-            <link href=""./favicon-32x32.png"" rel=""icon"" sizes=""32x32"" type=""image/png"" />
-            <link href=""./favicon-96x96.png"" rel=""icon"" sizes=""96x96"" type=""image/png"" />
-            <link href=""./favicon-16x16.png"" rel=""icon"" sizes=""16x16"" type=""image/png"" />
-            <link href=""./manifest.json"" rel=""manifest"" />
-            <link color=""#3a4e54"" href=""./safari-pinned-tab.svg"" rel=""mask-icon"" />
-            <title>MinIO Console</title>
-            <script defer=""defer"" src=""./static/js/main.eec275cb.js""></script>
-            <link href=""./static/css/main.90d417ae.css"" rel=""stylesheet"">
-        </head>
-        <body>
-            <noscript>You need to enable JavaScript to run this app.</noscript>
-            <div id=""root"">
-                <div id=""preload"">
-                    <img src=""./images/background.svg"" />
-                    <img src=""./images/background-wave-orig2.svg"" />
-                </div>
-                <div id=""loader-block"">
-                    <img src=""./Loader.svg"" />
-                </div>
-            </div>
-        </body>
-    </html>
-    %
-
-","1. minio doesn't work under non default path like location /minio
-You need to use
-location / {
-....
-proxy_pass http://localhost:9001;
-}
-or add another server block to nginx with subdomain like this
-server{
-
-listen 80;
-
-server_name minio.example.com;;
-
-     location / {
-       proxy_set_header X-Real-IP $remote_addr;
-       proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-       proxy_set_header X-Forwarded-Proto $scheme;
-       proxy_set_header Host $http_host;
-
-       proxy_pass http://localhost:9001;
-   }
-}
-
-
-2. Ensure you have configured the MinIO server with the browser redirect URL to reflect your sub-path.
-These can be set as environment variables, for example:
-MINIO_SERVER_URL=""https://yourdomain.com""
-MINIO_BROWSER_REDIRECT_URL=""https://yourdomain.com/your_subpath""
-
-Ref :https://min.io/docs/minio/linux/integrations/setup-nginx-proxy-with-minio.html
-
-
-3. I also struggled with this for a long time and was finally able to resolve it.
-As far as I can tell, the key changes to make this work for me were:
-
-Manually specifying a rewrite directive (instead of relying on the Nginx proxy_pass+URI behaviour which didn't seem to work for me).
-Setting the resolver directive with short timeouts (so that rescheduling of services onto other nodes gets resolved).
-Setting $upstream to prevent DNS caching.
-
-I had to change your setup a little bit so that now the Minio S3 API is served behind minio.example.com while the UI Web Console is accessible at minio.example.com/console/.
-I have edited your config files below:
-docker-compose.yml:
-services:
-  nginx:
-    container_name: nginx
-    image: nginx
-    restart: unless-stopped
-    ports:
-      - 80:80
-      - 443:443
-    volumes:
-      - ./nginx.conf:/etc/nginx/conf.d/default.conf
-      - ./log/nginx:/var/log/nginx/
-  minio:
-    image: minio/minio
-    container_name: minio
-    volumes:
-      - ./data/minio/:/data
-    command: server /data --address ':9000' --console-address ':9001'
-    environment:
-      MINIO_SERVER_URL: ""http://minio.example.com/""
-      MINIO_BROWSER_REDIRECT_URL: ""http://minio.example.com/console/""    
-      MINIO_ROOT_USER: minio_admin
-      MINIO_ROOT_PASSWORD: minio_123456
-    ports:
-      - 9000
-      - 9001
-    restart: always
-    logging:
-      driver: ""json-file""
-      options:
-        max-file: ""10""
-        max-size: 20m
-    healthcheck:
-      test: [""CMD"", ""curl"", ""-f"", ""http://127.0.0.1:9000/minio/health/live""]
-      interval: 30s
-      timeout: 20s
-      retries: 3
-
-nginx.conf:
-server {
-    listen 80;
-    server_name minio.example.com;
-
-    # To allow special characters in headers
-    ignore_invalid_headers off;
-    # Allow any size file to be uploaded.
-    # Set to a value such as 1000m; to restrict file size to a specific value
-    client_max_body_size 0;
-    # To disable buffering
-    proxy_buffering off;
-
-
-    access_log /var/log/nginx/service-access.log;
-    error_log /var/log/nginx/service-error.log debug;
-
-
-    # Use Docker DNS
-    # You might not need this section but in case you need to resolve
-    # docker service names inside the container then this can be useful.
-    resolver 127.0.0.11 valid=10s;
-    resolver_timeout 5s;
-
-    # Apparently the following line might prevent caching of DNS lookups
-    # and force nginx to resolve the name on each request via the internal
-    # Docker DNS.
-    set $upstream ""minio"";
-
-    # Minio Console (UI)
-    location /console/ {
-
-        # This was really the key for me. Even though the Nginx docs say 
-        # that with a URI part in the `proxy_pass` directive, the `/console/`
-        # URI should automatically be rewritten, this wasn't working for me.
-        rewrite ^/console/(.*)$ /$1 break;
-
-        proxy_pass http://$upstream:9001;
-
-        proxy_set_header X-Real-IP $remote_addr;
-        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-        proxy_set_header X-Forwarded-Proto $scheme;
-        proxy_set_header Host $http_host;
-
-        proxy_connect_timeout 300;
-
-        # To support websocket
-        # Default is HTTP/1, keepalive is only enabled in HTTP/1.1
-        proxy_http_version 1.1;
-        proxy_set_header Upgrade $http_upgrade;
-        proxy_set_header Connection ""upgrade"";
-        chunked_transfer_encoding off;    
-    }
-
-
-    # Proxy requests to the Minio API on port 9000
-    location / {
-
-        proxy_pass http://$upstream:9000;
-
-        proxy_set_header X-Real-IP $remote_addr;
-        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-        proxy_set_header X-Forwarded-Proto $scheme;
-        proxy_set_header Host $http_host;
-
-        proxy_connect_timeout 300;
-
-        # To support websocket
-        # Default is HTTP/1, keepalive is only enabled in HTTP/1.1
-        proxy_http_version 1.1;
-        proxy_set_header Upgrade $http_upgrade;
-        proxy_set_header Connection ""upgrade"";
-        chunked_transfer_encoding off;
-    }
-
-}
-
-HTH!
-",MinIO
-"even with the multiple posts over the internet, I can't figure out how to make my GitLab-Runner working...
-I'm using GitLab CE 17.0.0 + 2 GitLab Runners 17.0.0, one hosted on an AlmaLinux 8 server and one hosted on a Windows 11 computer.
-Everything worked fine but I would like to set up the Shared Cache.
-I've set up a MinIO server, hosted on an AlmaLinux 8. GitLab Container Registry is working well with my MinIO server.
-Now I would like to set up my GitLab Runners. Both are using Docker executor. Config files are very similar:
-concurrent = 1
-check_interval = 0
-connection_max_age = ""15m0s""
-shutdown_timeout = 0
-
-[session_server]
-  session_timeout = 1800
-
-[[runners]]
-  name = ""runner-windows""
-  url = ""https://gitlab-url""
-  id = 19
-  token = ""...""
-  executor = ""docker""
-
-  [runners.cache]
-    Type = ""s3""
-    Shared = true
-
-    [runners.cache.s3]
-      ServerAddress = ""minio-url:9000""
-      AccessKey = ""...""
-      SecretKey = ""...""
-      BucketName = ""gitlab-ci-cache""
-      BucketLocation = ""eu-east-1""
-
-  [runners.docker]
-    tls_verify = false
-    privileged = false
-    disable_entrypoint_overwrite = false
-    oom_kill_disable = false
-    disable_cache = false
-    volumes = [""gitlab-pipeline-cache:/cache""]
-    shm_size = 0
-
-This configuration is not using my MinIO server as the cache server. A Docker volume ""gitlab-pipeline-cache"" is created and used instead.
-From the runner hosts, if I use the MinIO client, I can successfully connect to my MinIO server, upload files, etc. It's not a network issue.
-Thank you!
-
-EDIT 1: Add my .gitlab-ci.yml content + GitLab CI job output
-.gitlab-ci.yml content
-workflow:
-  rules:
-    - if: $CI_COMMIT_TAG != null
-    - if: $CI_PIPELINE_SOURCE == ""web""
-
-variables:
-  CACHE_DIR: /cache/$CI_PROJECT_ROOT_NAMESPACE/$CI_PROJECT_NAME/$CI_PIPELINE_ID
-
-stages:
-  - .pre
-  - touch
-
-create_cache_dir:
-  stage: .pre
-  tags:
-    - runner-almalinux8
-  image: alpine:latest
-  script:
-    - mkdir --parents $CACHE_DIR/
-
-create_file:
-  stage: touch
-  tags:
-    - runner-windows
-  image: alpine:latest
-  script:
-    - touch $CACHE_DIR/test_create_file.txt
-
-Job output (.pre)
-Running with gitlab-runner 17.0.0 (44feccdf)
-  on runner-almalinux8 -_vLbzjNv, system ID: s_4f5c9ad29d6f
-Preparing the ""docker"" executor
-00:02
-Using Docker executor with image alpine:latest ...
-Pulling docker image alpine:latest ...
-Using docker image sha256:05455a08881ea9cf0e752bc48e61bbd71a34c029bb13df01e40e3e70e0d007bd for alpine:latest with digest alpine@sha256:c5b1261d6d3e43071626931fc004f70149baeba2c8ec672bd4f27761f8e1ad6b ...
-Preparing environment
-00:01
-Running on runner--vlbzjnv-project-116-concurrent-0 via runner-almalinux8...
-Getting source from Git repository
-00:01
-Fetching changes with git depth set to 50...
-Reinitialized existing Git repository in /builds/<group>/<project>/.git/
-Checking out 456a298a as detached HEAD (ref is 1.0.0-rc1)...
-Skipping Git submodules setup
-Executing ""step_script"" stage of the job script
-00:01
-Using docker image sha256:05455a08881ea9cf0e752bc48e61bbd71a34c029bb13df01e40e3e70e0d007bd for alpine:latest with digest alpine@sha256:c5b1261d6d3e43071626931fc004f70149baeba2c8ec672bd4f27761f8e1ad6b ...
-$ mkdir --parents $CACHE_DIR/
-Cleaning up project directory and file based variables
-00:01
-Job succeeded
-
-Job output (touch)
-Running with gitlab-runner 17.0.0 (44feccdf)
-  on runner-windows cc5wbtykV, system ID: s_4f5c9ad29d6f
-Preparing the ""docker"" executor
-00:08
-Using Docker executor with image alpine:latest ...
-Using helper image:  registry.gitlab.com/gitlab-org/gitlab-runner/gitlab-runner-helper:x86_64-v17.0.0
-Pulling docker image registry.gitlab.com/gitlab-org/gitlab-runner/gitlab-runner-helper:x86_64-v17.0.0 ...
-Using docker image sha256:cb32fd9b1984b484e20e7b6806bd3a0ef5304abee2f0c64d5b38e1234c2a7bf5 for registry.gitlab.com/gitlab-org/gitlab-runner/gitlab-runner-helper:x86_64-v17.0.0 with digest registry.gitlab.com/gitlab-org/gitlab-runner/gitlab-runner-helper@sha256:aa094d2434e42a61215a64dfb50fb9b9dc29d81e4d708c1c896d0818a5d6f873 ...
-Pulling docker image alpine:latest ...
-Using docker image sha256:1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1 for alpine:latest with digest alpine@sha256:77726ef6b57ddf65bb551896826ec38bc3e53f75cdde31354fbffb4f25238ebd ...
-Preparing environment
-00:01
-Running on runner-cc5wbtykv-project-326-concurrent-0 via runner-windows...
-Getting source from Git repository
-00:01
-Fetching changes with git depth set to 20...
-Initialized empty Git repository in /builds/<group>/<project>/.git/
-Created fresh repository.
-Checking out 456a298a as detached HEAD (ref is 1.0.0-rc1)...
-Skipping Git submodules setup
-Executing ""step_script"" stage of the job script
-00:00
-Using docker image sha256:1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1 for alpine:latest with digest alpine@sha256:77726ef6b57ddf65bb551896826ec38bc3e53f75cdde31354fbffb4f25238ebd ...
-$ touch $CACHE_DIR/test_create_file.txt
-touch: /cache/<group>/<project>/3089/test_create_file.txt: No such file or directory
-Cleaning up project directory and file based variables
-00:01
-ERROR: Job failed: exit code 1
-
-","1. The cache location is where GitLab will store cache bundles. To actually cache things you need to declare cacheable items:
-default:
-  cache:
-    paths:
-    - test_create_file.txt
-
-job1:
-  stage: build
-  script:
-  - echo ""hello world"" > test_create_file.txt
-
-job2:
-  stage: test
-  script:
-  - cat test_create_file.txt
-
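-Applied to the pipeline from the question, a hedged sketch (tags, stages and image kept from the question; the shared/ cache path is just an example) where the first job uploads the cache bundle to MinIO and the second job downloads it:
-create_cache_dir:
-  stage: .pre
-  tags:
-    - runner-almalinux8
-  image: alpine:latest
-  cache:
-    key: ""$CI_PIPELINE_ID""
-    paths:
-      - shared/
-  script:
-    - mkdir -p shared
-
-create_file:
-  stage: touch
-  tags:
-    - runner-windows
-  image: alpine:latest
-  cache:
-    key: ""$CI_PIPELINE_ID""
-    paths:
-      - shared/
-  script:
-    - mkdir -p shared
-    - touch shared/test_create_file.txt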
-",MinIO
-"I am currently working on a project where I am attempting to use MinIO with a data moving program developed by my company. This broker software only allows for devices using port 80 to successfully complete a job; however, any avid user of MinIO knows that MinIO hosts on port 9000. So my question is, is there a way to change the port on which the MinIO server is hosted? I've tried looking through the config.json file to find an address variable to assign a port number to but each of the address variables I attempted to change had no effect on the endpoint port number. For reference, I am hosting MinIO on a windows 10 virtual machine during the test phase of the project and will be moving it onto a dedicated server (also windows 10) upon successful completion of testing.
-","1. Add --address :80 when you start your minio.
-You can refer to this: https://docs.min.io/docs/multi-tenant-minio-deployment-guide.html
-
-2. When you start the MinIO server, pass the desired port with the --address flag (the server command also takes the data directory)…
-minio server --address :[port you want to use] /path/to/data
-for example…
-minio server --address :8000 /path/to/data
-
-3. As per the official MinIO documentation, you can update/create an environment file at /etc/default/minio
-and update the environment variable called MINIO_OPTS.
-
-# Set all MinIO server options
-#
-# The following explicitly sets the MinIO Console listen to address to
-# port 9001 on all network interfaces. The default behavior is dynamic
-# port selection.
-
-MINIO_OPTS=""--console-address :9001""
-
-you can update the port value of the console-address argument (this changes the Console port; to change the S3 API port itself, add --address :PORT to MINIO_OPTS).
-Restart the service using the command below:
-sudo systemctl restart minio.service
-Verify whether the port has been changed using the command below:
- sudo systemctl status minio.service
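-Since the question mentions hosting on Windows, here is a minimal sketch of starting MinIO there on port 80 (the data directory path is an assumption, and binding to port 80 may require an elevated prompt):
-minio.exe server --address :80 D:\minio-data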
-",MinIO
-"I am currently making some reaching for the MinIO.  I wonder if there is the object ID concept in MinIO that we can identify the object uniquely.
-or the only way is through the bucket name and file name to identify the stored object.
-","1. class Object:
-""""""Object information.""""""
-
-def __init__(self,  # pylint: disable=too-many-arguments
-             bucket_name,
-             object_name,
-             last_modified=None, etag=None,
-             size=None, metadata=None,
-             version_id=None, is_latest=None, storage_class=None,
-             owner_id=None, owner_name=None, content_type=None,
-             is_delete_marker=False):
-    self._bucket_name = bucket_name
-    self._object_name = object_name
-    self._last_modified = last_modified
-    self._etag = etag
-    self._size = size
-    self._metadata = metadata
-    self._version_id = version_id
-    self._is_latest = is_latest
-    self._storage_class = storage_class
-    self._owner_id = owner_id
-    self._owner_name = owner_name
-    self._content_type = content_type
-    self._is_delete_marker = is_delete_marker**strong text**
-
-https://github.com/minio/minio-py/blob/master/minio/datatypes.py
-First, sorry for my bad English.
-I don't know which client API you are using (maybe just mc?), but the official GitHub code can help you.
-I found the Python API code on GitHub. It has all the attributes of a MinIO object, but there is no object ID among its properties.
-So, as far as I can tell, there is no way to look up data by an object ID in the Python API...
-The only way to address a stored object is by bucket name and object name, as you wrote above.
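-For illustration, a small sketch with the MinIO Python client (endpoint, credentials, bucket and object names are placeholders) showing the identifiers you do get back, namely the etag and, on versioned buckets, the version_id:
-from minio import Minio
-
-client = Minio('play.min.io', access_key='ACCESS', secret_key='SECRET', secure=True)
-
-# Objects are addressed by (bucket_name, object_name); etag and version_id
-# can help distinguish contents/versions, but they are not lookup keys.
-stat = client.stat_object('my-bucket', 'path/to/object.txt')
-print(stat.etag, stat.version_id, stat.size)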
-",MinIO
-"I have a FreeBSD 12.1-RELEASE server and a CentOS 7 server. Both run on amd64.
-I would like to set up a cluster file system that runs well on both platforms. It should have CentOS 7 packages and FreeBSD packages. The solution should be open-source software and free to use.
-After a little research I found the following candidates, but I always encountered drawbacks:
-
-MooseFS3: Works on FreeBSD and CentOS and has packages for both, but only the commercial MooseFS3 Pro version has real cluster functionality such as the possibility of mounting the file system from several nodes. I also had locking problems with files that were accessed by my dovecot IMAP server daemon when I ran dovecot from the file system.
-
-GlusterFS: Seems to work well, but there are no packages for the most current version of 8.x for FreeBSD. FreeBSD provides only a port for GlusterFS 3.x as of today. Different versions of GlusterFS can not operate together.
-
-Ceph: Is very complex to configure, and I couldn't execute all of the steps of the official FreeBSD documentation for it, since the tool ceph-disk is deprecated in favor of ceph-volume. With ceph-volume, though, I could not get it running with my ZFS pool on FreeBSD, since the ZFS plugin for ceph-volume seemed to have some Linux code in it when it was ported to FreeBSD or similar, so it might only run with ZFSOnLinux on Linux itself.
-
-OCFS2: I don't have much experience with that one, but its releases seem a bit outdated.
-
-Lustre: No packages for FreeBSD and no accurate, up-to-date documentation on how to set it up on a recent FreeBSD system.
-
-BeeGFS (Fraunhofer): No packages for FreeBSD, only for Linux
-
-Hadoop MapR filesystem: Its use case is more big-data storage than a UNIX cluster filesystem; I don't know if it has FreeBSD packages.
-
-
-So I can't find a good solution for a cluster filesystem that runs on both FreeBSD and CentOS Linux. I'm even planning to migrate the CentOS server to Fedora Server, so it should run there as well.
-Can anyone recommend a recent, compatible cluster file system that I could use on both FreeBSD and CentOS/Fedora Server and that offers real cluster file system features like replication and HA?
-Or is there currently no cluster filesystem that fulfills my needs, so that I have to migrate the two machines to the same OS?
-Thank you in advance.
-Best regards,
-rforberger
-","1. 
-MooseFS3: Works on FreeBSD and CentOS and has packages for both, but only the commercial MooseFS3 Pro version has real cluster functionality such as the possibility of mounting the file system from several nodes.
-
-This is not true, you can mount MooseFS Community from as many nodes as you wish.
-
-2. GlusterFS may be worth trying. It is based on FUSE, which is available on FreeBSD, so you only need to build the userspace part, which should be doable if a package is not available for your OS version. On Linux it is definitely the simplest one to set up, since it comes packaged with most of the distros.
-Lustre, despite supporting replicated directories, is more of a parallel filesystem oriented toward HPC and high I/O performance than a clustered filesystem oriented toward redundancy, so I would not even consider it if redundancy is your purpose.
-I have no experience with the other ones.
-",MooseFS
-"My mfs version is moosefs-ce-2.0, it is installed on debian6 which is ext3 filesystem. There are a master and a metalogger and some chunkserver, when my master is down. How to recover master from metalogger? The documentation moosefs.org provided is outdated, I can't find more detailed information on documentaton. Or how to config muti-master on moosefs-ce-2.0?
-","1. It is described in the documentation. You can find the documentation here: MooseFS Documentation. Paragraph 4.2 (page 19) of MooseFS User's Manual ""Master metadata restore from metaloggers"" says:
-
-4.2 Master metadata restore from metaloggers
-In MooseFS Community Edition basic configuration there can be only one master and several metaloggers. If for some reason you loose all metadata files and changelogs from master server you can use data from metalogger to restore your data. To start dealing with recovery first you need to transfer all data stored on metalogger in /var/lib/mfs to master metadata folder. Files on metalogger will have ml prefix prepended to the filenames. After all files are copied, you need to create metadata.mfs file from changelogs and metadata.mfs.back files. To do this we need to use the command mfsmaster -a. Mfsmaster starts to build new metadata file and starts mfsmaster process.
-
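-Following the quoted manual, a rough shell sketch of the recovery (the master host name is a placeholder and /var/lib/mfs is the default data directory; depending on your version you may have to strip the ml_ prefix from the copied files first):
-# On the metalogger: copy its metadata files (ml_ prefixed) into the master's data directory
-scp /var/lib/mfs/ml_* master-host:/var/lib/mfs/
-
-# On the master: if needed, rename the copies, e.g. ml_metadata.mfs.back -> metadata.mfs.back
-
-# Rebuild metadata.mfs from the changelogs and metadata.mfs.back, then start the master
-mfsmaster -a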
-",MooseFS
-"I've been cleaning up my mfs installation and found a few files showing up as ""sustained"" on the mfscgi list (I have disconnected a server that has these chunks), but also they show up in the mfsmeta filesystem, under ""sustained"".
-How can I clean this up?
-Is this sustained folder subject to trash policies?
-It doesn't seem to allow manual removal...
-","1. Sustained files in MooseFS are the files, which were deleted completely (from trash!) and are still used (open) by some process(es) on some mountpoints.
-So you need to stop these processes which use these files in order to let them be deleted.
-List of closed files is sent to Master Server regularly, so they should disappear from sustained soon (maybe with some small delay), especially if you unmounted the mounpoint.
-You can check list of open files for particular mountpoint using lsof -n | grep /mnt/mfs or in ""Resources"" tab in MFS CGI.
-",MooseFS
-"I found that moosefs trash take too much of my disk space. according to moosefs documentation, it will keep it for a while in case user want it back. But How to clean it up manually to save space?
-","1. In order to purge MooseFS' trash, you need to mount special directory called ""MooseFS Meta"".
-Create mountdir for MooseFS Meta directory first:
-mkdir /mnt/mfsmeta
-
-and mount mfsmeta:
-mfsmount -o mfsmeta /mnt/mfsmeta
-
-If your Master Server Host Name differs from default mfsmaster and/or port differs from default 9421, use appropriate switch, e.g.:
-mfsmount -H master.host.name -P PORT -o mfsmeta /mnt/mfsmeta
-
-Then you can find your deleted files in /mnt/mfsmeta/trash/SUBTRASH directory. Subtrash is a directory inside /mnt/mfsmeta named 000..FFF. Subtrashes are helpful if you have many (e.g. millions) of files in trash, because you can easily operate on them using Unix tools like find, whereas if you had all the files in one directory, such tools may fail.
-If you do not have many files in trash, mount Meta with mfsflattrash parameter:
-mfsmount -o mfsmeta,mfsflattrash /mnt/mfsmeta
-
-or if you use Master Host Name or Port other than default:
-mfsmount -H master.host.name -P PORT -o mfsmeta,mfsflattrash /mnt/mfsmeta
-
-In this case your deleted files will be available directly in /mnt/mfsmeta/trash (without subtrash).
-In both cases you can remove files by simply using rm file or undelete them by moving them to undel directory available in trash or subtrash (mv file undel).
-Remember, that if you do not want to have certain files moved to trash at all, set ""trash time"" (in seconds) for these files to 0 prior to deletion. If you set specific trash time for a directory, all the files created in this directory inherit trash time from parent, e.g.:
-mfssettrashtime 0 /mnt/mfs/directory
-
-You can also set a trash time to other value, e.g. 1 hour:
-mfssettrashtime 3600 /mnt/mfs/directory
-
-For more information on specific parameters passed to mfsmount or mfssettrashtime, see man mfsmount and man mfstrashtime.
-Hope it helps!
-Peter
-",MooseFS
-"In mfshdd.cfg file I setting path is /data/mfs
-Using MFS for several year,I found the /data/mfs more and more biger.I don't know How to setting mfsXXX.cfg to auto delete ?Anyone know please help me
-
-","1. You shouldn't delete any files from directories (typically HDDs mountpoints), which are listed in /etc/mfs/mfshdd.cfg, because the files which are inside directories presented by you on the screenshot above (chunks, named as you mention chunk_xxx.mfs) contain your data!
-If you want MooseFS to move chunks from specific HDD to any other HDDs, just put an asterisk (*) before the path to this HDD in /etc/mfs/mfshdd.cfg - it will move chunks to other HDDs, typically on other Chunkservers. If you want to move chunks to other HDDs on the same machine, put ""lower than"" sign (<) before the path to specific HDD in /etc/mfs/mfshdd.cfg, e.g.:
-*/data/mfs
-
-or:
-</data/mfs
-
-For more information, refer to man mfshdd.cfg and read the comments in /etc/mfs/mfshdd.cfg.dist (MooseFS 2.0) or /etc/mfs/mfshdd.cfg.sample (MooseFS 3.0).
-",MooseFS
-"We have a problem on our StorageGrid with S3 Bucket !
-When we try to put a Lifecycle Configuration on a S3 we all time have the same error :
-An error occurred (MalformedXML) when calling the PutBucketLifecycleConfiguration operation: Invalid XML node or node is at wrong location.
-Sometimes, when JSON file is not good, we have an error and he try to help us like this :
-Error parsing parameter '--lifecycle-configuration': Invalid JSON: Expecting ',' delimiter: line 3 column 17 (char 30) JSON received:
-But, when we clear the JSON file and try to put a Licecycle Configuration we all time have the same error :
-An error occurred (MalformedXML) when calling the PutBucketLifecycleConfiguration operation: Invalid XML node or node is at wrong location.
-I have searched and searched, but I haven't found an answer that solves my probem...
-The JSON file in question can be found below:
-{
-    ""Rules"": [
-        {
-            ""ID"": ""rule1"",
-            ""Expiration"": {
-                ""ExpiredObjectDeleteMarker"": true
-            },
-            ""NoncurrentVersionExpiration"": {
-                ""NewerNoncurrentVersions"": 10,
-                ""NoncurrentDays"": 30
-            },
-            ""Status"": ""Enabled""
-        }
-    ]
-}
-
-If someone has a similar issue or an idea to solve my problem, you are welcome to share it!!!
-","1. Refer to the documentation at https://docs.netapp.com/us-en/storagegrid-115/s3/operations-on-buckets.html for details.
-According to it, StorageGRID doesn't provide support for the following actions:
-
-AbortIncompleteMultipartUpload
-ExpiredObjectDeleteMarker
-Transition
-
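-So a variant of the rule with the unsupported ExpiredObjectDeleteMarker action removed could look like the sketch below (NewerNoncurrentVersions is also dropped, since support for it depends on the StorageGRID release, and the empty Filter plus the bucket/endpoint names are assumptions):
-{
-    ""Rules"": [
-        {
-            ""ID"": ""rule1"",
-            ""Filter"": {},
-            ""NoncurrentVersionExpiration"": {
-                ""NoncurrentDays"": 30
-            },
-            ""Status"": ""Enabled""
-        }
-    ]
-}
-
-Applied with the usual CLI call:
-aws s3api put-bucket-lifecycle-configuration --bucket my-bucket --endpoint-url https://storagegrid.example.com --lifecycle-configuration file://lifecycle.json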
-",NetApp
-"We read the latest apache documentation as https://kafka.apache.org/35/documentation.html
-And what is interesting is that , documentation not mentioned the option to use storage over NFS As Netapp or ONTAP
-A little background - we are supporting On-Prem Kafka cluster with 34 machines and Kafka used internal SAS disks
-kafka brokers setup (machine spec & disks):
-34 kafka brokers, Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz, 16 cores.
-each broker has sdb device mounted to /var/kafka, in size 44.6T.
-the sdb device is composed of 16 SAS disks of ~1TB in RAID-10, which means 8 disks are used for redundancy.
-Now, per customer request, the customer wants to evaluate moving the on-prem storage to NetApp storage instead of physical disks.
-Honestly, I am a little confused, because we saw some documentation that does not like the idea of running a Kafka cluster over NFS, such as the following:
-1)https://access.redhat.com/documentation/ru-ru/red_hat_amq_streams/2.1/html/configuring_amq_streams_on_openshift/assembly-deployment-configuration-str#considerations-for-data-storage-str
-2)https://strimzi.io/docs/operators/latest/configuring.html#considerations-for-data-storage-str
-3)https://docs.confluent.io/platform/current/kafka/deployment.html#disks
-4)https://sbg.technology/2018/07/10/kafka-nfs/
-5)Kafka doesn't work with external NFS Volume
-and other documentation that supports the idea of using NetApp or ONTAP, such as the following:
-https://github.com/NetApp/trident/issues/808
-or
-https://docs.netapp.com/us-en/netapp-solutions/data-analytics/kafka-nfs-why-netapp-nfs-for-kafka-workloads.html#architectural-setup
-The above link says that:
-With ONTAP 9.12.1 and higher, NFSv4.1, and Linux changes that are in RHEL 8.7 or 9.1 and higher, there are fixes to support running Kafka over NFS. There are some details about this at https://www.netapp.com/blog/simplify-apache-kafka-confluent/.
-In order to enable this functionality in ONTAP, there is a new volume setting in 9.12.1 called ""-is-preserve-unlink-enabled"", which must be set to ""true"". The ask is for Trident to provide a way for this setting to be enabled so that PVCs for Kafka can be created using the ontap-nas or ontap-nas-economy drivers.
-So can we say that only ONTAP 9.12.1 and higher, together with RHEL 8.7 or 9.1 and higher, can support a Kafka cluster?
-","1. That's right. NetApp supports from ONTAP 9.12.1.
-",NetApp
-"I added a FSxN file system to my BlueXP canvas a few weeks ago and it was working perfectly until yesterday. Now, when I try to ""enter it"" I am presented with a ""There is no connectivity"" error page.
-
-I have confirmed that I can access the FSxN from the connector over both ports 22 and 443, so I don't think it is actually a connectivity problem. Regardless, I checked the Security Group and ACL rules and I'm not seeing anything that would hinder traffic flowing between the two systems.
-If anyone has any ideas as to what else could be causing the problem, and/or where to look to diagnose the issue, I'd appreciate hearing them.
-","1. The problem you are seeing is when BlueXP has bad credentials for the fsxadmin user for the FSxN file system you are trying to manage. The simple fix is to simply remove the FSxN Working Environment from your canvas and add it back. When you do that, BlueXP should prompt you for the current password.
-Unfortunately, there is no way to just set the password for an FSxN file system like you can for a CVO. Well, that isn't entirely true, if you bring up your browser's developer's tools when the BlueXP console is prompting you for the password, after you fill in the password and click on “Submit"", you will see what API it uses to set the password.
-Note that the remove and rediscover method only works if the FSxN file system is only in one workspace. If it is in multiple, you will have to either use the undocumented API mentioned above, or remove it from all the workspaces and then add it back.
-",NetApp
-"TLDR: SSH command snapmirror delete works fine, PowerShell command Remove-NcSnapmirror misbehaves for SnapMirror relationship removal.
-PowerShell cmdlet: The
-Remove-NcSnapMirror
-cmdlet is supposed to delete the SnapMirror relationship only on the destination cluster.
-But for some reason, using this command removes the relationship on the source as well, which means a SnapMirror resync cannot be used afterwards.
-SSH cmd: The snapmirror delete command works perfectly fine, which deletes the SnapMirror relationship ONLY on the destination cluster. After this step, I am able to run snapmirror resync to resync the SnapMirror relationship.
-","1. Ended up going for Invoke-NcSsh as atleast it can call the SSH commands in a PowerShell script so it’s fine.
-It’s not the PowerShell way, but it is a way.
-",NetApp
-"I have installed OpenEBS with replica count 3 in a 3-node k8s cluster. I need to find where the files are being stored.
-","1. The location of the data depends on the type of the OpenEBS Volume. The device location/path can be determined by querying the storage pool information. It is either hostPath (for jiva volumes) or a device path (for cstor volumes). 
-OpenEBS Jiva Volumes: The path can be also obtained by describing the replica pod/deployment. 
-kubectl get deployment <volume-name>-rep -n <pvc-namespace> -o yaml
-
-OpenEBS cStor Volumes: The path depends on the disks used by the Storage Pool. Find the disks associated with the cStor Storage Pool and then get the device information by obtaining the details on the ""disk"" object. Commands to be used:
-kubectl get storageclass <pvc-storage-class> -o yaml
-#get the storage pool claim name 
-kubectl get storagepool <storage-pool-claim-name>-<uid> -o yaml
-#get disk name under disk list
-kubectl get disk <disk-name> -o yaml
-
-
-2. A generic solution relying on the folder name containing both openebs and a pvc:
-$ cd / && sudo find | grep ""openebs.*pvc""
-
-You can also pinpoint a particular PVC (given its name, obtained from the VOLUME column of the kubectl get pvc command output) by adding | grep <PVC_NAME>:
-$ cd / && sudo find | grep ""openebs.*pvc-be410650-00af-4c89-afa6-e19c48426356""
-
-Sample output:
-./var/snap/microk8s/common/var/openebs/local/pvc-be410650-00af-4c89-afa6-e19c48426356
-./var/snap/microk8s/common/var/openebs/local/pvc-be410650-00af-4c89-afa6-e19c48426356/.local
-./var/snap/microk8s/common/var/openebs/local/pvc-be410650-00af-4c89-afa6-e19c48426356/.local/share
-./var/snap/microk8s/common/var/openebs/local/pvc-be410650-00af-4c89-afa6-e19c48426356/.local/share/jupyter
-./var/snap/microk8s/common/var/openebs/local/pvc-be410650-00af-4c89-afa6-e19c48426356/.local/share/jupyter/nbextensions
-[..]
-./var/snap/microk8s/common/var/openebs/local/pvc-be410650-00af-4c89-afa6-e19c48426356/.jupyter
-./var/snap/microk8s/common/var/openebs/local/pvc-be410650-00af-4c89-afa6-e19c48426356/.jupyter/jupyter_notebook_config.py
-
-",OpenEBS
-"I have an OpenEBS setup with 3 data nodes and a default cstore storage class. 
-Creation of a file works pretty good: 
-time dd if=/dev/urandom of=a.log bs=1M count=100
-real    0m 0.53s
-
-I can delete the file and create it again with roughly the same timing.
-But when I rewrite an existing file it takes ages:
-time dd if=/dev/urandom of=a.log bs=1M count=100
-104857600 bytes (100.0MB) copied, 0.596577 seconds, 167.6MB/s
-real    0m 0.59s
-
-time dd if=/dev/urandom of=a.log bs=1M count=100
-104857600 bytes (100.0MB) copied, 16.621222 seconds, 6.0MB/s
-real    0m 16.62s
-
-time dd if=/dev/urandom of=a.log bs=1M count=100
-104857600 bytes (100.0MB) copied, 19.621924 seconds, 5.1MB/s
-real    0m 19.62s
-
-When I delete a.log and write it again, the ~167MB/s is back. Only writing onto an existing file takes so much time.
-My problem is that I think this is why some of my applications (e.g. databases) are too slow. Creating a table in MySQL took over 7 seconds.
-Here is the spec of my test cluster:
-# Create the OpenEBS namespace
-apiVersion: v1
-kind: Namespace
-metadata:
-  name: openebs
----
-# Create Maya Service Account
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: openebs-maya-operator
-  namespace: openebs
----
-# Define Role that allows operations on K8s pods/deployments
-kind: ClusterRole
-apiVersion: rbac.authorization.k8s.io/v1beta1
-metadata:
-  name: openebs-maya-operator
-rules:
-- apiGroups: [""*""]
-  resources: [""nodes"", ""nodes/proxy""]
-  verbs: [""*""]
-- apiGroups: [""*""]
-  resources: [""namespaces"", ""services"", ""pods"", ""pods/exec"", ""deployments"", ""replicationcontrollers"", ""replicasets"", ""events"", ""endpoints"", ""configmaps"", ""secrets"", ""jobs"", ""cronjobs""]
-  verbs: [""*""]
-- apiGroups: [""*""]
-  resources: [""statefulsets"", ""daemonsets""]
-  verbs: [""*""]
-- apiGroups: [""*""]
-  resources: [""resourcequotas"", ""limitranges""]
-  verbs: [""list"", ""watch""]
-- apiGroups: [""*""]
-  resources: [""ingresses"", ""horizontalpodautoscalers"", ""verticalpodautoscalers"", ""poddisruptionbudgets"", ""certificatesigningrequests""]
-  verbs: [""list"", ""watch""]
-- apiGroups: [""*""]
-  resources: [""storageclasses"", ""persistentvolumeclaims"", ""persistentvolumes""]
-  verbs: [""*""]
-- apiGroups: [""volumesnapshot.external-storage.k8s.io""]
-  resources: [""volumesnapshots"", ""volumesnapshotdatas""]
-  verbs: [""get"", ""list"", ""watch"", ""create"", ""update"", ""patch"", ""delete""]
-- apiGroups: [""apiextensions.k8s.io""]
-  resources: [""customresourcedefinitions""]
-  verbs: [ ""get"", ""list"", ""create"", ""update"", ""delete"", ""patch""]
-- apiGroups: [""*""]
-  resources: [ ""disks"", ""blockdevices"", ""blockdeviceclaims""]
-  verbs: [""*"" ]
-- apiGroups: [""*""]
-  resources: [ ""cstorpoolclusters"", ""storagepoolclaims"", ""storagepoolclaims/finalizers"", ""cstorpoolclusters/finalizers"", ""storagepools""]
-  verbs: [""*"" ]
-- apiGroups: [""*""]
-  resources: [ ""castemplates"", ""runtasks""]
-  verbs: [""*"" ]
-- apiGroups: [""*""]
-  resources: [ ""cstorpools"", ""cstorpools/finalizers"", ""cstorvolumereplicas"", ""cstorvolumes"", ""cstorvolumeclaims""]
-  verbs: [""*"" ]
-- apiGroups: [""*""]
-  resources: [ ""cstorpoolinstances"", ""cstorpoolinstances/finalizers""]
-  verbs: [""*"" ]
-- apiGroups: [""*""]
-  resources: [ ""cstorbackups"", ""cstorrestores"", ""cstorcompletedbackups""]
-  verbs: [""*"" ]
-- apiGroups: [""coordination.k8s.io""]
-  resources: [""leases""]
-  verbs: [""get"", ""watch"", ""list"", ""delete"", ""update"", ""create""]
-- nonResourceURLs: [""/metrics""]
-  verbs: [""get""]
-- apiGroups: [""*""]
-  resources: [ ""upgradetasks""]
-  verbs: [""*"" ]
----
-# Bind the Service Account with the Role Privileges.
-kind: ClusterRoleBinding
-apiVersion: rbac.authorization.k8s.io/v1beta1
-metadata:
-  name: openebs-maya-operator
-subjects:
-- kind: ServiceAccount
-  name: openebs-maya-operator
-  namespace: openebs
-- kind: User
-  name: system:serviceaccount:default:default
-  apiGroup: rbac.authorization.k8s.io
-roleRef:
-  kind: ClusterRole
-  name: openebs-maya-operator
-  apiGroup: rbac.authorization.k8s.io
----
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: maya-apiserver
-  namespace: openebs
-  labels:
-    name: maya-apiserver
-    openebs.io/component-name: maya-apiserver
-    openebs.io/version: 1.3.0
-spec:
-  selector:
-    matchLabels:
-      name: maya-apiserver
-      openebs.io/component-name: maya-apiserver
-  replicas: 1
-  strategy:
-    type: Recreate
-    rollingUpdate: null
-  template:
-    metadata:
-      labels:
-        name: maya-apiserver
-        openebs.io/component-name: maya-apiserver
-        openebs.io/version: 1.3.0
-    spec:
-      serviceAccountName: openebs-maya-operator
-      containers:
-      - name: maya-apiserver
-        imagePullPolicy: IfNotPresent
-        image: quay.io/openebs/m-apiserver:1.3.0
-        ports:
-        - containerPort: 5656
-        env:
-        - name: OPENEBS_NAMESPACE
-          valueFrom:
-            fieldRef:
-              fieldPath: metadata.namespace
-        - name: OPENEBS_SERVICE_ACCOUNT
-          valueFrom:
-            fieldRef:
-              fieldPath: spec.serviceAccountName
-        - name: OPENEBS_MAYA_POD_NAME
-          valueFrom:
-            fieldRef:
-              fieldPath: metadata.name
-        - name: OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG
-          value: ""true""
-        - name: OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL
-          value: ""true""
-        - name: OPENEBS_IO_JIVA_CONTROLLER_IMAGE
-          value: ""quay.io/openebs/jiva:1.3.0""
-        - name: OPENEBS_IO_JIVA_REPLICA_IMAGE
-          value: ""quay.io/openebs/jiva:1.3.0""
-        - name: OPENEBS_IO_JIVA_REPLICA_COUNT
-          value: ""1""
-        - name: OPENEBS_IO_CSTOR_TARGET_IMAGE
-          value: ""quay.io/openebs/cstor-istgt:1.3.0""
-        - name: OPENEBS_IO_CSTOR_POOL_IMAGE
-          value: ""quay.io/openebs/cstor-pool:1.3.0""
-        - name: OPENEBS_IO_CSTOR_POOL_MGMT_IMAGE
-          value: ""quay.io/openebs/cstor-pool-mgmt:1.3.0""
-        - name: OPENEBS_IO_CSTOR_VOLUME_MGMT_IMAGE
-          value: ""quay.io/openebs/cstor-volume-mgmt:1.3.0""
-        - name: OPENEBS_IO_VOLUME_MONITOR_IMAGE
-          value: ""quay.io/openebs/m-exporter:1.3.0""
-        - name: OPENEBS_IO_CSTOR_POOL_EXPORTER_IMAGE
-          value: ""quay.io/openebs/m-exporter:1.3.0""
-        - name: OPENEBS_IO_ENABLE_ANALYTICS
-          value: ""false""
-        - name: OPENEBS_IO_INSTALLER_TYPE
-          value: ""openebs-operator""
-        livenessProbe:
-          exec:
-            command:
-            - /usr/local/bin/mayactl
-            - version
-          initialDelaySeconds: 30
-          periodSeconds: 60
-        readinessProbe:
-          exec:
-            command:
-            - /usr/local/bin/mayactl
-            - version
-          initialDelaySeconds: 30
-          periodSeconds: 60
----
-apiVersion: v1
-kind: Service
-metadata:
-  name: maya-apiserver-service
-  namespace: openebs
-  labels:
-    openebs.io/component-name: maya-apiserver-svc
-spec:
-  ports:
-  - name: api
-    port: 5656
-    protocol: TCP
-    targetPort: 5656
-  selector:
-    name: maya-apiserver
-  sessionAffinity: None
----
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: openebs-provisioner
-  namespace: openebs
-  labels:
-    name: openebs-provisioner
-    openebs.io/component-name: openebs-provisioner
-    openebs.io/version: 1.3.0
-spec:
-  selector:
-    matchLabels:
-      name: openebs-provisioner
-      openebs.io/component-name: openebs-provisioner
-  replicas: 1
-  strategy:
-    type: Recreate
-    rollingUpdate: null
-  template:
-    metadata:
-      labels:
-        name: openebs-provisioner
-        openebs.io/component-name: openebs-provisioner
-        openebs.io/version: 1.3.0
-    spec:
-      serviceAccountName: openebs-maya-operator
-      containers:
-      - name: openebs-provisioner
-        imagePullPolicy: IfNotPresent
-        image: quay.io/openebs/openebs-k8s-provisioner:1.3.0
-        env:
-        - name: NODE_NAME
-          valueFrom:
-            fieldRef:
-              fieldPath: spec.nodeName
-        - name: OPENEBS_NAMESPACE
-          valueFrom:
-            fieldRef:
-              fieldPath: metadata.namespace
-        livenessProbe:
-          exec:
-            command:
-            - pgrep
-            - "".*openebs""
-          initialDelaySeconds: 30
-          periodSeconds: 60
----
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: openebs-snapshot-operator
-  namespace: openebs
-  labels:
-    name: openebs-snapshot-operator
-    openebs.io/component-name: openebs-snapshot-operator
-    openebs.io/version: 1.3.0
-spec:
-  selector:
-    matchLabels:
-      name: openebs-snapshot-operator
-      openebs.io/component-name: openebs-snapshot-operator
-  replicas: 1
-  strategy:
-    type: Recreate
-  template:
-    metadata:
-      labels:
-        name: openebs-snapshot-operator
-        openebs.io/component-name: openebs-snapshot-operator
-        openebs.io/version: 1.3.0
-    spec:
-      serviceAccountName: openebs-maya-operator
-      containers:
-        - name: snapshot-controller
-          image: quay.io/openebs/snapshot-controller:1.3.0
-          imagePullPolicy: IfNotPresent
-          env:
-          - name: OPENEBS_NAMESPACE
-            valueFrom:
-              fieldRef:
-                fieldPath: metadata.namespace
-          livenessProbe:
-            exec:
-              command:
-              - pgrep
-              - "".*controller""
-            initialDelaySeconds: 30
-            periodSeconds: 60
-        - name: snapshot-provisioner
-          image: quay.io/openebs/snapshot-provisioner:1.3.0
-          imagePullPolicy: IfNotPresent
-          env:
-          - name: OPENEBS_NAMESPACE
-            valueFrom:
-              fieldRef:
-                fieldPath: metadata.namespace
-          livenessProbe:
-            exec:
-              command:
-              - pgrep
-              - "".*provisioner""
-            initialDelaySeconds: 30
-            periodSeconds: 60
----
-# This is the node-disk-manager related config.
-# It can be used to customize the disks probes and filters
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: openebs-ndm-config
-  namespace: openebs
-  labels:
-    openebs.io/component-name: ndm-config
-data:
-  node-disk-manager.config: |
-    probeconfigs:
-      - key: udev-probe
-        name: udev probe
-        state: true
-      - key: seachest-probe
-        name: seachest probe
-        state: false
-      - key: smart-probe
-        name: smart probe
-        state: true
-    filterconfigs:
-      - key: os-disk-exclude-filter
-        name: os disk exclude filter
-        state: true
-        exclude: ""/,/etc/hosts,/boot""
-      - key: vendor-filter
-        name: vendor filter
-        state: true
-        include: """"
-        exclude: ""CLOUDBYT,OpenEBS""
-      - key: path-filter
-        name: path filter
-        state: true
-        include: """"
-        exclude: ""loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md""
----
-apiVersion: apps/v1
-kind: DaemonSet
-metadata:
-  name: openebs-ndm
-  namespace: openebs
-  labels:
-    name: openebs-ndm
-    openebs.io/component-name: ndm
-    openebs.io/version: 1.3.0
-spec:
-  selector:
-    matchLabels:
-      name: openebs-ndm
-      openebs.io/component-name: ndm
-  updateStrategy:
-    type: RollingUpdate
-  template:
-    metadata:
-      labels:
-        name: openebs-ndm
-        openebs.io/component-name: ndm
-        openebs.io/version: 1.3.0
-    spec:
-      nodeSelector:
-        ""openebs.io/nodegroup"": ""storage-node""
-      serviceAccountName: openebs-maya-operator
-      hostNetwork: true
-      containers:
-      - name: node-disk-manager
-        image: quay.io/openebs/node-disk-manager-amd64:v0.4.3
-        imagePullPolicy: Always
-        securityContext:
-          privileged: true
-        volumeMounts:
-        - name: config
-          mountPath: /host/node-disk-manager.config
-          subPath: node-disk-manager.config
-          readOnly: true
-        - name: udev
-          mountPath: /run/udev
-        - name: procmount
-          mountPath: /host/proc
-          readOnly: true
-        - name: sparsepath
-          mountPath: /var/openebs/sparse
-        env:
-        - name: NAMESPACE
-          valueFrom:
-            fieldRef:
-              fieldPath: metadata.namespace
-        - name: NODE_NAME
-          valueFrom:
-            fieldRef:
-              fieldPath: spec.nodeName
-        - name: SPARSE_FILE_DIR
-          value: ""/var/openebs/sparse""
-        - name: SPARSE_FILE_SIZE
-          value: ""10737418240""
-        - name: SPARSE_FILE_COUNT
-          value: ""3""
-        livenessProbe:
-          exec:
-            command:
-            - pgrep
-            - "".*ndm""
-          initialDelaySeconds: 30
-          periodSeconds: 60
-      volumes:
-      - name: config
-        configMap:
-          name: openebs-ndm-config
-      - name: udev
-        hostPath:
-          path: /run/udev
-          type: Directory
-      - name: procmount
-        hostPath:
-          path: /proc
-          type: Directory
-      - name: sparsepath
-        hostPath:
-          path: /var/openebs/sparse
----
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: openebs-ndm-operator
-  namespace: openebs
-  labels:
-    name: openebs-ndm-operator
-    openebs.io/component-name: ndm-operator
-    openebs.io/version: 1.3.0
-spec:
-  selector:
-    matchLabels:
-      name: openebs-ndm-operator
-      openebs.io/component-name: ndm-operator
-  replicas: 1
-  strategy:
-    type: Recreate
-  template:
-    metadata:
-      labels:
-        name: openebs-ndm-operator
-        openebs.io/component-name: ndm-operator
-        openebs.io/version: 1.3.0
-    spec:
-      serviceAccountName: openebs-maya-operator
-      containers:
-        - name: node-disk-operator
-          image: quay.io/openebs/node-disk-operator-amd64:v0.4.3
-          imagePullPolicy: Always
-          readinessProbe:
-            exec:
-              command:
-                - stat
-                - /tmp/operator-sdk-ready
-            initialDelaySeconds: 4
-            periodSeconds: 10
-            failureThreshold: 1
-          env:
-            - name: WATCH_NAMESPACE
-              valueFrom:
-                fieldRef:
-                  fieldPath: metadata.namespace
-            - name: POD_NAME
-              valueFrom:
-                fieldRef:
-                  fieldPath: metadata.name
-            # the service account of the ndm-operator pod
-            - name: SERVICE_ACCOUNT
-              valueFrom:
-                fieldRef:
-                  fieldPath: spec.serviceAccountName
-            - name: OPERATOR_NAME
-              value: ""node-disk-operator""
-            - name: CLEANUP_JOB_IMAGE
-              value: ""quay.io/openebs/linux-utils:3.9""
----
-apiVersion: v1
-kind: Secret
-metadata:
-  name: admission-server-certs
-  namespace: openebs
-  labels:
-    app: admission-webhook
-    openebs.io/component-name: admission-webhook
-type: Opaque
-data:
-  cert.pem: <...pem...>
-  key.pem: <...pem...>
----
-apiVersion: v1
-kind: Service
-metadata:
-  name: admission-server-svc
-  namespace: openebs
-  labels:
-    app: admission-webhook
-    openebs.io/component-name: admission-webhook-svc
-spec:
-  ports:
-  - port: 443
-    targetPort: 443
-  selector:
-    app: admission-webhook
----
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: openebs-admission-server
-  namespace: openebs
-  labels:
-    app: admission-webhook
-    openebs.io/component-name: admission-webhook
-    openebs.io/version: 1.3.0
-spec:
-  replicas: 1
-  strategy:
-    type: Recreate
-    rollingUpdate: null
-  selector:
-    matchLabels:
-      app: admission-webhook
-  template:
-    metadata:
-      labels:
-        app: admission-webhook
-        openebs.io/component-name: admission-webhook
-        openebs.io/version: 1.3.0
-    spec:
-      serviceAccountName: openebs-maya-operator
-      containers:
-        - name: admission-webhook
-          image: quay.io/openebs/admission-server:1.3.0
-          imagePullPolicy: IfNotPresent
-          args:
-            - -tlsCertFile=/etc/webhook/certs/cert.pem
-            - -tlsKeyFile=/etc/webhook/certs/key.pem
-            - -alsologtostderr
-            - -v=2
-            - 2>&1
-          volumeMounts:
-            - name: webhook-certs
-              mountPath: /etc/webhook/certs
-              readOnly: true
-      volumes:
-        - name: webhook-certs
-          secret:
-            secretName: admission-server-certs
----
-apiVersion: admissionregistration.k8s.io/v1beta1
-kind: ValidatingWebhookConfiguration
-metadata:
-  name: validation-webhook-cfg
-  labels:
-    app: admission-webhook
-    openebs.io/component-name: admission-webhook
-webhooks:
-  # failurePolicy Fail means that an error calling the webhook causes the admission to fail.
-  - name: admission-webhook.openebs.io
-    failurePolicy: Ignore
-    clientConfig:
-      service:
-        name: admission-server-svc
-        namespace: openebs
-        path: ""/validate""
-      caBundle: <...ca..>
-    rules:
-      - operations: [ ""CREATE"", ""DELETE"" ]
-        apiGroups: [""*""]
-        apiVersions: [""*""]
-        resources: [""persistentvolumeclaims""]
-      - operations: [ ""CREATE"", ""UPDATE"" ]
-        apiGroups: [""*""]
-        apiVersions: [""*""]
-        resources: [""cstorpoolclusters""]
----
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: openebs-localpv-provisioner
-  namespace: openebs
-  labels:
-    name: openebs-localpv-provisioner
-    openebs.io/component-name: openebs-localpv-provisioner
-    openebs.io/version: 1.3.0
-spec:
-  selector:
-    matchLabels:
-      name: openebs-localpv-provisioner
-      openebs.io/component-name: openebs-localpv-provisioner
-  replicas: 1
-  strategy:
-    type: Recreate
-  template:
-    metadata:
-      labels:
-        name: openebs-localpv-provisioner
-        openebs.io/component-name: openebs-localpv-provisioner
-        openebs.io/version: 1.3.0
-    spec:
-      serviceAccountName: openebs-maya-operator
-      containers:
-      - name: openebs-provisioner-hostpath
-        imagePullPolicy: Always
-        image: quay.io/openebs/provisioner-localpv:1.3.0
-        env:
-        - name: NODE_NAME
-          valueFrom:
-            fieldRef:
-              fieldPath: spec.nodeName
-        - name: OPENEBS_NAMESPACE
-          valueFrom:
-            fieldRef:
-              fieldPath: metadata.namespace
-        - name: OPENEBS_IO_ENABLE_ANALYTICS
-          value: ""true""
-        - name: OPENEBS_IO_INSTALLER_TYPE
-          value: ""openebs-operator""
-        - name: OPENEBS_IO_HELPER_IMAGE
-          value: ""quay.io/openebs/openebs-tools:3.8""
-        livenessProbe:
-          exec:
-            command:
-            - pgrep
-            - "".*localpv""
-          initialDelaySeconds: 30
-          periodSeconds: 60
-
-I am on Kubernetes: 
-Client Version: version.Info{Major:""1"", Minor:""16"", GitVersion:""v1.16.2"", GitCommit:""c97fe5036ef3df2967d086711e6c0c405941e14b"", GitTreeState:""clean"", BuildDate:""2019-10-15T19:18:23Z"", GoVersion:""go1.12.10"", Compiler:""gc"", Platform:""linux/amd64""}
-Server Version: version.Info{Major:""1"", Minor:""16"", GitVersion:""v1.16.2"", GitCommit:""c97fe5036ef3df2967d086711e6c0c405941e14b"", GitTreeState:""clean"", BuildDate:""2019-10-15T19:09:08Z"", GoVersion:""go1.12.10"", Compiler:""gc"", Platform:""linux/amd64""}
-
-How can I investigate why it takes so long?
-What could give me some more insight?
-","1. Spoke to Peter over Slack. The following changes helped to improve the performance:
-
-Increasing the queue size of the iSCSI Initiator
-Increase the cpu from 2 to 6 for each of the three nodes 
-Create my own sc to use a ext3 instead of the default ext4.
-
-With the above changes, the subsequent writes numbers were around 140-150 MB/s.
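-For the third point, a sketch of such a StorageClass in the OpenEBS 1.x style used here (the StoragePoolClaim name is an assumption; point it at your own SPC):
-apiVersion: storage.k8s.io/v1
-kind: StorageClass
-metadata:
-  name: openebs-cstor-ext3
-  annotations:
-    openebs.io/cas-type: cstor
-    cas.openebs.io/config: |
-      - name: StoragePoolClaim
-        value: ""cstor-sparse-pool""
-      - name: ReplicaCount
-        value: ""3""
-      - name: FSType
-        value: ""ext3""
-provisioner: openebs.io/provisioner-iscsi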
-",OpenEBS
-"I am trying to run OpenEBS on Minikube v1.29.0 with --driver=docker and --kubernetes-version=v1.23.12. I have installed OpenEBS using the following command:
-kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml
-
-However, the openebs-ndm pod is stuck in ContainerCreating status.
-When I run kubectl describe pod openebs-ndm-bbj6s -n openebs, I get the following error message:
-Events:
-  Type     Reason       Age                From               Message
-  ----     ------       ----               ----               -------
-  Normal   Scheduled    51s                default-scheduler  Successfully assigned openebs/openebs-ndm-bbj6s to minikube
-  Warning  FailedMount  19s (x7 over 51s)  kubelet            MountVolume.SetUp failed for volume ""udev"" : hostPath type check failed: /run/udev is not a directory
-
-I have tried installing udev as suggested here on my host but it didn't work. Any ideas on how to solve this issue?
-","1. If /run/udev is available in a local machine and not present in minkube cluster, then try to mount that folder into minkube cluster by using the minkube mount command, because to run the OpenEBS properly it required access to /run/udev.
-#Syntax of minkube mount
-$ minikube start --mount-string=""source_path:destination_path"" --mount
-
-#In your case try something like this
-$ minikube start --mount-string=""/run/udev:/run/udev"" --mount
-
-This will mount the /run/udev to the minkube cluster. Now redeploy the pods and monitor the volume mount of the pod.
-Have a glance at a similar error reference in github issues.
-",OpenEBS
-"I am using OpenEBS 0.6 version and I want to do scale down/up of Jiva replica count for my different applications? Is it possible?
-","1. As of version 3.3.x, you can change the number of replicas by changing the JivaVolumePolicy as described in https://openebs.io/docs/3.3.x/user-guides/jiva/jiva-install#provisioning-jiva-volumes.
-For example, in a default install of openebs in a microk8s cluster, you can change the value of replicationFactor: 3 to desired value using the command:
-kubectl edit JivaVolumePolicy openebs-jiva-default-policy -n openebs
-
-
-2. Yes, we can scale OpenEBS Jiva replica count. Detailed steps are mentioned in the link below.
-https://docs.openebs.io/docs/next/tasks_volumeprovisioning.html
-",OpenEBS
-"Can I replicate data between Kubernetes PV into two separate clusters situated in different data centers?
-I have a cluster with associated PV running in Primary site. I have a separate cluster running in DR site.
-How do I continuously replicate data in primary site to DR site so that when application is running from from DR? The data written to PR PVs are available in DR.
-Application writes files to the PV like xls, csv etc.
-I can use any OSS storage orchestrator like openebs, rook, storageos etc.
-Database is outside of kubernetes.
-","1. Narain is right. Kubernetes doesn't contain any functionality that would allow you to synchronize two PVs used by two different clusters. So you would need to find your own solution to synchronize those two filesystems. It can be an existing solution like lsyncd, proposed in this thread or any custom solution like the above mentioned rsync which can be wrapped into a simple bash script and run periodically in cron.
-
-2. Forget Kubernetes for a moment. At the end of the day, you are talking about syncing files between two storage systems. Mounting them into Kubernetes as PVs is just your choice. So it can be as simple as an rsync setup between the two storage systems.
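-A minimal sketch of that idea (paths and host are placeholders), run periodically from cron on a machine that can reach both mount points:
-# e.g. every 5 minutes: */5 * * * * /usr/local/bin/sync-pv.sh
-rsync -az --delete /mnt/primary-pv/ dr-host:/mnt/dr-pv/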
-
-3. You can replicate the same PV across different nodes within the same cluster using OpenEBS Replicated Volumes, as long as you are using a suitable OpenEBS engine.
-https://openebs.io/docs/#replicated-volumes
-",OpenEBS
-"I've been updating some of my old code and answers with Swift 3 but when I got to Swift Strings and Indexing with substrings things got confusing. 
-Specifically I was trying the following:
-let str = ""Hello, playground""
-let prefixRange = str.startIndex..<str.startIndex.advancedBy(5)
-let prefix = str.substringWithRange(prefixRange)
-
-where the second line was giving me the following error
-
-Value of type 'String' has no member 'substringWithRange'
-
-I see that String does have the following methods now:
-str.substring(to: String.Index)
-str.substring(from: String.Index)
-str.substring(with: Range<String.Index>)
-
-These were really confusing me at first, so I started playing around with index and range. This is a follow-up question and answer for substring. I am adding an answer below to show how they are used.
-","1. 
-All of the following examples use
-var str = ""Hello, playground""
-
-Swift 4
-Strings got a pretty big overhaul in Swift 4. When you get some substring from a String now, you get a Substring type back rather than a String. Why is this? Strings are value types in Swift. That means if you use one String to make a new one, then it has to be copied over. This is good for stability (no one else is going to change it without your knowledge) but bad for efficiency.
-A Substring, on the other hand, is a reference back to the original String from which it came. Here is an image from the documentation illustrating that.
-
-No copying is needed so it is much more efficient to use. However, imagine you got a ten character Substring from a million character String. Because the Substring is referencing the String, the system would have to hold on to the entire String for as long as the Substring is around. Thus, whenever you are done manipulating your Substring, convert it to a String.
-let myString = String(mySubstring)
-
-This will copy just the substring over and the memory holding old String can be reclaimed. Substrings (as a type) are meant to be short lived.
-Another big improvement in Swift 4 is that Strings are Collections (again). That means that whatever you can do to a Collection, you can do to a String (use subscripts, iterate over the characters, filter, etc).
-The following examples show how to get a substring in Swift.
-Getting substrings
-You can get a substring from a string by using subscripts or a number of other methods (for example, prefix, suffix, split). You still need to use String.Index and not an Int index for the range, though. (See my other answer if you need help with that.)
-Beginning of a string
-You can use a subscript (note the Swift 4 one-sided range):
-let index = str.index(str.startIndex, offsetBy: 5)
-let mySubstring = str[..<index] // Hello
-
-or prefix:
-let index = str.index(str.startIndex, offsetBy: 5)
-let mySubstring = str.prefix(upTo: index) // Hello
-
-or even easier:
-let mySubstring = str.prefix(5) // Hello
-
-End of a string
-Using subscripts:
-let index = str.index(str.endIndex, offsetBy: -10)
-let mySubstring = str[index...] // playground
-
-or suffix:
-let index = str.index(str.endIndex, offsetBy: -10)
-let mySubstring = str.suffix(from: index) // playground
-
-or even easier:
-let mySubstring = str.suffix(10) // playground
-
-Note that when using the suffix(from: index) I had to count back from the end by using -10. That is not necessary when just using suffix(x), which just takes the last x characters of a String.
-Range in a string
-Again we simply use subscripts here.
-let start = str.index(str.startIndex, offsetBy: 7)
-let end = str.index(str.endIndex, offsetBy: -6)
-let range = start..<end
-
-let mySubstring = str[range]  // play
-
-Converting Substring to String
-Don't forget, when you are ready to save your substring, you should convert it to a String so that the old string's memory can be cleaned up.
-let myString = String(mySubstring)
-
-Using an Int index extension?
-I'm hesitant to use an Int based index extension after reading the article Strings in Swift 3 by Airspeed Velocity and Ole Begemann. Although in Swift 4, Strings are collections, the Swift team purposely hasn't used Int indexes. It is still String.Index. This has to do with Swift Characters being composed of varying numbers of Unicode codepoints. The actual index has to be uniquely calculated for every string.
-I have to say, I hope the Swift team finds a way to abstract away String.Index in the future. But until then, I am choosing to use their API. It helps me to remember that String manipulations are not just simple Int index lookups.
-
-2. I'm really frustrated at Swift's String access model: everything has to be an Index. All I want is to access the i-th character of the string using Int, not the clumsy index and advancing (which happens to change with every major release). So I made an extension to String:
-extension String {
-    func index(from: Int) -> Index {
-        return self.index(startIndex, offsetBy: from)
-    }
-
-    func substring(from: Int) -> String {
-        let fromIndex = index(from: from)
-        return String(self[fromIndex...])
-    }
-
-    func substring(to: Int) -> String {
-        let toIndex = index(from: to)
-        return String(self[..<toIndex])
-    }
-
-    func substring(with r: Range<Int>) -> String {
-        let startIndex = index(from: r.lowerBound)
-        let endIndex = index(from: r.upperBound)
-        return String(self[startIndex..<endIndex])
-    }
-}
-
-let str = ""Hello, playground""
-print(str.substring(from: 7))         // playground
-print(str.substring(to: 5))           // Hello
-print(str.substring(with: 7..<11))    // play
-
-
-3. Swift 5 Extension:
-extension String {
-    subscript(_ range: CountableRange<Int>) -> String {
-        let start = index(startIndex, offsetBy: max(0, range.lowerBound))
-        let end = index(start, offsetBy: min(self.count - range.lowerBound, 
-                                             range.upperBound - range.lowerBound))
-        return String(self[start..<end])
-    }
-
-    subscript(_ range: CountablePartialRangeFrom<Int>) -> String {
-        let start = index(startIndex, offsetBy: max(0, range.lowerBound))
-         return String(self[start...])
-    }
-}
-
-Usage: 
-let s = ""hello""
-s[0..<3] // ""hel""
-s[3...]  // ""lo""
-
-Or unicode:
-let s = ""😎🤣😋""
-s[0..<1] // ""😎""
-
-",Swift
-"This question is about a software tool primarily used by programmers and is on-topic for StackOverflow. Please comment and provide an explanation if you feel otherwise
-I have an Azure Kubernetes cluster with Velero installed. A Service Principal was created for Velero, per option 1 of the instructions.
-Velero was working fine until the credentials for the Service Principal were reset. Now the scheduled backups are failing.
-NAME                                    STATUS      ERRORS   WARNINGS   CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
-daily-entire-cluster-20210727030055     Failed      0        0          2021-07-26 23:00:55 -0000       13d       default            <none>
-
-How can I update the secret for Velero?
-","1. 1. Update credentials file
-First, update your credentials file (for most providers, this is credentials-velero and the contents are described in the plugin installation instructions: AWS, Azure, GCP)
-2. Update secret
-Now update the velero secret. On linux:
-kubectl patch -n velero secret cloud-credentials -p '{""data"": {""cloud"": ""'$(base64 -w 0 credentials-velero)'""}}'
-
-
-patch tells kubectl to update a resource by merging the provided data
--n velero tells kubectl to use the velero namespace
-secret is the resource type
-cloud-credentials is the name of the secret used by Velero to store credentials
--p  specifies that the next word is the patch data. It's more common to patch using JSON rather than YAML
-'{""data"": {""cloud"": ""<your-base64-encoded-secret-will-go-here>""}}' this is the JSON data that matches the existing structure of the Velero secret in Kubernetes. <your-base64-encoded-secret-will-go-here> is a placeholder for the command we'll insert.
-$(base64 -w 0 credentials-velero) reads the file credentials-velero in the current directory, turns off word wrapping of the output (-w 0), BASE64-encodes the contents of the file, and inserts the result in the data.
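-Optional follow-up (illustrative, not part of the original steps): depending on your setup, the Velero pod may need a restart to pick up the refreshed credentials, and a test backup confirms that everything works again. The backup name below is a placeholder.
-kubectl -n velero rollout restart deployment velero
-velero backup create credentials-check --wait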
-
-",Velero
-"I am working on a applied math project that utilizes Metal API for acceleration, I want to know if I could use containerd to encapsulate Metal API calls in macOS. Currently, all my apple machines are x86. I also have some linux machines and win machines. I plan to use K8s to orgranize all the machines, hence it would be great if macOS allows me to use Metal API inside containerd.
-","1. You may take a look at this project: https://github.com/makllama/makllama
-It uses virtual-kubelet + containerd on macOS to distribute LLM tasks.
-",containerd
-"I am running Rancher-Desktop over Ubuntu:22.04, retooling Docker in replacement of Rancher Desktop.
-Every works perfectly except, in the past where Docker allows me to run DinD (Docker in Docker) by mounting the docker.sock -v /var/run/docker.sock:/var/run/docker.sock then inside my running container  e.g. docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock ${SOMEIMAGEWITHDOCKER} /bin/bash, I could talk dockerd inside the container, similar to whats is working on my host machine.
-I thought the same can be done for Rancher's Desktop containerd, if I could mount the cointainerd socket to an image which contains nerdctl tool.
-However, I've searched and can't seem to find a way in doing so.
-
-The default ""/run/k3s/containerd/containerd.sock"" does not exist in Ubuntu nor I can find anything named containerd.sock
-
-","1. Using Containerd on RancherDesktop, you can't get access to the socket file by default as it runs everything inside a VM. Rancher Desktop provides a utilities directory, usually at ~/.rd that has tools for interacting with it, one of with is rdctl, a proxy tool for working with the VM. If you inspect the version of nerdctl (~/.rd/bin/nerdctl), it just proxies the commands into the VM using ~/.rd/bin/rdctl, which itself runs shell commands inside the VM, so it's not exposing containerd to the host system.
-Depending on your Host OS (Ubuntu?) and virtualization system, you could rdctl shell into the VM and change settings. On MacOS, for instance, the /Users directory is shared, so you could reconfigure containerd to put it's socket file in /Users/RancherGuest/containerd.sock, and this would be exposed on the host OS. You could also open containerd up to TCP connections and forward the port from the Host OS. Any of these ""solutions"" though would deviate from the base Rancher Desktop setup, and would get reverted when you updated versions.
-Alternatively, if you set Rancher Desktop to use MobyD instead of containerd, then the file ~/.rd/docker.sock becomes available. Do note that this file exists but is unused if you're in containerd mode (I embarressingly lost some time figuring this out...).
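-A rough sketch of the dockerd route (an assumption based on the paragraph above, not something verified in the original answer): once Rancher Desktop runs the moby/dockerd runtime, the classic socket mount works again, because the bind mount is resolved inside the Rancher Desktop VM where /var/run/docker.sock lives.
-# point the host docker CLI at Rancher Desktop's forwarded socket
-export DOCKER_HOST=unix://$HOME/.rd/docker.sock
-# DinD-style: give the inner container access to the same daemon
-docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock docker:cli docker ps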
-Note: I know this is an older question, but it was a top result when I was searching for a similar problem, so figured I'd share my findings here as well
-",containerd
-"I am using the following Golang code to pull containerd image.
-package main
-
-import (
-    ""context""
-    ""log""
-
-    ""github.com/containerd/containerd""
-    ""github.com/containerd/containerd/namespaces""
-)
-
-func main() {
-    // create connection to containerd
-    client, err := containerd.New(""/run/containerd/containerd.sock"")
-    if err != nil {
-        log.Fatal(err)
-    }
-    defer client.Close()
-
-    // use k8s.io namespace
-    ctx := namespaces.WithNamespace(context.Background(), ""k8s.io"")
-    
-    // pull image
-    image, err := client.Pull(ctx, ""docker.io/library/openjdk:17"", containerd.WithPullUnpack)
-    if err != nil {
-        log.Fatal(err)
-    }
-
-    log.Printf("" Pulled image: %s"", image.Name())
-}
-
-The code worked well and silently pulled the image for me.
-The question is, can I show the progress (e.g print BytesPulled/BytesTotal on console) during the pull? Any idea will be appreciated.
-","1. Use github.com/schollz/progressbar/v3 to show progress like below:
-
-The code that I tried:
-package main
-
-import (
-    ""context""
-    ""fmt""
-    ""io""
-    ""os""
-
-    ""github.com/docker/docker/api/types""
-    ""github.com/docker/docker/client""
-    ""github.com/schollz/progressbar/v3""
-)
-
-func main() {
-    cli, err := client.NewClientWithOpts(client.FromEnv)
-    if err != nil {
-        panic(err)
-    }
-
-    if err := pullImage(cli, ""openjdk:8""); err != nil {
-        panic(err)
-    }
-
-    fmt.Println(""Java 8 image has been successfully pulled."")
-}
-
-func pullImage(cli *client.Client, imageName string) error {
-    out, err := cli.ImagePull(context.Background(), imageName, types.ImagePullOptions{})
-    if err != nil {
-        return err
-    }
-    defer out.Close()
-
-    bar := progressbar.DefaultBytes(-1, ""Pulling image"")
-    _, err = io.Copy(io.MultiWriter(os.Stdout, bar), out)
-    return err
-}
-
-",containerd
-"Kubernetes documentation describes pod as a wrapper around one or more containers. containers running inside of a pod share a set of namespaces (e.g. network) which makes me think namespaces are nested (I kind doubt that). What is the wrapper here from container runtime's perspective?
-Since containers are just processes constrained by namespaces, Cgroups e.g. Perhaps, pod is just the first container launched by Kubelet and the rest of containers are started and grouped by namespaces.
-","1. The main difference is networking, the network namespace is shared by all containers in the same Pod. Optionally, the process (pid) namespace can also be shared. That means containers in the same Pod all see the same localhost network (which is otherwise hidden from everything else, like normal for localhost) and optionally can send signals to processes in other containers.
-The idea is the Pods are groups of related containers, not really a wrapper per se but a set of containers that should always deploy together for whatever reason. Usually that's a primary container and then some sidecars providing support services (mesh routing, log collection, etc).
-
-2. A Pod is just a co-located group of containers and a Kubernetes object.
-Instead of deploying them separately, you can deploy them together as a pod of containers.
-Best practice is that you should not run multiple unrelated processes in a single container, and this is where the pod idea comes in. By running pods you group containers together and orchestrate them as a single object.
-Containers in a pod run in the same network namespace (IP address and port space), so you have to be careful not to have two processes use the same port.
-This differs when it comes to the filesystem, since each container's filesystem comes from its image. The filesystems are isolated unless the containers share a volume (a small example is sketched below).
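-A tiny illustration of that sharing (all names here are made up): two containers in one Pod, one shared emptyDir volume, same localhost network.
-cat <<EOF | kubectl apply -f -
-apiVersion: v1
-kind: Pod
-metadata:
-  name: shared-demo
-spec:
-  volumes:
-  - name: shared
-    emptyDir: {}
-  containers:
-  - name: writer
-    image: busybox
-    command:
-    - sh
-    - -c
-    - echo hello > /data/msg && sleep 3600
-    volumeMounts:
-    - name: shared
-      mountPath: /data
-  - name: reader
-    image: busybox
-    command:
-    - sh
-    - -c
-    - sleep 5 && cat /data/msg && sleep 3600
-    volumeMounts:
-    - name: shared
-      mountPath: /data
-EOF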
-
-3. Analogy: Think of a pod as your apartment. Your apartment has different rooms for different purposes, like a kitchen for cooking, a bedroom for sleeping, etc.
-These different rooms are the containers within your pod (apartment), each targeted at providing a different service.
-Naturally, all rooms (containers) within your apartment (pod) share the same network, aka the living space you walk through to go from one room to another.
-That makes your apartment a kind of wrapper for your rooms.
-",containerd
-"I have a kubernetes cluster running on 3 VMs and I enabled master nodes for pods. I also have docker private registry running on another VM with valid SSL certificates installed. I am using CRI-O in my kubernetes cluster. This is what I am doing
-
-VM with Jenkins server and kubctl configured so I can connect to the cluster remotely
-
-Separate VM specifically for docker registry. Bought SSL cert from Godaddy and added in /etc/docker/certs.d
-
-Created secret following this doc
-
-Added certs in /etc/crio/certs.d on all nodes including master and two worker nodes.
-I am able to pull and push images from my Jenkins VM, and Kubernetes also seems to be working, but only on the master node. The pod works perfectly fine on the master node, but the two worker nodes show a certificate error. They have ImagePullBackOff with the error below:
-    Failed to pull image ""imagehub.mydomain.com:443/iam-config-server:0.0.2"": rpc error: code = Unknown desc = pinging container registry imagehub.mydomain.com:443: Get ""https://imagehub.mydomain.com:443/v2/"": x509: certificate signed by unknown authority.
-
-
-
-It's a Spring Boot application and here is my deploy.yml:
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: iamconfigserver-deploy
-spec:
-  replicas: 3
-  selector:
-    matchLabels:
-      app: iam-config-server
-  minReadySeconds: 10
-  strategy:
-    type: RollingUpdate
-    rollingUpdate:
-      maxUnavailable: 1
-      maxSurge: 1
-  template:
-    metadata:
-      labels:
-        app: iam-config-server
-    spec:
-      containers:
-      - name: iamconfigserver-pod
-        image: imagehub.mydomain.com:443/iam-config-server:0.0.2
-        ports:
-        - containerPort: 8071
-      imagePullSecrets:
-      - name: regcred
-
-svc.yml
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: iamconfigserver-deploy
-spec:
-  replicas: 3
-  selector:
-    matchLabels:
-      app: iam-config-server
-  minReadySeconds: 10
-  strategy:
-    type: RollingUpdate
-    rollingUpdate:
-      maxUnavailable: 1
-      maxSurge: 1
-  template:
-    metadata:
-      labels:
-        app: iam-config-server
-    spec:
-      containers:
-      - name: iamconfigserver-pod
-        image: imagehub.mydomain.com:443/iam-config-server:0.0.2
-        ports:
-        - containerPort: 8071
-      imagePullSecrets:
-      - name: regcred
-
-I could run this on the master node and get the secret:
-kubectl get secret regcred --output=yaml
-
-
-apiVersion: v1
-data:
-  .dockerconfigjson: ew..............Cgl9Cn0=
-kind: Secret
-metadata:
-  creationTimestamp: ""2022-03-24T06:20:44Z""
-  name: regcred
-  namespace: default
-  resourceVersion: ""471374""
-  uid: 2e6ba870-asf3-33dd-8340-sdfsafsdfsd4
-type: kubernetes.io/dockerconfigjson
-
-I am not sure what I am missing here. My Kubernetes VMs are all running on a separate physical server, including the master node. I am still confused why pods are running successfully only on the master node. It's a development environment and I understand it's not ideal to run pods on the master node. Any help would be really appreciated. I am not sure if the location of the certs for CRI-O is accurate, but it is still working fine on the master node.
-","1. This helped me out:
-https://github.com/cri-o/cri-o/issues/1768
-https://github.com/Nordix/xcluster/tree/master/ovl/private-reg
-We need to define the local registry in the CRI-O conf file on the master and worker nodes.
-Then try to pull the image from the defined registry on each node; it should work.
-crictl -D pull registry-ip:port/imagename
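-For completeness, one step that is only implied above (treat the exact paths as an assumption and check them against your CRI-O version): CRI-O uses the containers/image stack, which looks for registry CA certificates under /etc/containers/certs.d/<registry>/ rather than /etc/docker/certs.d or /etc/crio/certs.d. Roughly, on every node:
-sudo mkdir -p /etc/containers/certs.d/imagehub.mydomain.com:443
-sudo cp ca.crt /etc/containers/certs.d/imagehub.mydomain.com:443/ca.crt
-sudo systemctl restart crio
-crictl -D pull imagehub.mydomain.com:443/iam-config-server:0.0.2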
-
-2. 1. Create an ECR Credential Helper Configuration
-Amazon ECR provides a credential helper that can be used to authenticate Docker and CRI-O with your ECR registry. Follow these steps to set it up:
-Update CRI-O Configuration
-Configure CRI-O to use the ECR credential helper by updating the ~/.docker/config.json file or creating one if it doesn't exist:
-mkdir -p ~/.docker
-cat <<EOF > ~/.docker/config.json
-{
-  ""credHelpers"": {
-    ""865246394951.dkr.ecr.eu-west-1.amazonaws.com"": ""ecr-login""
-  }
-}
-EOF
-
-Replace 65246391234.dkr.ecr.eu-west-1.amazonaws.com with your actual ECR registry.
-Restart CRI-O
-Restart the CRI-O service to apply the configuration changes:
-sudo systemctl restart crio
-
-2. Create Kubernetes Secret for ECR
-Create a Kubernetes secret for your ECR registry. First, get an authentication token from ECR:
-aws ecr get-login-password --region eu-west-1
-
-Then, create the Kubernetes secret:
-kubectl create secret docker-registry ecr-secret \
-  --docker-server=65246391234.dkr.ecr.eu-west-1.amazonaws.com \
-  --docker-username=AWS \
-  --docker-password=$(aws ecr get-login-password --region eu-west-1) \
-  --docker-email=<your-email>
-
-Replace <your-email> with your email address.
-3. Use the Secret in Your Pod Specification
-Update your pod specification to use the ecr-secret for pulling images:
-apiVersion: v1
-kind: Pod
-metadata:
-  name: my-private-pod
-spec:
-  containers:
-  - name: my-container
-    image: 65246391234.dkr.ecr.eu-west-1.amazonaws.com/app:v1
-  imagePullSecrets:
-  - name: ecr-secret
-
-4. Verify
-Deploy your pod and check the status:
-kubectl apply -f pod-spec.yaml
-kubectl get pods -w
-
-5. Additional Troubleshooting
-
-Check CRI-O Logs: If you encounter issues, check the CRI-O logs for detailed error messages:
-sudo journalctl -u crio -f
-
-
-
-",CRI-O
-"I'm trying to remove all unused images with specific name format from Kubernetes cluster like below.
-crictl images | grep -E -- 'foo|bar' | awk '{print \$3}' | xargs -n 1 crictl rmi
-
-But this one also deletes all the images with naming ""foo"" or ""bar"" even it's in use by container.
-Tried using ""crictl rmi -q"" but that deletes multiple other images which are not in the filter above.
-","1. Probably you want to run
-crictl rmi --prune
-
-You need a rather current crictl for that. From the help:
-$ crictl rmi --help
-NAME:
-   crictl rmi - Remove one or more images
-
-USAGE:
-   crictl rmi [command options] IMAGE-ID [IMAGE-ID...]
-
-OPTIONS:
-   --all, -a    Remove all images (default: false)
-   --prune, -q  Remove all unused images (default: false)
-   --help, -h   show help (default: false)
-
-
-2. If your version of crictl does not have the --prune option, you can use the following command:
-yum -y install jq
-comm -23 <(crictl images  -q | sort) <(crictl ps -q | xargs -n 1 crictl inspect -o json | jq -r '.info.config.image.image' | sort) | xargs crictl rmi
-
-comm usage:
-# 1 < 2
-comm -13 <(cat 1) <(cat 2)
- 
-# 1 > 2
-comm -23 <(cat 1) <(cat 2)
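-If you specifically want the original name filter plus the ""unused only"" check, the two ideas can be combined; a rough sketch (untested, and it inherits the assumption from the jq-based command earlier in this answer that the image IDs and the jq output are comparable on your runtime):
-comm -23 \
-  <(crictl images | grep -E -- 'foo|bar' | awk '{print $3}' | sort) \
-  <(crictl ps -q | xargs -n 1 crictl inspect -o json | jq -r '.info.config.image.image' | sort) \
-  | xargs -r crictl rmi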
-
-",CRI-O
-"I recently gathered a 1 master 3 worker cluster on Naver cloud platform.
-However, I am stuck deploying a metrics-server and stuck here for weeks.
-In short, my kube-apiserver cannot reach metrics-server apiservice (v1beta1)
-error log from: kubectl logs kube-apiserver-master -n kube-system:
-
-E0229 08:54:20.172156       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://192.168.182.16:10250/apis/metrics.k8s.io/v1beta1: Get ""https://192.168.182.16:10250/apis/metrics.k8s.io/v1beta1"": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
-E0229 08:08:18.569700       1 controller.go:113] loading OpenAPI spec for ""v1beta1.metrics.k8s.io"" failed with: Error, could not get list of group versions for APIService)
-
-So I searched the web and tried adding the ""--kubelet-insecure-tls"" flag to the metrics-server deployment, or adding ""hostNetwork: true"" under the pod spec of the same manifest to which I added the flag above (which ended up in a CrashLoopBackOff state for the metrics-server pod).
-I am not sure what the cause of the metrics-server APIService not working is. Maybe the ""v1beta1"" API is deprecated or too old for k8s 1.28.x?
-Another speculation is that metrics-server does not deploy when I set ""hostNetwork: true"" in the deployment manifest of metrics-server.
-My master node OS is Ubuntu 20.04, two of the worker nodes run the same, and one worker node runs Ubuntu 18.04.
-Kubernetes version is 1.28.x across the four nodes, using CRI-O as the CRI. Also using Calico as the CNI (the reason I added ""hostNetwork: true"" to the metrics-server deployment manifest, which didn't work).
-metrics-server version that I am trying to deploy is the latest version 0.7.x.
-Here is the components.yaml that I use to deploy metrics-server via ""k apply -f components.yaml"":
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  labels:
-    k8s-app: metrics-server
-  name: metrics-server
-  namespace: kube-system
----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRole
-metadata:
-  labels:
-    k8s-app: metrics-server
-    rbac.authorization.k8s.io/aggregate-to-admin: ""true""
-    rbac.authorization.k8s.io/aggregate-to-edit: ""true""
-    rbac.authorization.k8s.io/aggregate-to-view: ""true""
-  name: system:aggregated-metrics-reader
-rules:
-- apiGroups:
-  - metrics.k8s.io
-  resources:
-  - pods
-  - nodes
-  verbs:
-  - get
-  - list
-  - watch
----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRole
-metadata:
-  labels:
-    k8s-app: metrics-server
-  name: system:metrics-server
-rules:
-- apiGroups:
-  - """"
-  resources:
-  - nodes/metrics
-  verbs:
-  - get
-- apiGroups:
-  - """"
-  resources:
-  - pods
-  - nodes
-  verbs:
-  - get
-  - list
-  - watch
----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: RoleBinding
-metadata:
-  labels:
-    k8s-app: metrics-server
-  name: metrics-server-auth-reader
-  namespace: kube-system
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: Role
-  name: extension-apiserver-authentication-reader
-subjects:
-- kind: ServiceAccount
-  name: metrics-server
-  namespace: kube-system
----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRoleBinding
-metadata:
-  labels:
-    k8s-app: metrics-server
-  name: metrics-server:system:auth-delegator
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: system:auth-delegator
-subjects:
-- kind: ServiceAccount
-  name: metrics-server
-  namespace: kube-system
----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRoleBinding
-metadata:
-  labels:
-    k8s-app: metrics-server
-  name: system:metrics-server
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: system:metrics-server
-subjects:
-- kind: ServiceAccount
-  name: metrics-server
-  namespace: kube-system
----
-apiVersion: v1
-kind: Service
-metadata:
-  labels:
-    k8s-app: metrics-server
-  name: metrics-server
-  namespace: kube-system
-spec:
-  ports:
-  - name: https
-    port: 443
-    protocol: TCP
-    targetPort: https
-  selector:
-    k8s-app: metrics-server
----
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  labels:
-    k8s-app: metrics-server
-  name: metrics-server
-  namespace: kube-system
-spec:
-  selector:
-    matchLabels:
-      k8s-app: metrics-server
-  strategy:
-    rollingUpdate:
-      maxUnavailable: 0
-  template:
-    metadata:
-      labels:
-        k8s-app: metrics-server
-    spec:
-      containers:
-      - args:
-        - --cert-dir=/tmp
-        - --secure-port=10250
-        #- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
-        - --kubelet-preferred-address-types=InternalDNS,InternalIP,ExternalDNS,ExternalIP,Hostname
-        - --kubelet-use-node-status-port
-        #- --metric-resolution=15s
-        - --metric-resolution=30s
-        - --kubelet-insecure-tls
-        # command:
-        # - /metrics-server
-        # - --kubelet-insecure-tls
-        # - --kubelet-preferred-address-types=InternalIP
-        image: registry.k8s.io/metrics-server/metrics-server:v0.7.0
-        imagePullPolicy: IfNotPresent
-        livenessProbe:
-          failureThreshold: 3
-          httpGet:
-            path: /livez
-            port: https
-            scheme: HTTPS
-          periodSeconds: 10
-        name: metrics-server
-        ports:
-        - containerPort: 10250
-          name: https
-          protocol: TCP
-        readinessProbe:
-          failureThreshold: 3
-          httpGet:
-            path: /readyz
-            port: https
-            scheme: HTTPS
-          initialDelaySeconds: 20
-          periodSeconds: 10
-        resources:
-          requests:
-            cpu: 100m
-            memory: 200Mi
-        securityContext:
-          allowPrivilegeEscalation: false
-          capabilities:
-            drop:
-            - ALL
-          readOnlyRootFilesystem: true
-          runAsNonRoot: true
-          runAsUser: 1000
-          seccompProfile:
-            type: RuntimeDefault
-        volumeMounts:
-        - mountPath: /tmp
-          name: tmp-dir
-      nodeSelector:
-        kubernetes.io/os: linux
-      # below option was added for using Calico CNI
-      hostNetwork: true
-      priorityClassName: system-cluster-critical
-      serviceAccountName: metrics-server
-      volumes:
-      - emptyDir: {}
-        name: tmp-dir
----
-apiVersion: apiregistration.k8s.io/v1
-kind: APIService
-metadata:
-  labels:
-    k8s-app: metrics-server
-  name: v1beta1.metrics.k8s.io
-spec:
-  group: metrics.k8s.io
-  groupPriorityMinimum: 100
-  # insecureSkipTLSVerify: true
-  insecureSkipTLSVerify: false
-  service:
-    name: metrics-server
-    namespace: kube-system
-  version: v1beta1
-  versionPriority: 100
-
-By request on the comment, here is /etc/kubernetes/manifests/kube-apiserver.yaml args:
-these are the args in /etc/kubernetes/manifests/kube-apiserver.yaml:
-
-apiVersion: v1
-kind: Pod
-metadata:
-  annotations:
-    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 10.0.0.7:6443
-  creationTimestamp: null
-  labels:
-    component: kube-apiserver
-    tier: control-plane
-  name: kube-apiserver
-  namespace: kube-system
-spec:
-  containers:
-  - command:
-    - kube-apiserver
-    - --advertise-address=10.0.0.7
-    - --allow-privileged=true
-    - --authorization-mode=Node,RBAC
-    - --client-ca-file=/etc/kubernetes/pki/ca.crt
-    - --enable-admission-plugins=NodeRestriction
-    - --enable-bootstrap-token-auth=true
-    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
-    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
-    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
-    - --etcd-servers=https://127.0.0.1:2379
-    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
-    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
-    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
-    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
-    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
-    - --requestheader-allowed-names=front-proxy-client
-    #- --requestheader-allowed-names=aggregator
-    - --enable-aggregator-routing=true
-    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
-    - --requestheader-extra-headers-prefix=X-Remote-Extra-
-    - --requestheader-group-headers=X-Remote-Group
-    - --requestheader-username-headers=X-Remote-User
-    - --secure-port=6443
-    - --service-account-issuer=https://kubernetes.default.svc.cluster.local
-    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
-    - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
-    - --service-cluster-ip-range=10.96.0.0/12
-    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
-    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
-    - --v=4
-
-In addition, I have initialized kubeadm via this command:
-$ sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --cri-socket unix:///var/run/crio/crio.sock --kubernetes-version v1.28.2
-Best,
-","1. Kubelet certificate needs to be signed by cluster Certificate Authority (or disable certificate validation by passing --kubelet-insecure-tls to Metrics Server)
-Reference: kubernetes metrics server
-Please see the solution given in How to troubleshoot metrics-server on kubeadm? on adding --kubelet-insecure-tls argument
-Under spec.template.spec.containers, on the same level as name: metrics-server add
-args:
-- --kubelet-insecure-tls
-- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
-- --metric-resolution=30s
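-If you prefer not to edit the manifest by hand, the same flag can be appended with a JSON patch (the container index 0 is an assumption based on the manifest above, which has a single container):
-kubectl -n kube-system patch deployment metrics-server --type=json \
-  -p='[{""op"":""add"",""path"":""/spec/template/spec/containers/0/args/-"",""value"":""--kubelet-insecure-tls""}]'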
-
-",CRI-O
-"""Serverless"" infrastructure like AWS lambda makes use of Coordinated Restore at Checkpoint to improve startup time of java programs.
-AWS documentation states that
-
-The state of connections that your function establishes during the initialization phase isn't guaranteed when Lambda resumes your function from a snapshot. Validate the state of your network connections and re-establish them as necessary. In most cases, network connections that an AWS SDK establishes automatically resume. For other connections, review the best practices.
-
-Spring docs mention
-
-Leveraging checkpoint/restore of a running application typically requires additional lifecycle management to gracefully stop and start using resources like files or sockets and stop active threads.
-
-I am wondering what I need to do when using HttpClient from the standard library or CloseableHttpClient from Apache to deal with this.
-Let's say I am performing an HTTP request before the snapshot to perform client priming. What do I need to do in the afterRestore hook to avoid any network related problems?
-@Override
-public void beforeCheckpoint(org.crac.Context<? extends Resource> context) throws Exception {
-    var response = performPrimingRequest(httpClient);
-    System.out.println(response.statusCode());
-}
-
-A connection that was established will be closed, and the destination IP might not be valid anymore. So I assume I should recreate the client or at least clear the connection pool. Is this possible with the standard Java client? Is anything else required?
-","1. I am putting this answer as research result, I havent used CRaC before.
-CRaC looks like a very thin layer and from this line I understand that http client or any thread will get a nudge to go on.
-I would suggest to put a retry logic. After restore, probably the http connection will hang first then get timeout but in second try you may get a response. This connection may require authentication that will need another refresh on other parts but you got the picture.
-On the other hand Micronaut looks like having more investment on CRaC. And for spring boot you can check this demo.
-
-So I assume recreate the client or at least clear the connection pool. Is this possible with the standard JavaClient? Anything else required?
-
-Yes probably a good refresh will be required after restore. You can use actuators like the doc says and here is another sample which helps about db connections in spring.
-",Firecracker
-"I am creating a wrapper for firecracker.
-To start a VM with firecracker on command line, you have to pass a socket file to the firecracker executable. Something like this:
-firecracker --api-sock /path/to/file.socket
-
-Then from another terminal, you can make requests to this server/socket something like this:
-curl --unix-socket /tmp/firecracker.socket -XPUT 'http://localhost/actions' -d '{""action_type"": ""SendCtrlAltDel""}'
-
-I am trying to replicate the same thing from within a Gin server.
-I have an endpoint which does the first work, which is to start a server. A  minimal code looks like this:
-cmd := exec.Command(""firecracker"", ""--api-sock"", ""/path/to/file.socket"")
-
-err := cmd.Start()
-
-This endpoint starts the server and listens for any command. The problem is, I don't know how to use the socket file to make a PUT request to this server. I have found this on the web, but it does not make much sense to me.
-Here is some starter code which does not use any socket file.
-func BootSource(c *gin.Context) {
-    var body models.BootSource
-    c.BindJSON(&body)
-    bodyJson, _ := json.Marshal(body)
-
-    // initialize http client
-    client := &http.Client{}
-
-    // set the HTTP method, url, and request body
-    req, err := http.NewRequest(http.MethodPut, ""http://localhost/boot-source"", bytes.NewBuffer(bodyJson))
-    if err != nil {
-        panic(err)
-    }
-
-    // set the request header Content-Type for json
-    _, err = client.Do(req)
-    if err != nil {
-        panic(err)
-    }
-}
-
-How do I make this PUT request use the socket file?
-Please also note that I'm using Gin framework.
-","1. To do this, you'll need to override the Transport used by your http.Client to configure a function for it to use to create a connection to the socket:
-client := http.Client{
-  Transport: &http.Transport{
-    DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
-      return net.Dial(""unix"", ""/path/to/socket"") 
-    },
-  },
-}
-
-You can then use that client, and all requests made by it will use that connection. Usually for HTTP services exposed over a socket, the host used in the request is not important, so you can just use any value that makes sense to you, e.g.:
-client.Get(""http://firecracker/some/api/path"")
-
-However, as you are trying to use the Firecracker API, why not just use their SDK: https://github.com/firecracker-microvm/firecracker-go-sdk
-This will handle the set up of the connection for you, and prevent you needing to manually craft all of the requests.
-
-2. Extending the answer above, you can keep the HTTP defaults by cloning the default transport:
-    defaultTransport, ok := http.DefaultTransport.(*http.Transport)
-    if !ok {
-            panic(""http.DefaultTransport is not a *http.Transport"")
-    }
-    unixTransport := defaultTransport.Clone()
-    defaultDialContext := unixTransport.DialContext
-    unixTransport.DialContext = func(ctx context.Context, _, _ string) (net.Conn, error) {
-            return defaultDialContext(ctx, ""unix"", ""/path/to/socket"")
-    }
-    client := http.Client{Transport: unixTransport}
-    client.Get(""http://example.com"")
-
-",Firecracker
-"I'm trying to SSH into 2 different firecracker VMs on the same host. I am creating the configuration dynamically as seen below. Both VMs should be fully isolated on their own network. I have 2 IPs allocated (1 for TUN and 1 for the VM).
-I can SSH into VM1, but not VM2. Is my IP addressing logic incorrect? How can I properly understand this?
-#!/bin/bash
-
-generate_config() {
-  local machine_number=""$1""
-  local fc_ip=""$2""
-  local tap_ip=""$3""
-  local fc_mac=""$4""
-  local tap_dev=""tap_${machine_number}""
-  local mask_long=""255.255.255.252""
-  local mask_short=""/30""
-
-  ip link del ""$tap_dev"" 2> /dev/null || true
-  ip tuntap add dev ""$tap_dev"" mode tap
-  sysctl -w net.ipv4.conf.${tap_dev}.proxy_arp=1 > /dev/null
-  sysctl -w net.ipv6.conf.${tap_dev}.disable_ipv6=1 > /dev/null
-  ip addr add ""${tap_ip}${mask_short}"" dev ""$tap_dev""
-  ip link set dev ""$tap_dev"" up
-
-  local kernel_boot_args=""ro console=ttyS0 noapic reboot=k panic=1 pci=off nomodules random.trust_cpu=on""
-  kernel_boot_args=""${kernel_boot_args} ip=${fc_ip}::${tap_ip}:${mask_long}::eth0:off""
-
-  cat > ""firecracker_config_${machine_number}.json"" << EOF
-{
-  ""boot-source"": {
-    ""kernel_image_path"": ""/root/setup/kernel"",
-    ""boot_args"": ""${kernel_boot_args}""
-  },
-  ""drives"": [
-    {
-      ""drive_id"": ""rootfs"",
-      ""path_on_host"": ""/firecracker/filesystems/rootfs.ext4"",
-      ""is_root_device"": true,
-      ""is_read_only"": false
-    }
-  ],
-  ""network-interfaces"": [
-    {
-      ""iface_id"": ""eth0"",
-      ""guest_mac"": ""${fc_mac}"",
-      ""host_dev_name"": ""${tap_dev}""
-    }
-  ]
-}
-EOF
-}
-
-# Generate configurations for two VMs
-generate_config 1 ""169.254.0.21"" ""169.254.0.22"" ""02:FC:00:00:00:05""
-generate_config 2 ""170.254.0.21"" ""170.254.0.22"" ""03:FC:00:00:00:05""
-
-","1. there is a dev tool that handles this problematic for you,
-https://github.com/cubewave/cubewave
-Maybe you could use that to bypass your own network configuration
-",Firecracker
-"Is there any way to run Firecracker inside Docker container.
-I tried the basic networking in firecracker although having containerized firecracker can have many benefits
-
-No hurdle to create and manage overlay network and attach
-Deploy in Docker swarm and in Kubernetes
-No need to clean IPTables/Network rules
-etc.
-
-","1. You can use kata-containers to simplify
-https://github.com/kata-containers/documentation/wiki/Initial-release-of-Kata-Containers-with-Firecracker-support
-
-2. I came up with something very basic like this:
-https://github.com/s8sg/docker-firecracker
-It allows creating a Go application that can run inside containerized Firecracker.
-
-3. Setup Tutorial
-You can find a good tutorial with all the basics at Weaveworks:
-
-fire-up-your-vms-with-weave-ignite
-
-it introduces
-
-weaveworks ignite (Github)
-
-Ignite works like a one-to-one replacement for ""docker"", and it does work on my Raspberry Pi 4 with Debian 11 too.
-How to use
-Create and start a VM
- $ sudo ignite run weaveworks/ignite-ubuntu \
-                --cpus 1 \
-                --memory 1GB \
-                --ssh \
-                --name my-vm1
-
-Show your VM Processes
- $ ignite ps
-
-Login into your running VM
- $ sudo ignite ssh my-vm1
-
-It takes a couple of seconds to start a new VM manually on my Raspberry Pi 4 (8 GB, 64-bit Debian 11):
-
-Log in to any of them:
-$ sudo ignite ssh my-vm3
-
-
-Footloose
-If you add Footloose you can start up a cluster of MicroVMs, which allows additional scenarios. It works more or less like docker-swarm with VMs. Footloose reads a description of the cluster of machines to create from a file, by default named footloose.yaml. Please check:
-
-footloose vm cluster (Github)
-
-Note: be aware of Apache Ignite, which is a solution for something else entirely; don't get confused by it.
-The initial idea for this answer is from my gist here
-",Firecracker
-"I've created a docker container (ubuntu:focal) with a C++ application that is using boost::filesystem (v1.76.0) to create some directories while processing data. It works if I run the container locally, but it fails when deployed to Cloud Run.
-A simple statement like
-boost::filesystem::exists(boost::filesystem::current_path())
-
-fails with ""Invalid argument '/current/path/here'"". It doesn't work in this C++ application, but from a Python app running equivalent statements, it does work.
-Reading the docs I can see Cloud Run is using gVisor and not all the system calls are fully supported (link: https://gvisor.dev/docs/user_guide/compatibility/linux/amd64/), nevertheless I would expect simple calls to work: check if a directory exists, create a directory, remove,...
-Maybe I'm doing something wrong when deploying my container. Is there any way to work around it? Any boost configuration I can use to prevent it from using some syscalls?
-Thanks for your help!
-","1. I have run into the same issue when running SlideIO (which use boost::filesystem) in google cloud function. It works fine locally but always returns ""boost::filesystem::status: Invalid argument [system:22]"" on google cloud.
-I switched to second generation execution env for Cloud Run which provides full Linux compatibility rather than system call emulation. And the code works fine again.
-",gVisor
-"I'm trying to install the gvisor addon in minikube: https://github.com/kubernetes/minikube/blob/master/deploy/addons/gvisor/README.md
-minikube start --container-runtime=containerd  \
-    --docker-opt containerd=/var/run/containerd/containerd.sock
-minikube addons enable gvisor
-
-After a short wait, the gvisor pod is running
-NAME         READY   STATUS    RESTARTS   AGE
-pod/gvisor   1/1     Running   0          24s
-
-So far, so good. But when I try to create the example pod, it stays stuck in ContainerCreating
-Events:
-  Type     Reason                  Age   From               Message
-  ----     ------                  ----  ----               -------
-  Normal   Scheduled               55s   default-scheduler  Successfully assigned default/nginx-untrusted to minikube
-  Warning  FailedCreatePodSandBox  50s   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox ""75807f2be807da0264c5210cba355a294bd8725ca29ea565b05685cb5fa4ddee"": failed to set bridge addr: could not add IP address to ""cni0"": permission denied
-  Warning  FailedCreatePodSandBox  38s   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox ""47e315477fd91cd2a542e98f26fed5e2e758b8655c298048dcb3b2aa1cb47a49"": failed to set bridge addr: could not add IP address to ""cni0"": permission denied
-  Warning  FailedCreatePodSandBox  22s   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox ""be59a5452249bec42f5400f7465de22e9c91cd35b9a492673b7215dc3097571d"": failed to set bridge addr: could not add IP address to ""cni0"": permission denied
-  Warning  FailedCreatePodSandBox  6s    kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox ""4fd2de889a5c685dfe590e8ed4d86b66ce3a11c28cfa02eb42835b2b3b492723"": failed to set bridge addr: could not add IP address to ""cni0"": permission denied
-
-Searching for what might be causing the permission denied message hasn't turned up anything useful. I have tried forcing different CNI options (bridge, calico), but these just lead to other errors. If I disable gvisor, I can create pods without any issues, so the containerd runtime seems to be working fine.
-Any tips on how to track down where the ""permission denied"" message is coming from would be appreciated. minikube logs just seems to repeat the same ""permission denied"" message.
-","1. I raised an issue and here is the response:
-https://github.com/google/gvisor/issues/7877#issuecomment-1226399080
-
-containerd has updated its configuration format (once again). Minikube breaks because the plugin is trying to use the old format. Let me do a quick fix for now...we'll need a better way to patch config.toml to configure the runtime. Right now, it replaces the entire file and may lose other configuration changes.
-
-In short, it should be patched in the next release.
-",gVisor
-"I am using *nginx* as reverse proxy to redirect requests from host server to *LXD* container which runs *laravel 11* app, all runs smoothly, the only issue is that validation messages aren't displayed at all, to track that i am dumping error message and then i triggered validation error, messages are dumped when running app in localhost so its not a code issue.
-**testing on localhost**
-[![validation errors not displayed](https://i.sstatic.net/JfK7BwC2.png)](https://i.sstatic.net/JfK7BwC2.png)
-**testing on server**
-[![validation errors displayed](https://i.sstatic.net/C9dD4vrk.png)](https://i.sstatic.net/C9dD4vrk.png)
-I extended the size of the buffers for both *proxy* and *fastcgi*; however, the problem persists. Maybe I am configuring the whole thing wrong.
-**nginx conf in LXD**
-server {
-   listen 80;
-   server_name lxd-ip;
-   root /var/www/public;
-   index index.php;
-   charset utf-8;
-
-   location / {
-       try_files $uri $uri/ /index.php?$query_string;
-   }
-
-   location = /favicon.ico {
-       access_log off;
-       log_not_found off;
-   }
-
-   location = /robots.txt {
-       access_log off;
-       log_not_found off;
-   }
-
-   error_page 404 /index.php;
-
-   location ~ \.php$ {
-       fastcgi_buffers 20 128k;
-       fastcgi_buffer_size 256k;
-       fastcgi_pass unix:/var/run/php/php8.2-fpm.sock;
-       fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
-       include fastcgi_params;
-   }
-
-   location ~ /\.(?!well-known).* {
-       deny all;
-   }
-}
-
-**nginx conf in HOST**
-server {
-   listen 80;
-   server_name admin.exemple.com www.exemple.com;
-   return 301 https://$server_name$request_uri;
-}
-
-server {
-   listen 443 ssl;
-   ssl_certificate /etc/letsencrypt/live/exemple.com/fullchain.pem;
-   ssl_certificate_key /etc/letsencrypt/live/exemple.com/privkey.pem;
-
-   server_name admin.exemple.com;
-
-   location / {
-       proxy_pass http://lxd-ip:80;
-       proxy_http_version 1.1;
-       proxy_set_header Host $host;
-       proxy_set_header X-Real-IP $remote_addr;
-       proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-       proxy_set_header X-Forwarded-Proto $scheme;
-       proxy_cookie_path ~^/.* /;
-       proxy_cookie_domain admin.exemple.com admin.exemple.com;
-       proxy_cache_bypass $http_upgrade;
-       proxy_buffer_size 256k;
-       proxy_buffers 20 128k;
-       proxy_busy_buffers_size 256k;
-   }
-
-   location ~ \.php$ {
-       fastcgi_pass unix:/var/run/php/php8.2-fpm.sock;
-       fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
-       include fastcgi_params;
-       fastcgi_buffers 20 128k;
-       fastcgi_buffer_size 256k;
-       client_max_body_size 10M;
-   }
-}
-
-this is how usually i configure laravel session
-SESSION_DRIVER=cookie
-SESSION_LIFETIME=120
-SESSION_SECURE_COOKIE=true
-SESSION_DOMAIN=""admin.exemple.com""
-SANCTUM_STATEFUL_DOMAINS=""https://admin.exemple.com""
-ALLOWED_ORIGINS=https://exemple.com
-
-","1. I needed to change how laravel handles session to a more server based driver, in my case i used database as session driver.
-inside .env set SESSION_DRIVER to database
-Create session table php artisan session:table then php artisan migrate
-",lxd
-"I want to work with lxc containers.
-I have installed the Ubuntu lxd package, but I work with the lxc command.
-So I do not understand what the differences between lxc and lxd containers are.
-Are these the same thing?
-","1. LXD is a daemon service that exposes REST API and manages the containers via liblxc.
-LXC is a command line tool that calls REST API locally or remotely.
-https://www.mywebtech.blog/guide-linux-lxd-containers-ubuntu/
-
-2. Note that the name of the project is LXC, alias Linux Containers. LXD is the server process you interact with, like dockerd in the Docker world.
-lxc is also the command-line client tool of LXD. So there is a little confusion here, because lxc means both the project name and the command-line client tool.
-",lxd
-"I am experimenting with LXD. Since LXD is supposed to be aware of the capabilities of ZFS (copy-on-write, etc.), I set up a ZFS pool (consisting of a single, dedicated partition) to hold the containers. I then installed LXD and ran ""lxd init"". In the init-process, I instructed LXD to use ZFS, and pointed it to the existing ZFS pool.
-When I then created a new container, LXD created two directories in the ZFS pool: ""containers"" and images"". However, these directories are completely empty. The actual files are stored in /var/lib/lxd (on an ext4 partition, should that be important).
-Probably I'm missing something obvious, but: what am I missing here? Why is LXD not using the ZFS pool handed to it during the ""init"" process?
-","1. Can't comment on original but having the same sort of problem. During lxd init, I chose ZFS for the backend, and it even created the containers and images directory in my zfs dataset. The issue seems to be it's not using it and still being stored in /var/lib/lxd/containers
-Edit: Found the issue, check https://github.com/lxc/lxd/issues/1690.
-""zfs list -t all"" and you'll see that everything is indeed stored in zfs. Need to pay attention to those mountpoints.
-
-2. I think maybe your LXD pool points to the same /var/lib/lxd; in my case the pool is pointing to /var/lib/lxd:
-Check the zpool list:
-NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
-lxd   87.5G  1.09G  86.4G         -     7%     1%  1.00x  ONLINE  -
-
-This is in my case:
-drwx--x--x 4 root   root      4096 Sep  1 13:39 .
-drwxr-xr-x 9 lxd    nogroup   4096 Sep  4 11:21 ..
-lrwxrwxrwx 1 root   root        35 Sep  1 13:39 CDaemon -> /var/lib        /lxd/containers/CDaemon.zfs
-drwxr-xr-x 4 231072  231072      5 Aug 18 13:25 CDaemon.zfs
-lrwxrwxrwx 1 root   root        42 Sep  1 13:14 GraphiteServer -> /var/lib/lxd/containers/GraphiteServer.zfs
-drwxr-xr-x 4 231072  231072      5 Aug 18 13:25 GraphiteServer.zfs
-
-",lxd
-"Recently some alternatives for running docker containers or even the app container have developed.
-I know that there is rkt from coreos (https://coreos.com/blog/rocket/) and triton from joyent (https://www.joyent.com/)
-How do these two approaches compare?
-Edit
-Maybe I should rephrase my question after these good comments from @Lakatos Gyula:
-How does Triton compare to CoreOS or Kubernetes for running Docker containers at scale?
-","1. So in a way, this is an apples to oranges to grapes comparison.  CoreOS is an operating system, Kubernetes is open source container orchestration software, and Triton is a PaaS.  
-So CoreOS, it's a minimal operating system with a focus on security.  I've been using this in production for several months now at work, haven't found a reason to not like it yet.  It does not have a package manager, but it comes preinstalled with both rkt and Docker.  You can run both docker and rkt just fine on there.  It also comes with Etcd, which is a distributed key-value store, and it happens that kubernetes is backed by it.  It also comes with Flannel which is a networking program for networking between containers and machines in your cluster.  CoreOS also ships with Fleet, which you can think of like a distributed version of systemd, which systemd is CoreOS' init system.  And as of recently, CoreOS ships with Kubernetes itself.
-Kubernetes is container orchestration software that is made up of a few main components. There are masters, which use the APIServer, controller and scheduler to manage the cluster. And there are nodes, which use the ""kubelet"" and ""kube-proxy"". Through these components, Kubernetes schedules and manages where to run your containers on your cluster. As of v1.1, Kubernetes can also auto-scale your containers. I have also been using this in production as long as I have been using CoreOS, and the two go together very well.
-Triton is Joyent's Paas for Docker. Think of it like Joyent's traditional service, but instead of BSD jails (similar concept to Linux containers) and at one point Solaris Zones (could be wrong on that one, that was just something I heard from word of mouth), you're using Docker containers.  This does abstract away a lot of the work you'd have to do with setting up CoreOS and Kubernetes, that said there are services that'll do the same and use kubernetes under the hood.  Now I haven't used Triton like I have used Kubernetes and CoreOS, but it definitely seems to be quite well engineered.
-Ultimately, I'd say it's about your needs.  Do you need flexibility and visibility, then something like CoreOS makes sense, particularly with Kubernetes.  If you want that abstracted away and have these things handled for you, I'd say Triton makes sense.
-",rkt
-"golang version < 1.5 - there are plenty of static linking examples, posts and recipes.  What about >= 1.5? (google search has returned no useful results for my search terms.) Anyone have any recommendations on how to produce a statically linked binary that can be executed inside a basic rkt (from CoreOS) container?
-my go:
-$go version
-go version go1.5 linux/amd64
-
-when I try to run my container:
-sudo rkt --insecure-skip-verify run /tmp/FastBonusReport.aci
-
-I get:
-[38049.477658] FastBonusReport[4]: Error: Unable to open ""/lib64/ld-linux-x86-64.so.2"": No such file or directory
-
-suggesting that the executable in the container is depending on this lib and hence not static.
-my manifest looks like:
-cat <<EOF > /tmp/${myapp}/manifest
-{
-    ""acKind"": ""ImageManifest"",
-    ""acVersion"": ""0.9.0"",
-    ""name"": ""${lowermyapp}"",
-    ""labels"": [
-        {""name"": ""os"", ""value"": ""linux""},
-        {""name"": ""arch"", ""value"": ""amd64""}
-    ],
-    ""app"": {
-        ""exec"": [
-            ""/bin/${myapp}""
-        ],
-        ""user"": ""0"",
-        ""group"": ""0""
-    }
-}
-EOF
-
-my command line to build the binary looks like:
-go build ${myapp}.go
-
-This article has a few examples for golang < 1.5. And then there is this getting-started article on the CoreOS site.
-","1. I hate to answer my own question. The comments have been correct CGO_ENABLED=0 go build ./... seems to have have done the trick. 
-While it was not part of the original question, once the program started executing in the rkt container it could not perform a proper DNS request. So there must be something else going on too.
-
-2. Static linking:
-Go 1.5:
-go build -ldflags ""-extldflags -static"" ...
-
-With Go 1.6 I had to use:
-go build -ldflags ""-linkmode external -extldflags -static"" ...
-
-
-3. Try to build a statically linked version:
-go build -ldflags '-extldflags ""-static""' -tags netgo,osusergo .
-
-Use -tags osusergo,netgo to force a static build without any glibc library dependency.
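-As a quick sanity check (not from the original answers, just a common technique): before packaging the binary into the ACI you can confirm it no longer needs the dynamic loader; for a fully static binary ldd typically prints:
-ldd FastBonusReport
-        not a dynamic executable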
-",rkt
-"I tried using rktlet(https://github.com/kubernetes-incubator/rktlet/blob/master/docs/getting-started-guide.md)
-But when I try to 
-kubelet --cgroup-driver=systemd \
-> --container-runtime=remote \
-> --container-runtime-endpoint=/var/run/rktlet.sock \
-> --image-service-endpoint=/var/run/rktlet.sock
-
-I am getting the below errors
-Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
-I0320 13:10:21.661373    3116 server.go:407] Version: v1.13.4
-I0320 13:10:21.663411    3116 plugins.go:103] No cloud provider specified.
-W0320 13:10:21.664635    3116 server.go:552] standalone mode, no API client
-W0320 13:10:21.669757    3116 server.go:464] No api server defined - no events will be sent to API server.
-I0320 13:10:21.669791    3116 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
-I0320 13:10:21.670018    3116 container_manager_linux.go:248] container manager verified user specified cgroup-root exists: []
-I0320 13:10:21.670038    3116 container_manager_linux.go:253] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms}
-I0320 13:10:21.670125    3116 container_manager_linux.go:272] Creating device plugin manager: true
-I0320 13:10:21.670151    3116 state_mem.go:36] [cpumanager] initializing new in-memory state store
-I0320 13:10:21.670254    3116 state_mem.go:84] [cpumanager] updated default cpuset: """"
-I0320 13:10:21.670271    3116 state_mem.go:92] [cpumanager] updated cpuset assignments: ""map[]""
-W0320 13:10:21.672059    3116 util_unix.go:77] Using ""/var/run/rktlet.sock"" as endpoint is deprecated, please consider using full url format ""unix:///var/run/rktlet.sock"".
-W0320 13:10:21.672124    3116 util_unix.go:77] Using ""/var/run/rktlet.sock"" as endpoint is deprecated, please consider using full url format ""unix:///var/run/rktlet.sock"".
-E0320 13:10:21.673168    3116 remote_runtime.go:72] Version from runtime service failed: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService
-E0320 13:10:21.673228    3116 kuberuntime_manager.go:184] Get runtime version failed: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService
-F0320 13:10:21.673249    3116 server.go:261] failed to run Kubelet: failed to create kubelet: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService
-
-How do I create a kube cluster using rkt? Please help.
-","1. That's the way to run Rktlet. However, Rktlet is still pretty experimental and I believe it's not being actively developed either. The last commit as per this writing was in 05/2018. 
-You can try running it the other way as described here or here. Basically, use --container-runtime=rkt, --rkt-path=PATH_TO_RKT_BINARY, etc. on the kubelet.
-Is there a reason why you are need rkt? Note that --container-runtime=rkt is deprecated in the latest Kubernetes but should still work (1.13 as of this writing).
-
-2. Not sure about unknown service runtime.v1alpha2.RuntimeService, but for unknown service runtime.v1alpha2.ImageService in my case the fix was to remove ""cri"" from disabled_plugins in the /etc/containerd/config.toml config:
-#disabled_plugins = [""cri""]
-disabled_plugins = []
-
-and then restart the containerd service with systemctl restart containerd.service
-
-3. You can check the ctr plugin ls output for plugins in an error state:
-ctr plugin ls
-TYPE                            ID                       PLATFORMS      STATUS
-io.containerd.content.v1        content                  -              ok
-io.containerd.snapshotter.v1    aufs                     linux/amd64    skip
-io.containerd.snapshotter.v1    btrfs                    linux/amd64    skip
-io.containerd.snapshotter.v1    devmapper                linux/amd64    error
-io.containerd.snapshotter.v1    native                   linux/amd64    ok
-io.containerd.snapshotter.v1    overlayfs                linux/amd64    ok
-io.containerd.snapshotter.v1    zfs                      linux/amd64    skip
-io.containerd.metadata.v1       bolt                     -              ok
-io.containerd.differ.v1         walking                  linux/amd64    ok
-io.containerd.gc.v1             scheduler                -              ok
-io.containerd.service.v1        introspection-service    -              ok
-io.containerd.service.v1        containers-service       -              ok
-io.containerd.service.v1        content-service          -              ok
-io.containerd.service.v1        diff-service             -              ok
-io.containerd.service.v1        images-service           -              ok
-io.containerd.service.v1        leases-service           -              ok
-io.containerd.service.v1        namespaces-service       -              ok
-io.containerd.service.v1        snapshots-service        -              ok
-io.containerd.runtime.v1        linux                    linux/amd64    ok
-io.containerd.runtime.v2        task                     linux/amd64    ok
-io.containerd.monitor.v1        cgroups                  linux/amd64    ok
-io.containerd.service.v1        tasks-service            -              ok
-io.containerd.internal.v1       restart                  -              ok
-io.containerd.grpc.v1           containers               -              ok
-io.containerd.grpc.v1           content                  -              ok
-io.containerd.grpc.v1           diff                     -              ok
-io.containerd.grpc.v1           events                   -              ok
-io.containerd.grpc.v1           healthcheck              -              ok
-io.containerd.grpc.v1           images                   -              ok
-io.containerd.grpc.v1           leases                   -              ok
-io.containerd.grpc.v1           namespaces               -              ok
-io.containerd.internal.v1       opt                      -              ok
-io.containerd.grpc.v1           snapshots                -              ok
-io.containerd.grpc.v1           tasks                    -              ok
-io.containerd.grpc.v1           version                  -              ok
-
-",rkt
-"Suppose you insert (15) if I run this program, the output will be (14 16 17 18 19)
-How can I make the program to insert the number 15 in the correct position (pos = 1) or any number (n) in it's correct position (pos).
-(define list1 '(14 16 17 18 19))
-
-(define lst (list))
-(define (insert lst n)
-  (if (empty? lst)
-      '()
-      (foldr cons (list n) lst))) ;The value gets inserted at the end of the list
-
-
-","1. We have many sorting algorithm like Quicksort, histogram sort, bubble sort.
-You can see this Sorting Algorithms or wikipedia.
-If n is bigger than every data inside list which is lst become '() we just return (list n)
-e.g. (f '() 1) -> '(1)
-When n less or equal to first element we insert in first position.
-e.g. (f '(2) 1) -> (cons 1 '(2))
-If not we want data like this:
-(f '(1 2 4) 3) -> (cons 1 (f '(2 4) 3)) -> (cons 1 (cons 2 (f '(4) 3))) -> (cons 1 (cons 2 (cons 3 '(4)))) -> (list 1 2 3 4)
-(define (insert-with-<-order lst n)
-  (cond
-    [(empty? lst)
-     (list n)]
-    [(<= n (first lst))
-     (cons n lst)]
-    [else
-     (cons (first lst)
-           (insert-with-<-order (rest lst) n))]))
-
-(insert-with-<-order '(1 1 2 3) 1.5)
-(insert-with-<-order '(-2 0.5 1 2 5) 0.1)
-(insert-with-<-order '(1 2 3) 4)
-
-Use sort
-(define (insert-with-order lst n)
-  (sort (cons n lst) <))
-
-(insert-with-order '(14 16 17 18 19) 15)
-
-",rkt
-"I am inside a worker node on a GKE cluster.
-I am exec-ing into a container as root using the following command
-runc --root /run/containerd/runc/k8s.io/ exec -cap CAP_SYS_ADMIN -t -u 0 <container-id> bash
-
-root@<pod-name>:/#
-whoami
-root
-
-However, attempting to install packages fails as follows
-root@<pod-name>:/# apt update
-Reading package lists... Done
-E: List directory /var/lib/apt/lists/partial is missing. - Acquire (30: Read-only file system)
-
-Is there a way around this?
-","1. Error E: List directory /var/lib/apt/lists/partial is missing. - Acquire (30: Read-only file system) states that your container is booted in read only mode and you cannot add or change the contents inside the container.
-There might be many reasons for this similar to the one’s mentioned below, going through them will help you in resolving this issue:
-
-Check whether the container image you are using is a read-only image by running it locally. If the image itself is read-only, try to use a container image with read-write access.
-
-If you are using a security context in your container's deployment manifest, check whether you have set readOnlyRootFilesystem: to true. If it is set to true you cannot make changes inside the container. Try to redeploy the container after removing this parameter or setting readOnlyRootFilesystem: to false (see the sketch after this list).
-
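-As a minimal sketch (not from the original answer; the container name and image are placeholders, the field paths follow the standard Kubernetes pod spec), the relevant fragment of the manifest would look roughly like this:
-      containers:
-      - name: my-app                       # placeholder
-        image: ubuntu:22.04                # placeholder
-        securityContext:
-          readOnlyRootFilesystem: false    # when this is true, writes such as apt update fail at runtime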
-
-If you could provide more details about the container image you are using, the deployment manifest, or steps to reproduce the issue, it would help community members give a more accurate answer.
-Also, as David Maze suggested, if you haven't committed the changes made to this container and built a new image out of it, the changes you made will be lost. Hence it is suggested to install the packages via a Dockerfile used to build a new image, instead of installing them inside a running container.
-",runc
-"How do these two compare?
-As far as I understand, runc is a runtime environment for containers. That means that this component provides the necessary environment to run containers. What is the role of containerd then?
-If it does the rest (networking, volume management, etc) then what is the role of the Docker Engine? And what about containerd-shim? Basically, I'm trying to understand what each of these components do.
-","1. I will give a high level overview to get you started:
-
-containerd is a container runtime which can manage a complete container lifecycle - from image transfer/storage to container execution, supervision and networking.
-containerd-shim handles headless containers: once runc initializes the containers it exits, handing the containers over to the shim, which acts as a middleman.
-runc is a lightweight, universal container runtime that abides by the OCI specification. It is used by containerd for spawning and running containers according to the OCI spec, and it is a repackaging of libcontainer.
-grpc is used for communication between containerd and the docker engine.
-OCI maintains the OCI specifications for runtimes and images. Current Docker versions support the OCI image and runtime specs.
-
-
-More Links:
-
-Open Container Specification
-A nice dockercon 2016 presentation
-
-
-2. Docker Engine is the whole thing; it was a monolith that enabled users to run containers. It was then broken down into individual components:
-- docker engine
-- containerd
-- runc
-
-runC is the lowest-level component that implements the OCI interface. It interacts with the kernel and actually runs the container.
-containerd does things like take care of setting up the networking, image transfer/storage etc - It takes care of the complete container runtime (which means, it manages and makes life easy for runC, which is the actual container runtime). Unlike the Docker daemon it has a reduced feature set; not supporting image download, for example.
-Docker engine just does some high level things itself like accepting user commands, downloading the images from the docker registry etc. It offloads a lot of it to containerd.
-""the Docker daemon prepares the image as an Open Container Image (OCI) bundle and makes an API call to containerd to start the OCI bundle. containerd then starts the container using runC.""
-Note, the runtimes have to be OCI compliant, (like runC is), that is, they have to expose a fixed API to managers like containerd so that they(containerd) can make life easy for them(runC) (and ask them to stop/start containers)
-
-rkt is another container runtime, which does not support OCI yet but supports the appc specification. It is a full-fledged solution; it manages and makes its own life easy, so it needs no containerd-like daddy.
-So, that's that. Now let's add another component (and another interface) to the mix - Kubernetes
-Kubernetes can run anything that satisfies the CRI - container runtime interface. 
-You can run rkt with k8s, as rkt satisfies CRI - container runtime interface. Kubernetes doesn't ask for anything else, it just needs CRI, it doesn't give a FF about how you run your containers, OCI or not.
-containerd does not support CRI, but cri-containerd which is a shim around containerd does. So, if you want to run containerd with Kubernetes, you have to use cri-containerd (this also is the default runtime for Kubernetes). cri-containerd recently got renamed to CRI Plugin.
-If you want to get the docker engine in the mix as well, you can do it. Use dockershim, it will add the CRI shim to the docker engine. 
-
-Now, just like containerd can manage and make life easy for runC (the container runtime), it can manage and make life easy for other container runtimes as well - in fact, for every container runtime that supports OCI - like the Kata container runtime (known as kata-runtime - https://github.com/kata-containers/runtime), which runs Kata containers, or the Clear Containers runtime (by Intel).
-Now we know that rkt satisfies the CRI, cri-containerd (aka CRI Plugin) does it too. 
-Note what containerd is doing here. It is not a runtime, it is a manager for runC which is the container runtime. It just manages the image download, storage etc. Heck, it doesn't even satisfy CRI. 
-That's why we have CRI-O. It is just like containerd, but it implements CRI. CRI-O needs a container runtime to run images; it will manage and make life easy for that runtime, but it needs a runtime, and it will take any runtime that is OCI compliant. So, naturally, kata-runtime is CRI-O compliant and runC is CRI-O compliant. 
-Use with Kubernetes is simple: point Kubernetes to CRI-O as the container runtime (yes, yes, strictly it is CRI-O plus the actual container runtime that does the work, and Kubernetes is referring to that happy couple when it says ""container runtime""). 
-Like containerd has docker to make it REALLY usable, and to manage and make life easy for containerd, CRI-O needs someone to take care of image management - it has buildah, umoci etc.
-crun is another runtime which is OCI compliant and written in C. It is by RedHat.
-We already discussed, kata-runtime is another runtime which is OCI compliant. So, we can use kata-runtime with CRI-O like we discussed.
-
-Note, here, the kubelet is talking to CRI-O via the CRI. CRI-O is talking to cc-runtime (which is another runtime for Intel's clear containers, yes, OCI compliant), but it could be kata-runtime as well.
-Don't forget containerd: it can manage and make life easy for all OCI compliant runtimes too - runC, sure, but also kata-runtime and cc-runtime.
-
-Here, note just the runtime is moved from runC to kata-runtime. 
-To do this, in the containerd config, just change the runtime to ""kata"" (a rough config sketch follows below).
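-For illustration only (not from the original answer, and the exact keys differ between containerd versions - check your containerd and Kata docs): on a containerd 1.2-era default config.toml the switch looked roughly like
-[plugins.linux]
-  runtime = ""kata-runtime""   # default is ""runc""
-while newer containerd releases instead register a named runtime under the CRI plugin, e.g. a [plugins.""io.containerd.grpc.v1.cri"".containerd.runtimes.kata] section with runtime_type = ""io.containerd.kata.v2"".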
-Needless to say, it can run on Kubernetes either by CRI-O, or by cri-containerd (aka CRI Plugin). 
-
-This is really cool :top:
-Kubernetes, represented here by its ambassador, Mr. Kubelet, runs anything that satisfies the CRI. 
-Now, we have several candidates that can.
-- Cri-containerd makes containerd do it.
-- CRI-O does it natively.
-- Dockershim makes the docker engine do it.
-Now, all the 3 guys above, can manage and make life easy for all OCI compliant runtimes - runC, kata-runtime, cc-runtimes.
-We also have frakti, which satisfies CRI, like rkt, but doesn't satisfy OCI, and comes bundled with its own container runtime.
-Here we have CRI-O in action managing and making life easy for OCI compliant kata-runtime and runC both
-
-We have some more runtimes as well:
-
-railcar - OCI compliant, written in rust
-Pouch - Alibaba's modified runC
-nvidia runtime - nvidia's fork of runC
-
-ref: https://github.com/darshanime/notes/blob/master/kubernetes.org#notes
-
-3. runc is one of the components of containerd and handles the kernel-level interaction for running containers. In earlier versions, containerd was essentially a high-level abstraction around runc, but now it's way more than that. From containerd.io:
-
-runc is a component of containerd, the executor for containers. containerd has a wider scope than just executing containers: downloading container images, managing storage and network interfaces, calling runc with the right parameters to run containers.
-
-This image from the same source nicely describes this.
-Docker Engine is the end-user product that uses containerd as a main component and implements other functionality that doesn't fall under containerd's scope.
-Note that Docker extracted containerd out as a separate component, so it can be used and developed by other products too.
-[Edit]
-I wrote more about this terminology here
-",runc
-"I can't find any info on what macro to use in an ifdef to determine the illumos kernel. I use __linux to catch Linux.
-","1. Illumos based kernels such as SmartOS and OpenIndiana use __sun and it is sometimes suggested to check for both __sun and __SVR4.
-[root@mysmartostestzone ~]# uname -a
-SunOS mysmartostestzone 5.11 joyent_20170202T033902Z i86pc i386 i86pc Solaris
-
-[root@mysmartostestzone ~]# cat test.c
-#include <stdio.h>
-
-int
-main(int argc, char **argv)
-{
-#ifdef sun
-printf(""sun\n"");
-#endif
-
-#ifdef __sun
-printf(""__sun\n"");
-#endif
-
-#if defined(__sun) && defined(__SVR4)
-printf(""__sun && __SVR4\n"");
-#endif
-}
-
-[root@mysmartostestzone ~]# cc test.c
-
-[root@mysmartostestzone ~]# ./a.out
-sun
-__sun
-__sun && __SVR4
-
-Update:
-There will soon be an __illumos__ macro:
-https://www.illumos.org/issues/13726
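-Once that macro is available, a portable check could look roughly like this (a sketch based on the issue above; older compilers will still only define __sun/__SVR4):
-#if defined(__illumos__)
-/* definitely an illumos distribution (SmartOS, OpenIndiana, OmniOS, ...) */
-#elif defined(__sun) && defined(__SVR4)
-/* Solaris or illumos; the two cannot be told apart at compile time here */
-#endif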
-",SmartOS
-"tldr; I'm trying to receive a ZFS stream, that has been created as replicate (-R) from a cloned filesystem. Using zfs recv -o origin=[clone-origin] just gives cannot receive: local origin for clone [...] does not exist.
-Precondition
-I have a SmartOS zone ZFS filesystem, which is cloned from a particular image. (IMAGE-uuid and ZONE-uuid have been replaced for better readability)
-$ zfs list -r -o name,origin zones/[ZONE]
-NAME          ORIGIN              
-zones/[ZONE]  zones/[IMAGE]@[ZONE]
-
-The zone filesystem has serveral snapshots:
-$ zfs list -r -t all -o name, origin zones/[ZONE]
-NAME                  ORIGIN              
-zones/[ZONE]          zones/[IMAGE]@[ZONE]
-zones/[ZONE]@[SNAP0]  -
-zones/[ZONE]@[SNAP1]  -
-zones/[ZONE]@[SNAP2]  -
-[...]
-
-Regarding the base image, SmartOS (better vmadm) creates a snapshot of the image for the newly created zone. The zone root is created as clone based on this snapshot (here with guid 11194422825011190557).
-$ zfs list -r -o name,origin,guid zones/[IMAGE]
-NAME                        ORIGIN  GUID
-zones/[IMAGE]               -       5616748063181666458
-zones/[IMAGE]@[OTHER-ZONE]  -       11174377117517693115
-zones/[IMAGE]@[OTHER-ZONE]  -       5587104570997150836
-zones/[IMAGE]@[OTHER-ZONE]  -       535244446308996462
-zones/[IMAGE]@[OTHER-ZONE]  -       12527420623439849960
-zones/[IMAGE]@[ZONE]        -       11194422825011190557
-zones/[IMAGE]@[OTHER-ZONE]  -       18143527942366063753
-zones/[IMAGE]@[OTHER-ZONE]  -       15066902894708043304
-zones/[IMAGE]@[OTHER-ZONE]  -       16574922393629090803
-zones/[IMAGE]@[OTHER-ZONE]  -       818178725388359655
-zones/[IMAGE]@[OTHER-ZONE]  -       11867824093224114226
-zones/[IMAGE]@[OTHER-ZONE]  -       9357513766021831186
-
-Backup
-To create a backup of my zone root, I created a snapshot and a replicate stream.
-zfs snapshot zones/[ZONE]@[DATE]
-zfs send -R zones/[ZONE]@[DATE] > [ZONE]_[DATE].zfs
-
-Inspecting it with zstreamdump shows the expected origin. It is in hex but 0x9b5a943fae511b1d is 11194422825011190557:
-$ zstreamdump < [ZONE]_[DATE].zfs
-BEGIN record
-        hdrtype = 2
-        features = 4
-        magic = 2f5bacbac
-        creation_time = 0
-        type = 0
-        flags = 0x0
-        toguid = 0
-        fromguid = 0
-        toname = zones/[ZONE]@[DATE]
-nvlist version: 0
-        tosnap = [DATE]
-        fss = (embedded nvlist)
-        nvlist version: 0
-                0xf19ec8c66f3ca037 = (embedded nvlist)
-                nvlist version: 0
-                        name = zones/[ZONE]
-                        parentfromsnap = 0x0
-                        origin = 0x9b5a943fae511b1d
-                        props = (embedded nvlist)
-                        nvlist version: 0
-                                devices = 0x0
-                                compression = 0x2
-                                quota = 0x500000000
-                        (end props)
-[...]
-
-Restore
-To recover from a disaster, I recreate the zone using vmadm create with a backup of the vm description (the ZONE-uuid is preserved). vmadm pulls the image and creates the respective zfs filesystem zones/[IMAGE] with a snapshot, as clone origin for the recreated zone filesystem zones/[ZONE].
-So the structure is the same as before the crash:
-$ zfs list -r -o name,origin zones/[ZONE]
-NAME          ORIGIN              
-zones/[ZONE]  zones/[IMAGE]@[ZONE]
-
-However the guid of the image-snapshot (created by vmadm), is different - as expected. The stream expects 0x9b5a943fae511b1d (or 11194422825011190557), but it actually is 12464070312561851369:
-$ zfs list -r -o name,guid zones/[IMAGE]
-NAME                  GUID
-zones/[IMAGE]         5616748063181666458
-[...]
-zones/[IMAGE]@[ZONE]  12464070312561851369
-[...]
-
-That's where - I thought - the -o origin= parameter of zfs recv comes in.
-Problem
-Restoring the actual data by receiving the zfs stream, ends up with an error:
-$ zfs recv -vF zones/[ZONE] < [ZONE]_[DATE].zfs
-cannot receive: local origin for clone zones/[ZONE]@[SNAP0] does not exist
-
-(where SNAP0 is the first snapshot of the backed up filesystem, see ""Precondition"" above)
-This is expected, since the guid changed. So I forced the origin to the image snapshot with the new guid (12464070312561851369), but the error remains the same:
-$ zfs recv -vF -o origin=zones/[IMAGE]@[ZONE] zones/[ZONE] < [ZONE]_[DATE].zfs
-cannot receive: local origin for clone zones/[ZONE]@[SNAP0] does not exist
-
-Question
-Is my interpretation of the -o origin= parameter correct?
-Why doesn't it work as expected?
-If this is the wrong way, how can I create a backup and restore a zfs filesystem that is cloned?
-Thanks a lot for reading and helping!
-","1. It seems you stumbled on a ZFS bug that only now is getting a bit of attention.
-If you can change how the stream is created
-The -R flag tries to preserve all kinds of relations that are often not relevant, such as parent clones etc. There is no handy alternative that would only ""send all incrementals up until this one"". Instead, you have to do two passes. This is not specific to vmadm; for ZFS in general, the logic is as follows:
-zfs send zones/[ZONE]@[EARLIESTDATE] > [ZONE]_[EARLIESTDATE].zfs
-zfs send -I zones/[ZONE]@[EARLIESTDATE] zones/[ZONE]@[DATE] > [ZONE]_[EARLIESTDATE]-[DATE].zfs
-zfs recv -vF zones/[ZONE] < [ZONE]_[EARLIESTDATE].zfs
-zfs recv -vF zones/[ZONE] < [ZONE]_[EARLIESTDATE]-[DATE].zfs
-
-After this, only -I passes are needed between the latest backed up snapshot and the newest one at the source.
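-For example (not part of the original answer, just the same send/recv pattern continued; [NEWDATE] is a placeholder for the next snapshot taken at the source):
-zfs send -I zones/[ZONE]@[DATE] zones/[ZONE]@[NEWDATE] > [ZONE]_[DATE]-[NEWDATE].zfs
-zfs recv -vF zones/[ZONE] < [ZONE]_[DATE]-[NEWDATE].zfs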
-If you have to restore the already created stream
-One proposed solution is to use a modified variant of zfs described here:
-https://github.com/openzfs/zfs/issues/10135
-Please ensure you know how this affects your dataset, though. Then your command would be
-FQ_OVERRIDE_GTND=1 zfs recv -vF -o origin=zones/[IMAGE]@[ZONE] zones/[ZONE] < [ZONE]_[DATE].zfs
-
-Another bug report of the same is here:
-https://github.com/openzfs/zfs/issues/10935
-",SmartOS
-"I'm trying to port Quadlods to SmartOS. It compiles and runs on Linux and DragonFly BSD. I haven't tried running it on Windows, but other programs using the xy class with the isfinite method compile and run on Windows. However, compiling it on SmartOS, I get this error:
-[ 15%] Building CXX object CMakeFiles/quadlods.dir/filltest.cpp.o
-In file included from /usr/include/math.h:36,
-                 from /opt/local/gcc9/include/c++/9.3.0/bits/std_abs.h:40,
-                 from /opt/local/gcc9/include/c++/9.3.0/cstdlib:77,
-                 from /opt/local/gcc9/include/c++/9.3.0/ext/string_conversions.h:41,
-                 from /opt/local/gcc9/include/c++/9.3.0/bits/basic_string.h:6493,
-                 from /opt/local/gcc9/include/c++/9.3.0/string:55,
-                 from /opt/local/gcc9/include/c++/9.3.0/stdexcept:39,
-                 from /opt/local/gcc9/include/c++/9.3.0/optional:38,
-                 from /opt/local/gcc9/include/c++/9.3.0/bits/node_handle.h:39,
-                 from /opt/local/gcc9/include/c++/9.3.0/bits/stl_tree.h:72,
-                 from /opt/local/gcc9/include/c++/9.3.0/map:60,
-                 from /home/phma/src/quadlods/quadlods.h:27,
-                 from /home/phma/src/quadlods/filltest.h:25,
-                 from /home/phma/src/quadlods/filltest.cpp:26:
-/home/phma/src/quadlods/xy.h:35:8: error: expected ')' before '!=' token
-   35 |   bool isfinite() const;
-      |        ^~~~~~~~
-
-The file that defines the macro, causing this bizarre error, is /usr/include/iso/math_c99.h:
-#define isfinite(x) (__builtin_isfinite(x) != 0)
-
-The class definition in the header file is
-class xy
-{
-public:
-  xy(double e,double n);
-  xy();
-  double getx() const;
-  double gety() const;
-  double length() const;
-  bool isfinite() const;
-  bool isnan() const;
-  friend xy operator+(const xy &l,const xy &r);
-  friend xy operator+=(xy &l,const xy &r);
-  friend xy operator-=(xy &l,const xy &r);
-  friend xy operator-(const xy &l,const xy &r);
-  friend xy operator-(const xy &r);
-  friend xy operator*(const xy &l,double r);
-  friend xy operator*(double l,const xy &r);
-  friend xy operator/(const xy &l,double r);
-  friend xy operator/=(xy &l,double r);
-  friend bool operator!=(const xy &l,const xy &r);
-  friend bool operator==(const xy &l,const xy &r);
-  friend xy turn90(xy a);
-  friend xy turn(xy a,int angle);
-  friend double dist(xy a,xy b);
-protected:
-  double x,y;
-};
-
-Is it possible to make this compile on SmartOS without renaming the method? I thought of undefining the isfinite macro, but in another program (not Quadlods, whose header file is only quadlods.h), the xy class is in a header file for the library. Besides, the isfinite method calls std::isfinite.
-","1. The solution, which Jonathan Perkin gave me on IRC, is to put #include <cmath> just after the include guard of xy.h. This undefines the macro. It now compiles on Linux, BSD, and SmartOS.
-",SmartOS
-"Today I'm trying to create a VM using smartos.
-I built this config file (called router.json):
-{
-""alias"": ""router"",
-""hostname"": ""router"",
-""brand"": ""joyent"",
-""max_physical_memory"": 256,
-""image_uuid"": ""088b97b0-e1a1-11e5-b895-9baa2086eb33"",
-""quota"": 10,
-""nics"": [
-    {
-        ""nic_tag"": ""admin"",
-        ""ip"": ""dhcp"",
-        ""allow_ip_spoofing"": ""1"",
-        ""primary"": ""1""
-    },
-    {
-        ""nic_tag"": ""stub0"",
-        ""ip"": ""10.0.0.1"",
-        ""netmask"": ""255.255.255.0"",
-        ""allow_ip_spoofing"": ""1"",
-        ""gateway"": ""10.0.0.1""
-    }
-]
-
-Then I ran this command:
-# vmadm validate create -f router.json
-VALID 'create' payload for joyent brand VMs.
-
-But I still have an error when I try to create the VM:
-# vmadm create -f router.json
-provisioning dataset 088b97b0-e1a1-11e5-b895-9baa2086eb33 with brand joyent is not supported
-
-Anyone have an idea?
-Thanks a lot.
-","1. You are missing a closing curly brace '}' on the JSON payload above, which I assume is just a copy/paste error.
-After fixing the JSON, I get the following:
-[root@smartos ~]# vmadm validate create -f router.json
-{
-  ""bad_values"": [
-    ""image_uuid""
-  ],
-  ""bad_properties"": [],
-  ""missing_properties"": []
-}
-
-Have you imported that image yet?
-[root@smartos ~]# imgadm import 088b97b0-e1a1-11e5-b895-9baa2086eb33
-
-After importing I get:
-[root@smartos ~]# vmadm validate create -f router.json
-VALID 'create' payload for joyent brand VMs.
-[root@smartos ~]# vmadm create -f router.json
-Invalid nic tag ""stub0""
-
-Of course, I don't have an etherstub NIC setup yet.
-[root@smartos ~]# nictagadm add -l stub0
-
-Then I can create the instance with your payload:
-[root@smartos ~]# vmadm create -f router.json
-Successfully created VM 53c2648c-d963-62b6-a9dd-e0b9809355d0
-
-If you are still having issues, can you provide the version you're using?
-[root@smartos ~]# uname -a
-SunOS smartos 5.11 joyent_20170413T062226Z i86pc i386 i86pc
-
-",SmartOS
-"I've created a cluster with dataplane 2 on Google Kubernetes Engine.
-Looking through the logs of the various kube-system pods, I find a fair amount of noise from the metrics reporter container of the anetd deployment.
-The errors look like:
-containerId: 3298fxxxxxxxxxxxxxxxxxxx
-containerName: cilium-agent-metrics-collector
-fluentTimestamp: 1715830550029446400
-log: 2024-05-16T03:35:50.02941677Z stderr F {""level"":""error"",""ts"":1715830550.0293708,""caller"":""prometheus/parse.go:140"",""msg"":""Unrecognized line"",""scrape_target"":""http://localhost:9990/metrics"",""line_number"":1107,""text"":""cilium_k8s_client_api_latency_time_seconds_bucket{method=\""POST\"",path=\""/apis/cilium.io/v2/namespaces/{namespace}/ciliumendpoints\"",le=\""10\""} 16"",""stacktrace"":""google3/cloud/kubernetes/metrics/components/collector/prometheus/prometheus.(*parser).ParseText\n\tcloud/kubernetes/metrics/components/collector/prometheus/parse.go:140\ngoogle3/cloud/kubernetes/metrics/components/collector/collector.runScrapeLoop\n\tcloud/kubernetes/metrics/components/collector/collector.go:84\ngoogle3/cloud/kubernetes/metrics/components/collector/collector.Run\n\tcloud/kubernetes/metrics/components/collector/collector.go:62\nmain.main\n\tcloud/kubernetes/metrics/components/collector/main.go:40\nruntime.main\n\tthird_party/go/gc/src/runtime/proc.go:267""}
-namespace: kube-system
-nodeName: gke-dev-infra-default-pool-e9e1dc67-n4vx
-podName: anetd-svwhb
-
-If you can't scroll all the way on the long line, the important bits are:
-""prometheus/parse.go:140"",""msg"":""Unrecognized line""
-""cilium_k8s_client_api_latency_time_seconds_bucket{method=\""POST\"",path=\""/apis/cilium.io/v2/namespaces/{namespace}/ciliumendpoints\"",le=\""10\""} 16""
-
-and then a stack trace.
-Given that these containers and pods are provided by Google, I can't do a lot about it myself, but it sure would be nice to not get millions of error log messages per day from an idle cluster ...
-","1. It looks like some GKE versions have bugs where cilium-agent-metrics-collector produces a lot of unwanted error logs.
-To avoid these kinds of issues, try upgrading to a patched version, e.g. >=1.28.7-gke.1201000 for 1.28, >=1.29.2-gke.1425000 for 1.29, 28.16.0+ and 29.4.1+. For more details refer to the official GCP release notes.
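-For example (cluster name and zone are placeholders, and the target version should come from the release notes above), upgrading the control plane and then the nodes looks roughly like:
-gcloud container clusters upgrade my-cluster --zone us-central1-a --master --cluster-version 1.29.2-gke.1425000
-gcloud container clusters upgrade my-cluster --zone us-central1-a --node-pool default-pool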
-Refer to this similar issue, which has already been reported in the Public Issue Tracker, and also go through the Google Community issues for more information.
-If the issue still persists, raise a new bug in the Public Issue Tracker describing your problem.
-",Cilium
-"I'm interested in using Kubernetes NetworkPolicy to control network policy. I want to know if the NetworkPolicy is blocking traffic so I can either fix the policies or fix/stop whatever is in violation. 
-We use Calico and they view this as a paid feature. https://github.com/projectcalico/calico/issues/1035
-Cilium has cilium monitor which sounds like it would work if we started using Cilium.
-http://docs.cilium.io/en/latest/troubleshooting/
-Is there a general, vendor-neutral way to monitor network traffic that violates Kuberenetes NetworkPolicy?
-","1. AFAIU, there is no way to create such vendor-neutral tool because NetworkPolicy is just an abstraction. Each networking plugin enforces them differently, (Cilium does that mostly in BPF for L3 and L4 and Envoy for L7), so each plugin needs to provide its own means of accessing this information.
-AFAIK, there is no initiative in Kubernetes community to store this information and provide an interface for CNI plugins to provide this information, but it seems like it would be a fun project.
-Disclaimer: I am on Cilium dev team.
-
-2. Calico's native NetworkPolicy supports a ""Log"" action that allows you to log packets. You can then monitor these logs with monitoring software. Logging is not a default option in Calico! (see Calico's docs)
-So, for example, if you have a pod called ""db"" and you want to create a network policy that blocks and LOGS all TCP traffic destined for the ""db"" pod, here is a sample manifest (calico-npc-db.yaml):
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
-  name: calico-npc-db-netpol
-  namespace: npc
-spec:
-  selector: app == 'db'
-  ingress:
-  - action: Log
-    protocol: TCP
-
-Then you apply this manifest:
-k apply -f calico-npc-db.yaml
-On your cluster node (let's assume the node name is k8s-worker-02), you will see the following type of blocking message in the standard log files (/var/log):
-Apr 24 11:18:42 k8s-worker-02 kernel: [586144.409226] calico-packet:
-IN=cali945ac7714c6 OUT=calie2d9c08122c 
-MAC=ee:ee:ee:ee:ee:ee:0a:c8:dc:75:88:f5:08:00 SRC=192.168.118.67 
-DST=192.168.118.68 LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=2617 DF 
-PROTO=TCP SPT=53712 DPT=3306 WINDOW=64860 RES=0x00 SYN URGP=0 
-
-",Cilium
-"the bpf code:
-//go:build ignore
-#include <linux/bpf.h>
-#include <bpf/bpf_helpers.h>
-
-char _license[] SEC(""license"") = ""GPL"";
-
-struct execve {
-    __u64 unused;
-    __u32 nr;
-    const char *filename;
-    const char *const *argv;
-    const char *const *envp;
-};
-
-SEC(""tracepoint/syscalls/sys_enter_execve"")
-int sys_enter_execve(struct execve *ctx) {
-    unsigned int ret;
-    unsigned int args_size = 0;
-    char argp[128] = {0};
-    for (int i = 0; i < 10; i++) {
-        const char *arg = NULL;
-        bpf_probe_read_user(&arg, sizeof(arg), &ctx->argv[i]);
-        if (!arg)
-            return 0;
-        if (args_size >= 128)
-            return 0;
-        ret = bpf_probe_read_user_str(&argp[args_size], sizeof(arg)+1, arg);
-        if (ret > sizeof(arg)+1)
-            return 0;
-        args_size += ret;
-        bpf_printk(""arg%d: %s "", i, arg);
-    }
-    bpf_printk(""argp: %s\n"", argp);
-    return 0;
-}
-
-the golang code:
-package main
-
-import (
-    ""log""
-    ""time""
-
-    ""github.com/cilium/ebpf/link""
-    ""github.com/cilium/ebpf/rlimit""
-)
-
-func main() {
-
-    if err := rlimit.RemoveMemlock(); err != nil {
-        log.Fatal(err)
-    }
-
-    var objs counterObjects
-    if err := loadCounterObjects(&objs, nil); err != nil {
-        log.Fatalf(""loading objects: %v"", err)
-    }
-    defer objs.Close()
-
-    tpExecve, err := link.Tracepoint(""syscalls"", ""sys_enter_execve"", objs.SysEnterExecve, nil)
-    if err != nil {
-        log.Fatal(err)
-    }
-    defer tpExecve.Close()
-
-    log.Printf(""waiting for signals"")
-
-    ticker := time.NewTicker(2 * time.Second)
-    defer ticker.Stop()
-
-    for range ticker.C {
-        log.Printf(""tick \n"")
-    }
-}
-
-and there is the error:
-$ sudo ./cilium-go
-2024/04/09 14:43:31 loading objects: field SysEnterExecve: program sys_enter_execve: load program: permission denied: invalid variable-offset indirect access to stack R1 var_off=(0x0; 0x7f) size=9 (106 line(s) omitted)
-As you can see, I want to capture the arguments of the argv list in the function and store them in a one-dimensional array, but I've failed. I don't know how to fix this error or how to implement this feature. I hope someone here can help me out.
-","1. I found that this line of code char argp [128]={0}; In the middle, its size was set too small, which caused this error. Setting 128 to a number of 480 or larger can solve the problem
-",Cilium
-"I use perf to sample the ebpf function, but I use bpf_ktime_get_ns to get the current second of the system found to be negative, I don't know why
-SEC(""perf_event"")
-int do_perf_event(struct bpf_perf_event_data *ctx) {
-    u32 tgid = 0;
-    u32 pid = 0;
-    u64 id = bpf_get_current_pid_tgid();
-    pid = id;
-    if ( pid == 0 )
-        return 0;
-    tgid = id >> 32;
-    //create map key
-    struct key_t key = {0};
-    key.pid =tgid;
-    u64 ts = bpf_ktime_get_ns();
-    key.ntime= ts;
-    bpf_perf_event_output((struct pt_regs *)ctx, &output_events, BPF_F_CURRENT_CPU, &key, sizeof(key));
-    return 0;
-}
-
-reader, err := perf.NewReader(objs.bpfMaps.OutputEvents, 10000 * int(unsafe.Sizeof(bpfSchedEventT{})))
-for {
-    record, err := reader.Read()
-    var event bpfSchedEventT
-    if err := binary.Read(bytes.NewBuffer(record.RawSample), binary.LittleEndian, &event); err != nil {
-         continue
-     }
-
-     fmt.Println(event.Pid,event.Ntime)
-}
-
- pid      bpf_ktime_get_ns
- 2304287  -4952501534609899520
- 2304287  -4951917187129409536
- 2304287  -4909853600282312704
- 2304287  -4909930153779396608
- 2304287  -4909776359590461440
- 2304287  -4909124929015775232
- 2305278  -4908938351341469696
- 2134146  -4909027119725543424
- 4026847  -4908622722784821248
-I don't know why the time is negative
-",,Cilium
-"I have taken over a project that uses django cumulus for cloud storage. On my development machine, some times I use a slow internet connection, and every time I save a change, django recompiles and tries to make a connection to the racksapace store
-Starting new HTTPS connection (1): identity.api.rackspacecloud.com
-
-This sometimes takes 15 seconds and is a real pain. I read a post where someone said they turned off cumulus for local development. I think this was done by setting
-DEFAULT_FILE_STORAGE
-
-but unfortunately the poster did not specify. If someone knows a simple setting I can put in my local settings to serve media and static files from my local machine and stop django trying to connect to my cloud storage on every save, that is what I want to do.
-","1. Yeah it looks like you should just need the DEFAULT_FILE_STORAGE to be default value, which is django.core.files.storage.FileSystemStorage according to the source code.
-However, a better approach would be to not set anything in your local settings and set the DEFAULT_FILE_STORAGE and CUMULUS in a staging_settings.py or prod_settings.py file.
-
-2. The constant reloading of the rackspace bucket was because the previous developer had 
-from cumulus.storage import SwiftclientStorage
-class PrivateStorage(SwiftclientStorage):
-
-and in models.py
-from common.storage import PrivateStorage
-PRIVATE_STORE = PrivateStorage()
-...
-class Upload(models.Model):
-    upload = models.FileField(storage=PRIVATE_STORE, upload_to=get_upload_path)
-
-This meant every time the project reloaded, it would create a new https connection to rackspace, and time out if the connection was poor. I created a settings flag to control this by putting the import of SwiftclientStorage and defining of PrivateStorage like so
-from django.conf import settings
-if settings.USECUMULUS:
-    from cumulus.storage import SwiftclientStorage
-
-    class PrivateStorage(SwiftclientStorage):
-...
-else:
-    class PrivateStorage():
-        pass
-
-",Cumulus
-"i've just downloaded cumulus, POCO, OpenSSL and LuaJIT and visual studio. now i'm trying to compile it as it said in instruction here
-however i've never used visual studio and i've never programed on visual c. so i'm stuck at the very begining.
-in instruction i've put link above said ""Visual Studio 2008/2010 solution and project files are included. It searchs external librairies in External/lib folder and external includes in External/include folder in the root Cumulus folder. So you must put POCO, OpenSSL and LuaJIT headers and libraries in these folders."". i tryed everything but compiler can't find 'Poco/foundation.h'.
-and it seems to me if i deal with this error there will more over.
-so if someone has expirience in compiling cumulus-server please help me to deal with it.
-thanks a lot for you help!
-","1. step 1 - create 2 files
-cumulus_root_folder/external/lib
-cumulus_root_folder/external/include
-step 2 - put the headers into the include folder from the other 3 dependent projects
-dependent projects are: openssl, poco, luajit.
-put openssl file into the external/include from openssl-version/include
-put Poco file into the external/include from poco-version/Foundation/include
-put SAX, DOM, XML files into the external/include/Poco from poco-version/XML/include/Poco
-put Net file into the external/include/Poco from poco-version/Net/include/Poco
-put Util file into the external/include/Poco from poco-version/Util/include/Poco
-put LuaJIT's headers with the same way.
-now you can build cumuluslib.
-step 3 - Open your cumuluslib project with specific visual studio version then build it.
-When it's done you can see the lib file at cumulus_root_folder/cumuluslib/lib
-step 4 - now you have to build the 3 dependent projects and put their lib files into cumulus_root_folder/external/lib. It's a tough mission; maybe you will need 32-bit Windows. Do not forget: when building POCO, do it in debug mode, otherwise some of your files will be missing.
-When you're done building and gathering the lib files you can build cumulusserver, the same way as cumuluslib. Then your cumulus.exe will be in cumulus_root_folder/cumulusserver/debug
-",Cumulus
-"I am considering using Nimbus for a cloud application. I know that Nimbus uses Cumulus but I don't know if it supports Amazon-RDS service (and it must because it is a requirement).
-Does anyone know?
-","1. I sent an e-mail to Nimbus support and the reply to the question was the following:
-
-Hi Pedro,
-Sorry but Nimbus does not support Amazon-RDS.
-
-I want to point out that Nimbus support was very friendly. 
-I wish the Nimbus project the best of luck, and I hope this answer saves a lot of you some headaches.
-",Cumulus
-"I am having issues deploying juypterhub on kubernetes cluster. The issue I am getting is that the hub pod is stuck in pending. 
-Stack:
-kubeadm
-flannel
-weave
-helm
-jupyterhub
-Runbook:
-$kubeadm init --pod-network-cidr=""10.244.0.0/16"" 
-$sudo cp /etc/kubernetes/admin.conf $HOME/ && sudo chown $(id -u):$(id -g) $HOME/admin.conf && export KUBECONFIG=$HOME/admin.conf
-$kubectl create -f pvc.yml
-$kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-aliyun.yml
-$kubectl apply --filename https://git.io/weave-kube-1.6
-$kubectl taint nodes --all node-role.kubernetes.io/master-
-
-Helm installations as per https://zero-to-jupyterhub.readthedocs.io/en/latest/setup-helm.html
-Jupyter installations as per https://zero-to-jupyterhub.readthedocs.io/en/latest/setup-jupyterhub.html
-config.yml
-proxy:
-  secretToken: ""asdf""
-singleuser:
-  storage:
-    dynamic:
-      storageClass: local-storage
-
-pvc.yml
-apiVersion: v1
-kind: PersistentVolume
-metadata:
-  name: standard
-spec:
-  capacity:
-    storage: 100Gi
-  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
-  volumeMode: Filesystem
-  accessModes:
-  - ReadWriteOnce
-  persistentVolumeReclaimPolicy: Delete
-  storageClassName: local-storage
-  local:
-    path: /dev/vdb
-  nodeAffinity:
-    required:
-      nodeSelectorTerms:
-      - matchExpressions:
-        - key: kubernetes.io/hostname
-          operator: In
-          values:
-          - example-node
----
-kind: PersistentVolumeClaim
-apiVersion: v1
-metadata:
-  name: standard
-spec:
-  storageClassName: local-storage
-  accessModes:
-    - ReadWriteOnce
-  resources:
-    requests:
-      storage: 3Gi
----
-kind: StorageClass
-apiVersion: storage.k8s.io/v1
-metadata:
-  name: local-storage
-provisioner: kubernetes.io/no-provisioner
-volumeBindingMode: WaitForFirstConsumer
-
-The warning is:
-$kubectl --namespace=jhub get pod
-
-NAME                     READY   STATUS    RESTARTS   AGE
-hub-fb48dfc4f-mqf4c      0/1     Pending   0          3m33s
-proxy-86977cf9f7-fqf8d   1/1     Running   0          3m33s
-
-$kubectl --namespace=jhub describe pod hub
-
-Events:
-  Type     Reason            Age                From               Message
-  ----     ------            ----               ----               -------
-  Warning  FailedScheduling  35s (x3 over 35s)  default-scheduler  pod has unbound immediate PersistentVolumeClaims
-
-
-
-$kubectl --namespace=jhub describe pv
-
-Name:            standard
-Labels:          type=local
-Annotations:     pv.kubernetes.io/bound-by-controller: yes
-Finalizers:      [kubernetes.io/pv-protection]
-StorageClass:    manual
-Status:          Bound
-Claim:           default/standard
-Reclaim Policy:  Retain
-Access Modes:    RWO
-VolumeMode:      Filesystem
-Capacity:        10Gi
-Node Affinity:   <none>
-Message:
-Source:
-    Type:          HostPath (bare host directory volume)
-    Path:          /dev/vdb
-    HostPathType:
-Events:            <none>
-
-
-$kubectl --namespace=kube-system describe pvc
-
-Name:          hub-db-dir
-Namespace:     jhub
-StorageClass:
-Status:        Pending
-Volume:
-Labels:        app=jupyterhub
-               chart=jupyterhub-0.8.0-beta.1
-               component=hub
-               heritage=Tiller
-               release=jhub
-Annotations:   <none>
-Finalizers:    [kubernetes.io/pvc-protection]
-Capacity:
-Access Modes:
-VolumeMode:    Filesystem
-Events:
-  Type       Reason         Age                From                         Message
-  ----       ------         ----               ----                         -------
-  Normal     FailedBinding  13s (x7 over 85s)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
-Mounted By:  hub-fb48dfc4f-mqf4c
-
-I tried my best to follow the localstorage volume configuration on the official kubernetes website, but with no luck
--G
-","1. Managed to fix it using the following configuration.
-Key points:
- - I forgot to add the node in nodeAffinity
- - it works without putting in volumeBindingMode
-apiVersion: v1
-kind: PersistentVolume
-metadata:
-  name: standard
-spec:
-  capacity:
-    storage: 2Gi
-  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
-  volumeMode: Filesystem
-  accessModes:
-  - ReadWriteOnce
-  persistentVolumeReclaimPolicy: Retain
-  storageClassName: local-storage
-  local:
-    path: /temp
-  nodeAffinity:
-    required:
-      nodeSelectorTerms:
-      - matchExpressions:
-        - key: kubernetes.io/hostname
-          operator: In
-          values:
-          - INSERT_NODE_NAME_HERE
-
-kind: StorageClass
-apiVersion: storage.k8s.io/v1
-metadata:
-  annotations:
-    storageclass.kubernetes.io/is-default-class: ""true""
-  name: local-storage
-provisioner: kubernetes.io/no-provisioner
-
-config.yaml
-proxy:
-  secretToken: ""token""
-singleuser:
-  storage:
-    dynamic:
-      storageClass: local-storage
-
-make sure your storage/pv looks like this:
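-To make the hub pick up the updated config.yaml, re-run the Helm upgrade from the zero-to-jupyterhub guide, roughly (release and namespace names match this thread; the chart repo must already be added per the guide, and --version should match the chart you installed):
-helm upgrade jhub jupyterhub/jupyterhub --version=0.8.0 -f config.yaml
-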
-root@asdf:~# kubectl --namespace=kube-system describe pv
-Name:              standard
-Labels:            <none>
-Annotations:       kubectl.kubernetes.io/last-applied-configuration:
-                     {""apiVersion"":""v1"",""kind"":""PersistentVolume"",""metadata"":{""annotations"":{},""name"":""standard""},""spec"":{""accessModes"":[""ReadWriteOnce""],""capa...
-                   pv.kubernetes.io/bound-by-controller: yes
-Finalizers:        [kubernetes.io/pv-protection]
-StorageClass:      local-storage
-Status:            Bound
-Claim:             jhub/hub-db-dir
-Reclaim Policy:    Retain
-Access Modes:      RWO
-VolumeMode:        Filesystem
-Capacity:          2Gi
-Node Affinity:
-  Required Terms:
-    Term 0:        kubernetes.io/hostname in [asdf]
-Message:
-Source:
-    Type:  LocalVolume (a persistent volume backed by local storage on a node)
-    Path:  /temp
-Events:    <none>
-
-root@asdf:~# kubectl --namespace=kube-system describe storageclass
-Name:                  local-storage
-IsDefaultClass:        Yes
-Annotations:           storageclass.kubernetes.io/is-default-class=true
-Provisioner:           kubernetes.io/no-provisioner
-Parameters:            <none>
-AllowVolumeExpansion:  <unset>
-MountOptions:          <none>
-ReclaimPolicy:         Delete
-VolumeBindingMode:     Immediate
-Events:                <none>
-
-Now the hub pod looks something like this:
-root@asdf:~# kubectl --namespace=jhub describe pod hub
-Name:               hub-5d4fcd8fd9-p6crs
-Namespace:          jhub
-Priority:           0
-PriorityClassName:  <none>
-Node:               asdf/192.168.0.87
-Start Time:         Sat, 23 Feb 2019 14:29:51 +0800
-Labels:             app=jupyterhub
-                    component=hub
-                    hub.jupyter.org/network-access-proxy-api=true
-                    hub.jupyter.org/network-access-proxy-http=true
-                    hub.jupyter.org/network-access-singleuser=true
-                    pod-template-hash=5d4fcd8fd9
-                    release=jhub
-Annotations:        checksum/config-map: --omitted
-                    checksum/secret: --omitted--
-Status:             Running
-IP:                 10.244.0.55
-Controlled By:      ReplicaSet/hub-5d4fcd8fd9
-Containers:
-  hub:
-    Container ID:  docker://d2d4dec8cc16fe21589e67f1c0c6c6114b59b01c67a9f06391830a1ea711879d
-    Image:         jupyterhub/k8s-hub:0.8.0
-    Image ID:      docker-pullable://jupyterhub/k8s-hub@sha256:e40cfda4f305af1a2fdf759cd0dcda834944bef0095c8b5ecb7734d19f58b512
-    Port:          8081/TCP
-    Host Port:     0/TCP
-    Command:
-      jupyterhub
-      --config
-      /srv/jupyterhub_config.py
-      --upgrade-db
-    State:          Running
-      Started:      Sat, 23 Feb 2019 14:30:28 +0800
-    Ready:          True
-    Restart Count:  0
-    Requests:
-      cpu:     200m
-      memory:  512Mi
-    Environment:
-      PYTHONUNBUFFERED:        1
-      HELM_RELEASE_NAME:       jhub
-      POD_NAMESPACE:           jhub (v1:metadata.namespace)
-      CONFIGPROXY_AUTH_TOKEN:  <set to the key 'proxy.token' in secret 'hub-secret'>  Optional: false
-    Mounts:
-      /etc/jupyterhub/config/ from config (rw)
-      /etc/jupyterhub/secret/ from secret (rw)
-      /srv/jupyterhub from hub-db-dir (rw)
-      /var/run/secrets/kubernetes.io/serviceaccount from hub-token-bxzl7 (ro)
-Conditions:
-  Type              Status
-  Initialized       True
-  Ready             True
-  ContainersReady   True
-  PodScheduled      True
-Volumes:
-  config:
-    Type:      ConfigMap (a volume populated by a ConfigMap)
-    Name:      hub-config
-    Optional:  false
-  secret:
-    Type:        Secret (a volume populated by a Secret)
-    SecretName:  hub-secret
-    Optional:    false
-  hub-db-dir:
-    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
-    ClaimName:  hub-db-dir
-    ReadOnly:   false
-  hub-token-bxzl7:
-    Type:        Secret (a volume populated by a Secret)
-    SecretName:  hub-token-bxzl7
-    Optional:    false
-QoS Class:       Burstable
-Node-Selectors:  <none>
-Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
-                 node.kubernetes.io/unreachable:NoExecute for 300s
-Events:          <none>
-
-",Flannel
-"Setting up a new k8s cluster on Centos 7 using flannel as the CNI plugin. When joining a worker to the cluster, the CNI0 bridge is not created.
-Environment is kubernetes 13.2.1, Docker-CE 18.09, Flannel 010. Centos 7.4. My understanding is that CNI0 is created by brctl when called by flannel.  With docker debug,  I can see that the install-cni-kube-flannel container is instantiated.  In looking at /var/lib, I do not see that /var/lib/cni directory is created.
-I would expect that CNI0 and the /var/lib/cni directory would be created by the install-cni-kube-flannel container. How would I troubleshoot this further? Are there logging capabilities for the CNI interface?
-","1. With further research, I observed that the /var/lib/cni directory on the worker node was not created until I deployed a pod to that node and exposed a service.  Once I did that,  the CNI plugin was called,  /var/lib/cni was created as well as CNI0. 
-
-2. For me setting this parameter did the thing
-sysctl -w net.ipv4.ip_forward=1
-
-",Flannel
-"I am trying to setup a Kubernetes cluster on local machines. Bare metal. No OpenStack, No Maas or something.
-After kubeadm init ... on the master node, kubeadm join ... on the slave nodes and applying flannel at the master I get the message from the slaves:
-
-runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
-
-Can anyone tell me what I have done wrong or missed any steps?
-Should flannel be applied to all the slave nodes as well? If yes, they do not have a admin.conf...
-PS. All the nodes do not have internet access. That means all files have to be copied manually via ssh.
-","1. The problem was the missing internet connection. After loading the Docker images manually to the worker nodes they appear to be ready.
-Unfortunately I did not find a helpful error message around this.
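-For reference (not part of the original answer, just one common way to copy images to air-gapped nodes; the image tag and hostname are placeholders - use whatever your flannel manifest references):
-docker save quay.io/coreos/flannel:v0.10.0-amd64 | ssh worker1 'docker load'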
-
-2. I think this problem is caused by kubeadm initializing coredns before flannel, so it throws ""network plugin is not ready: cni config uninitialized"".
-Solution:
-1. Install flannel with kubectl -n kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
-2. Delete the coredns pods so they get recreated:
-kubectl -n kube-system delete pod coredns-xx-xx
-3. Then run kubectl get pods to see if it works.
-if you see the error ""cni0 already has an IP address different from 10.244.1.1/24"",
-follow this:
-ifconfig  cni0 down
-brctl delbr cni0
-ip link delete flannel.1
-
-if you see this error ""Back-off restarting failed container"", you can get the log by
-root@master:/home/moonx/yaml# kubectl logs coredns-86c58d9df4-x6m9w -n=kube-system
-.:53
-2019-01-22T08:19:38.255Z [INFO] CoreDNS-1.2.6
-2019-01-22T08:19:38.255Z [INFO] linux/amd64, go1.11.2, 756749c
-CoreDNS-1.2.6
-linux/amd64, go1.11.2, 756749c
- [INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769
- [FATAL] plugin/loop: Forwarding loop detected in ""."" zone. Exiting. See https://coredns.io/plugins/loop#troubleshooting. Probe query: ""HINFO 1599094102175870692.6819166615156126341."".
-
-Then check the file ""/etc/resolv.conf"" on the failed node; if the nameserver points to localhost there will be a forwarding loop. Change it to:
-#nameserver 127.0.1.1
-nameserver 8.8.8.8
-
-
-3. Usually flannel is deployed as daemonset. Meaning on all worker nodes.
-",Flannel
-"i have been trying to setup k8s in a single node,everything was installed fine. but when i check the status of my kube-system pods, 
-CNI -> flannel pod has crashed, reason ->  Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: x.x.x.x x.x.x.x x.x.x.x
-CoreDNS pods status is ContainerCreating.
-In My Office, the current server has been configured to have an static ip and when i checked /etc/resolv.conf
-This is the output
-# Generated by NetworkManager
-search ORGDOMAIN.BIZ
-nameserver 192.168.1.12
-nameserver 192.168.2.137
-nameserver 192.168.2.136
-# NOTE: the libc resolver may not support more than 3 nameservers.
-# The nameservers listed below may not be recognized.
-nameserver 192.168.1.10
-nameserver 192.168.1.11
-
-I'm unable to find the root cause. What should I be looking at?
-","1. In short, you have too many entries in /etc/resolv.conf.
-This is a known issue:
-
-Some Linux distributions (e.g. Ubuntu), use a local DNS resolver by default (systemd-resolved). Systemd-resolved moves and replaces /etc/resolv.conf with a stub file that can cause a fatal forwarding loop when resolving names in upstream servers. This can be fixed manually by using kubelet’s --resolv-conf flag to point to the correct resolv.conf (With systemd-resolved, this is /run/systemd/resolve/resolv.conf). kubeadm (>= 1.11) automatically detects systemd-resolved, and adjusts the kubelet flags accordingly.
-
-Also
-
-Linux’s libc is impossibly stuck (see this bug from 2005) with limits of just 3 DNS  nameserver  records and 6 DNS  search  records. Kubernetes needs to consume 1  nameserver  record and 3  search  records. This means that if a local installation already uses 3  nameservers or uses more than 3  searches, some of those settings will be lost. As a partial workaround, the node can run  dnsmasq  which will provide more  nameserver  entries, but not more  search  entries. You can also use kubelet’s  --resolv-conf  flag.
-If you are using Alpine version 3.3 or earlier as your base image, DNS may not work properly owing to a known issue with Alpine. Check  here  for more information.
-
-You possibly could change that in the Kubernetes code, but I'm not sure about the functionality. As it's set to that value for purpose.
-Code can be located here
-const (
-    // Limits on various DNS parameters. These are derived from
-    // restrictions in Linux libc name resolution handling.
-    // Max number of DNS name servers.
-    MaxDNSNameservers = 3
-    // Max number of domains in search path.
-    MaxDNSSearchPaths = 6
-    // Max number of characters in search path.
-    MaxDNSSearchListChars = 256
-)
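-
-Separately, a minimal sketch of the kubelet --resolv-conf workaround quoted above, assuming systemd-resolved and kubelet's KubeletConfiguration file (the path is the one mentioned in the quote):
-# point the kubelet at the real resolv.conf generated by systemd-resolved
-# instead of the local stub, so pods only inherit the actual upstream servers
-apiVersion: kubelet.config.k8s.io/v1beta1
-kind: KubeletConfiguration
-resolvConf: /run/systemd/resolve/resolv.conf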
-
-
-2. I have the same issue but only three entries in my resolv.conf.
-
-Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 1.1.1.1
-
-My resolv.conf
-nameserver 10.96.0.10 
-nameserver 1.1.1.1
-nameserver 1.0.0.1
-options timeout:1
-
-But indeed my /run/systemd/resolve/resolv.conf had redundant DNS entries.
-nameserver 10.96.0.10
-nameserver 1.1.1.1
-nameserver 1.0.0.1
-# Too many DNS servers configured, the following entries may be ignored.
-nameserver 1.1.1.1
-nameserver 1.0.0.1
-nameserver 2606:4700:4700::1111
-nameserver 2606:4700:4700::1001
-search .
-
-When I erase all the 1.1.1.1 and 1.0.0.1 entries, they reappear duplicated after a systemd-resolved service restart...
-",Flannel
-"i update my system by:
-$ apt-get upgrade
-
-Then bad things happened: when I reboot the system, I get a timeout waiting for the network connection.
-I am pretty sure that my network connection is fine (it was unchanged during the update), and I can get an IP allocated (both ethernet and wlan).
-I have consulted Google:
-# anyway, i was told to run
-$ sudo netplan apply
-# and i get
-WARNING:root:Cannot call Open vSwitch: ovsdb-server.service is not running.
-
-I have never installed this ovsdb stuff on my server, and this warning is really annoying
-
-(it may be related to the network timeout, or not).
-
-How can I fix this (either to get rid of this warning, or to solve the network connection problem)?
-I tried:
-$ systemctl status systemd-networkd-wait-online.service
-
-and I get:
-× systemd-networkd-wait-online.service - Wait for Network to be Configured
-     Loaded: loaded (/lib/systemd/system/systemd-networkd-wait-online.service; enabled; vendor preset: disabled)
-     Active: failed (Result: timeout) since Tue 2023-08-22 05:12:01 CST; 2 months 3 days ago
-       Docs: man:systemd-networkd-wait-online.service(8)
-    Process: 702 ExecStart=/lib/systemd/systemd-networkd-wait-online (code=exited, status=0/SUCCESS)
-   Main PID: 702 (code=exited, status=0/SUCCESS)
-        CPU: 22ms
-
-Aug 22 05:11:59 ubuntu systemd[1]: Starting Wait for Network to be Configured...
-Aug 22 05:12:01 ubuntu systemd[1]: systemd-networkd-wait-online.service: start operation timed out. Terminating.
-Aug 22 05:12:01 ubuntu systemd[1]: systemd-networkd-wait-online.service: Failed with result 'timeout'.
-Aug 22 05:12:01 ubuntu systemd[1]: Failed to start Wait for Network to be Configured.
-
-","1. i have solved this problem
-netplan apply says ovsdb-server.service is not running, then i just install this openvswitch
-since i run ubuntu server in raspberry pi, i need to install extra lib:
-# run this first
-$ sudo apt-get install linux-modules-extra-raspi
-# run this then
-$ sudo apt-get install openvswitch-switch-dpdk
-
-You may need to verify the installation by running these commands again.
-After the installation completes, the annoying WARNING no longer shows up:
-$ sudo netplan try
-
-However, systemd-networkd-wait-online.service still times out, no matter how many times you restart it.
-I consulted the man page for systemd-networkd-wait-online.service:
-this service simply waits until all interfaces managed by systemd-networkd are ready.
-In fact, I only use the ethernet and wlan interfaces, and these interfaces work well:
-$ ip a
-# status of my interfaces
-
-So I asked ChatGPT how to make systemd-networkd-wait-online.service wait only for specific interfaces.
-It told me to add arguments in /lib/systemd/system/systemd-networkd-wait-online.service:
-$ vim /lib/systemd/system/systemd-networkd-wait-online.service
-[Service]
-Type=oneshot
-# flag `--interface` is used to wait specific interface
-# in this case, i need to wait wlan interface and ethernet interface
-ExecStart=/lib/systemd/systemd-networkd-wait-online --interface=wlan0 --interface=eth0
-RemainAfterExit=yes
-# this parameter is used to set timeout, 30s is enough for my pi
-TimeoutStartSec=30sec
-
-After editing, you need to reload the unit file and restart the service:
-$ systemctl daemon-reload
-$ systemctl restart systemd-networkd-wait-online.service
-
-That is all; everything should be fine (maybe).
-
-2. It is a buggy warning that can be ignored.
-See https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/2041727 on the resolution of the bug.
-
-3. I got the error on my Ubuntu 22.04 server when configuring a static IP for it.
-This is a bug which has already been reported here (https://bugs.launchpad.net/ubuntu/+source/netplan.io/+bug/2041727). However, even though it is only a warning, ignoring it was not sufficient.
-To solve the issue on my end, I did a system upgrade:
-  sudo apt upgrade
-
-After this I installed openvswitch-switch-dpdk:
-    sudo apt install openvswitch-switch-dpdk
-
-After that I applied the netplan configuration using:
-   sudo netplan apply
-
-and the issue was resolved. Hope this helps.
-",Open vSwitch
-"I'm trying to create Open vSwitch QoS settings by using the Ansible openvswitch_db module.
-The Ansible task should emulate a command like the following:
-ovs-vsctl set port vm00-eth1 qos=@newqos -- --id=@newqos create qos type=egress-policer other-config:cir=30000000 other-config:cbs=3000000
-I tried an Ansible task like the following, but it doesn't change any QoS settings.
-- openvswitch.openvswitch.openvswitch_db:
-    table: Port
-    record: vm00-eth1
-    col: other_config
-    key: cir
-    value: 30000000
-
-
-The Ansible task runs through successfully, but there is still no QoS setting on that port:
-root@host00 ~ # ovs-appctl qos/show vm00-eth1
-QoS not configured on vm00-eth1
-
-","1. 
-ovs-vsctl set port vm00-eth1 qos=@newqos -- --id=@newqos create qos type=egress-policer other-config:cir=30000000 other-config:cbs=3000000
-
-When you use the command above, you are operating two tables in fact.
-The Port table and the QoS table are referenced together via ""--"".
-The command performs the following actions:
-
-set the value of column qos in Port table for entry vm00-eth1 to newqos
-set the value of column type in QoS table for entry newqos to egress-policer
-set the value of key cir in column other_config in QoS table for entry newqos to 30000000
-set the value of key cbs in column other_config in QoS table for entry newqos to 3000000
-
-So you have to understand the relationship between keys, columns and tables in ovsdb, as well as the connections between tables.
-It would not be easy to express all of these db operations with the Ansible openvswitch_db module. I would recommend you use the Ansible shell module instead:
-- name: setting qos
-  shell: ""{{ item.command }}""
-  args:
-    warn: false
-  with_items:
-    - { ""command"": ""ovs-vsctl set port vm00-eth1 qos=@newqos -- --id=@newqos create qos type=egress-policer other-config:cir=30000000 other-config:cbs=3000000"" }
-
-",Open vSwitch
-"I have set two queues in a port .I want to know how many packets are waiting in the queues .now I can only get the tx_packets of a queue ,can I get rx_packets of a queue? or,do you have a way to get the space has been used in a queue?
-","1. Due to issues like Microbursts, and the fact that they are processed at the speed of light, capturing the moment of a problem or overflow in such queues with a query-based monitoring system is almost impossible or requires extreme luck. Only mechanisms called ""lossless"" have been developed in the TCP and Ethernet layers for these issues. The chip knowing that the queue is full generates some flow control packets at the Ethernet or TCP layer. This leads to a warning-level log being written both on the switch/router and host side, and very rarely, statistics being added. My advice would be to read up on PFC, Congestion Control, and DCBX.
-",Open vSwitch
-"We have a ArgoCD setup running in kind, where Crossplane is installed as ArgoCD Application (example repository here). Crossplane Providers are also installed via an ArgoCD Application like this:
-apiVersion: argoproj.io/v1alpha1
-kind: Application
-metadata:
-  name: provider-aws
-  namespace: argocd
-  labels:
-    crossplane.jonashackt.io: crossplane
-  finalizers:
-    - resources-finalizer.argocd.argoproj.io
-spec:
-  project: default
-  source:
-    repoURL: https://github.com/jonashackt/crossplane-argocd
-    targetRevision: HEAD
-    path: upbound/provider-aws/provider
-  destination:
-    namespace: default
-    server: https://kubernetes.default.svc
-  syncPolicy:
-    automated:
-      prune: true    
-    retry:
-      limit: 5
-      backoff:
-        duration: 5s 
-        factor: 2 
-        maxDuration: 1m
-
-The Provider is defined like this in the Argo spec.source.path:
-apiVersion: pkg.crossplane.io/v1
-kind: Provider
-metadata:
-  name: provider-aws-s3
-spec:
-  package: xpkg.upbound.io/upbound/provider-aws-ec2:v1.1.1
-  packagePullPolicy: Always
-  revisionActivationPolicy: Automatic
-  revisionHistoryLimit: 1
-
-Now that a new Crossplane provider version provider-aws-ec2:v1.2.1 has been released, we see the following issue: the provider gets into the Degraded state:
-
-And as an event we got the following error:
-cannot apply package revision: cannot create object: ProviderRevision.pkg.crossplane.io ""provider-aws-ec2-150095bdd614"" is invalid: metadata.ownerReferences: Invalid value: []v1.OwnerReference{v1.OwnerReference{APIVersion:""pkg.crossplane.io/v1"", Kind:""Provider"", Name:""provider-aws-ec2"", UID:""30bda236-6c12-412c-a647-b96368eff8b6"", Controller:(*bool)(0xc02afeb38c), BlockOwnerDeletion:(*bool)(0xc02afeb38d)}, v1.OwnerReference{APIVersion:""pkg.crossplane.io/v1"", Kind:""Provider"", Name:""provider-aws-ec2"", UID:""ee890f53-7590-4957-8f81-e92b931c4e8d"", Controller:(*bool)(0xc02afeb38e), BlockOwnerDeletion:(*bool)(0xc02afeb38f)}}: Only one reference can have Controller set to true. Found ""true"" in references for Provider/provider-aws-ec2 and Provider/provider-aws-ec2
-
-Looking at kubectl get providerrevisions we saw that the new Provider had already been installed (without us doing anything) and that the 'old' Provider was no longer HEALTHY:
-kubectl get providerrevisions
-NAME                                       HEALTHY   REVISION   IMAGE                                                STATE      DEP-FOUND   DEP-INSTALLED   AGE
-provider-aws-ec2-3d66ea2d7903              Unknown   1          xpkg.upbound.io/upbound/provider-aws-ec2:v1.1.1      Active     1           1               5m31s
-provider-aws-ec2-3d66ea2d7903              Unknown   1          xpkg.upbound.io/upbound/provider-aws-ec2:v1.2.1      Active     1           1               5m31s
-upbound-provider-family-aws-7cc64a779806   True      1          xpkg.upbound.io/upbound/provider-family-aws:v1.2.1   Active                                 30m
-
-What can we do to prevent the Provider Upgrades breaking our setup?
-","1. As ArgoCD is doing the GitOps part in this setup, we need to let it take the lead in applying changes that have to be made in Git. With the current Provider setup, Crossplane automatically upgrades Providers without ArgoCD knowing anything about it. And thus trying to reconcile the state to what's stated in Git. Thus both mechanisms will get into an ongoing 'fight'.
-To get ArgoCD into the lead of Provider upgrades through Git commits, we should configure the packagePullPolicy to IfNotPresent instead of Always, which means ""Check for new packages every minute and download any matching package that isn’t in the cache"" as the docs state:
-apiVersion: pkg.crossplane.io/v1
-kind: Provider
-metadata:
-  name: provider-aws-s3
-spec:
-  package: xpkg.upbound.io/upbound/provider-aws-ec2:v1.1.1
-  packagePullPolicy: IfNotPresent
-  revisionActivationPolicy: Automatic
-  revisionHistoryLimit: 1
-
-BUT interestingly we need to leave revisionActivationPolicy set to Automatic! Otherwise, the Provider will never become active and healthy! I found the docs aren't that clear on this point.
-TLDR: with packagePullPolicy: IfNotPresent Crossplane will not automatically pull new Provider versions; only a Git commit with a Provider version change will trigger the download - and also the upgrade, through revisionActivationPolicy: Automatic.
-Remember to be a bit patient for the upgrade to run through - it can take up to a few minutes and depends on what the Provider to upgrade has to do right now (we waited too short a time and thus thought this configuration was wrong, but it is not).
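-A minimal sketch of what such a Git-driven upgrade then looks like for the Provider from this question - only the package tag changes in the manifest that ArgoCD syncs (version numbers taken from above):
-apiVersion: pkg.crossplane.io/v1
-kind: Provider
-metadata:
-  name: provider-aws-s3
-spec:
-  # bumping this line in Git is what now triggers the download and upgrade
-  package: xpkg.upbound.io/upbound/provider-aws-ec2:v1.2.1
-  packagePullPolicy: IfNotPresent
-  revisionActivationPolicy: Automatic
-  revisionHistoryLimit: 1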
-",Crossplane
-"I'm evaluating crossplane to use as our go to tool to deploy our clients different solutions and have struggled with one issue:
-We want to install crossplane to one cluster on GCP (which we create manually) and use that crossplane to provision new cluster on which we can install helm charts and deploy as usual.
-The main problem so far is that we haven't figured out how to tell Crossplane to install the Helm charts into clusters other than its own.
-This is what we have tried so far:
-The provider-config in the example:
-apiVersion: helm.crossplane.io/v1beta1
-kind: ProviderConfig
-metadata:
-  name: helm-provider
-spec:
-  credentials:
-    source: InjectedIdentity
-
-...which works but installs everything into the same cluster as crossplane.
-and the other example:
-apiVersion: helm.crossplane.io/v1beta1
-kind: ProviderConfig
-metadata:
-  name: default
-spec:
-  credentials:
-    source: Secret
-    secretRef:
-      name: cluster-credentials
-      namespace: crossplane-system
-      key: kubeconfig
-
-...which required a lot of Makefile scripting to more easily generate a kubeconfig for the new cluster, and which with that kubeconfig still gives a lot of errors (it does begin to create something in the new cluster, but it doesn't work all the way; we get errors like ""PodUnschedulable Cannot schedule pods: gvisor"").
-I have only tried Crossplane for a couple of days, so I'm aware that I might be approaching this from a completely wrong angle, but I do like the promise of Crossplane and its approach compared to Terraform and the like.
-Am I thinking completely wrong or am I missing something obvious?
-The second test with the kubeconfig feels quite complicated right now (many steps in the correct order are needed to achieve it).
-","1. As you've noticed, ProviderConfig with InjectedIdentity is for the case where provider-helm installs the helm release into the same cluster.
-To deploy to other clusters, provider-helm needs a kubeconfig file of the remote cluster which needs to be provided as a Kubernetes secret and referenced from ProviderConfig. So, as long as you've provided a proper kubeconfig to an external cluster that is accessible from your Crossplane cluster (a.k.a. control plane), provider-helm should be able to deploy the release to the remote cluster.
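-A minimal sketch of providing that secret, assuming you have already exported a kubeconfig for the remote cluster to a local file (the file name is a placeholder; the secret name, namespace and key match the ProviderConfig from your question):
-# create the secret that the ProviderConfig's secretRef points to
-kubectl create secret generic cluster-credentials \
-  --namespace crossplane-system \
-  --from-file=kubeconfig=./remote-cluster.kubeconfig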
-So, it looks like you're on the right track regarding configuring provider-helm, and since you observed something getting deployed to the external cluster, you provided a valid kubeconfig, and provider-helm could access and authenticate to the cluster.
-The last error you're getting sounds like some incompatibility between your cluster and release, e.g. the external cluster only allows pods with gvisor and the application that you want to install with provider helm does not have some labels accordingly.
-As a troubleshooting step, you might try installing that helm chart with exactly same configuration to the external cluster via helm cli, using the same kubeconfig you built.
-Regarding the inconvenience of building the kubeconfig that you mentioned: provider-helm needs a way to access that external Kubernetes cluster, and a kubeconfig is the most common way to provide this. However, if you see another alternative that would make things easier for some common use cases, it could be implemented, and it would be great if you could create a feature request in the repo for this.
-Finally, I am wondering how you're creating those external clusters. If it makes sense to create them with Crossplane as well, e.g. a GKE cluster with provider-gcp, then you can compose a helm ProviderConfig together with a GKE Cluster resource, which would create the appropriate secret and ProviderConfig whenever you create a new cluster. You can check this as an example: https://github.com/crossplane-contrib/provider-helm/blob/master/examples/in-composition/composition.yaml#L147
-",Crossplane
-"I am working on claim that will be used by about 8 services org wide, how do i pass the array of env variables to the composition. There seems to be no way of doing this
-Here is an example of my claim
-apiVersion: app.org.io/v1
-kind: XClaim
-metadata:
-  name: test-app
-spec:
-  parameters:
-    name: test-app
-    envVariables:
-    - variables:
-        foo: bar
-        name: precious
-        age: 15
-
-Here is an example of my CRD
-apiVersion: apiextensions.crossplane.io/v1
-kind: CompositeResourceDefinition
-metadata:
-  name: applambdas.app.org.io
-  namespace: crossplane-system
-spec:
-    group: app.org.io
-    names:
-        kind: AppLambda
-        plural: applambdas
-    versions:
-        - name: v1
-          served: true
-          referenceable: true
-          schema:
-            openAPIV3Schema:
-              type: object
-              properties:
-                spec:
-                  type: object
-                  properties:
-                    parameters:
-                      type: object
-                      properties:
-                        env:
-                          type: string
-                        envVariables:
-                          type: array
-                        name:
-                          type: string
-
-    claimNames:
-      kind: XClaim
-      plural: xclaims
-
-Here is an example of my composition
-apiVersion: apiextensions.crossplane.io/v1
-kind: Composition
-metadata:
-  name: lambda
-spec:
-  compositeTypeRef:
-    apiVersion: app.org.io/v1
-    kind: AppLambda
-  resources:
-    - name: lambda-function
-      base:
-        apiVersion: lambda.aws.upbound.io/v1beta1
-        kind: Function
-        metadata:
-          annotations:
-            uptest.upbound.io/timeout: ""3600""
-          name: lambda
-        spec:
-          providerConfigRef:
-            name: aws-config
-          forProvider:
-            handler: index.lambda_handler
-            packageType: Zip
-            region: eu-west-1
-            role: arn:aws:iam::xxxxxx:role/crossplane-lambda-test-role
-            runtime: python3.9
-            s3Bucket: testappbucket-upbound-provider-test-data
-            s3Key: function.zip
-            timeout: 60
-            environment: []
-      patches:
-        - fromFieldPath: spec.parameters.envVariables[variables]
-          toFieldPath: spec.forProvider.environment[variables]
-
-The spec.forProvider.environment doesn't seem to get patched. I have been on this all week; please, I need help.
-","1. In this case, the environment variables are not actually an array. You can see from the crd that variables should be the key to an object, stored underneath a single value environment array.
-spec:
-  forProvider:
-    environment:
-    - variables:
-        key: value
-
-So with some small tweaks to your definition and composition, this should be possible:
-apiVersion: apiextensions.crossplane.io/v1
-kind: CompositeResourceDefinition
-
-...
-    envVariables:
-      type: object
-      additionalProperties:
-        type: string
-...
-
-apiVersion: apiextensions.crossplane.io/v1
-kind: Composition
-...
-    patches:
-      - fromFieldPath: spec.parameters.envVariables
-        toFieldPath: spec.forProvider.environment[0].variables
-...
-
-This will let you create a claim like this:
-apiVersion: app.org.io/v1
-kind: XClaim
-metadata:
-  name: test-app
-spec:
-  parameters:
-    name: test-app
-    envVariables:
-      foo: bar
-      name: precious
-      age: ""15""
-
-Resulting in a function with the appropriate environment variables set.
-AWS Console Showing Environment Variables
-Note: Environment Variable values must be strings, which is the reason for the validation in the schema and the quotes in the claim.
-",Crossplane
-"I followed the Crossplane docs about creating a Configuration Package and created the following crossplane.yaml:
-apiVersion: meta.pkg.crossplane.io/v1alpha1
-kind: Configuration
-metadata:
-  name: crossplane-eks-cluster
-spec:
-  dependsOn:
-    - provider: xpkg.upbound.io/upbound/provider-aws-ec2
-      version: "">=v1.1.1""
-    - provider: xpkg.upbound.io/upbound/provider-aws-iam
-      version: "">=v1.1.1""
-    - provider: xpkg.upbound.io/upbound/provider-aws-eks
-      version: "">=v1.1.1""
-  crossplane:
-    version: "">=v1.15.1-0""
-
-I have a Composition and an XRD in the apis directory, but when I run crossplane xpkg build --package-root=apis/ I get the following error:
-$ crossplane xpkg build --package-root=apis/
-crossplane: error: failed to build package: not exactly one package meta type
-
-The docs don't say anything about what I can do here, and Google didn't help either.
-","1. Luckily I tested the other option I found to create a Configuration: There are templates one could use to create the crossplane.yaml using the crossplane CLI's new beta xpkg init command. I ran the following:
-crossplane beta xpkg init crossplane-eks-cluster configuration-template
-
-The resulting crossplane.yaml had multiple metadata.annotations like this:
-apiVersion: meta.pkg.crossplane.io/v1
-kind: Configuration
-metadata:
-  name: your-configuration
-  annotations:
-    # Set the annotations defining the maintainer, source, license, and description of your Configuration
-    meta.crossplane.io/maintainer: You <myself@me.io>
-    meta.crossplane.io/source: github.com/your-organization/your-repo
-    # Set the license of your Configuration
-    meta.crossplane.io/license: Apache-2.0
-    meta.crossplane.io/description: |
-      This is where you can describe your configuration.
-    meta.crossplane.io/readme: |
-      This is where you can add a readme for your configuration.
-spec:
-  # (Optional) Set the minimum version of Crossplane that this Configuration is compatible with
-  crossplane:
-    version: "">=v1.14.1-0""
-  # Add your dependencies here
-  dependsOn:
-    - provider: xpkg.upbound.io/crossplane-contrib/provider-kubernetes
-      version: ""v0.12.1""
-    - function: xpkg.upbound.io/crossplane-contrib/function-patch-and-transform
-      version: ""v0.3.0""
-
-So simply add the annotations and the error is gone.
-This makes a lot of sense, since a Configuration's user should be able to know who's responsible for the CRDs. But the error message could be improved.
-",Crossplane
-"We have a Crossplane setup to create a ResourceGroup and StorageAccount in Azure (see fullblown example project on GitHub).
-We use the official Azure Provider (meaning: the new Upbound split up provider families) provider-azure-storage and create the following crossplane manifests:
-The Provider defintion:
-apiVersion: pkg.crossplane.io/v1
-kind: Provider
-metadata:
-  name: provider-azure-storage
-spec:
-  package: xpkg.upbound.io/upbound/provider-azure-storage:v0.39.0
-  packagePullPolicy: Always
-  revisionActivationPolicy: Automatic
-  revisionHistoryLimit: 1
-
-The ProviderConfig:
-apiVersion: azure.upbound.io/v1beta1
-kind: ProviderConfig
-metadata:
-  name: default
-spec:
-  credentials:
-    source: Secret
-    secretRef:
-      namespace: crossplane-system
-      name: azure-account-creds
-      key: creds
-
-The azure-account-creds are generated as described in the getting started guide.
-Our CompositeResourceDefinition:
-apiVersion: apiextensions.crossplane.io/v1
-kind: CompositeResourceDefinition
-metadata:
-  name: xstoragesazure.crossplane.jonashackt.io
-spec:
-  group: crossplane.jonashackt.io
-  names:
-    kind: XStorageAzure
-    plural: xstoragesazure
-  claimNames:
-    kind: StorageAzure
-    plural: storagesazure
-  
-  defaultCompositionRef:
-    name: storageazure-composition
-
-  versions:
-  - name: v1alpha1
-    served: true
-    referenceable: true
-    schema:
-      openAPIV3Schema:
-        type: object
-        properties:
-          spec:
-            type: object
-            properties:
-              parameters:
-                type: object
-                properties:
-                  location:
-                    type: string
-                  resourceGroupName:
-                    type: string
-                  storageAccountName:
-                    type: string
-                required:
-                  - location
-                  - resourceGroupName
-                  - storageAccountName
-
-Our Composition:
-apiVersion: apiextensions.crossplane.io/v1
-kind: Composition
-metadata:
-  name: storageazure-composition
-  labels:
-    crossplane.io/xrd: xstoragesazure.crossplane.jonashackt.io
-    provider: azure
-spec:
-  compositeTypeRef:
-    apiVersion: crossplane.jonashackt.io/v1alpha1
-    kind: XStorageAzure
-  
-  writeConnectionSecretsToNamespace: crossplane-system
-  
-  resources:
-    - name: storageaccount
-      base:
-        apiVersion: storage.azure.upbound.io/v1beta1
-        kind: Account
-        metadata: {}
-        spec:
-          forProvider:
-            accountKind: StorageV2
-            accountTier: Standard
-            accountReplicationType: LRS
-      patches:
-        - fromFieldPath: spec.parameters.storageAccountName
-          toFieldPath: metadata.name
-        - fromFieldPath: spec.parameters.resourceGroupName
-          toFieldPath: spec.forProvider.resourceGroupName
-        - fromFieldPath: spec.parameters.location
-          toFieldPath: spec.forProvider.location
-          
-    - name: resourcegroup
-      base:
-        apiVersion: azure.upbound.io/v1beta1
-        kind: ResourceGroup
-        metadata: {}
-      patches:
-        - fromFieldPath: spec.parameters.resourceGroupName
-          toFieldPath: metadata.name
-        - fromFieldPath: spec.parameters.location
-          toFieldPath: spec.forProvider.location
-
-And finally our Claim:
-apiVersion: crossplane.jonashackt.io/v1alpha1
-kind: StorageAzure
-metadata:
-  namespace: default
-  name: managed-storage-account
-spec:
-  compositionRef:
-    name: storageazure-composition
-  parameters:
-    location: West Europe
-    resourceGroupName: rg-crossplane
-    storageAccountName: account4c8672f
-
-Everything is applied via kubectl apply and doesn't throw any errors.
-However, the StorageAccount shows READY: False in the kubectl get crossplane output.
-How can we inspect the Crossplane resources so that we're able to trace down the error? The StorageAccount doesn't appear to be created in the Azure Portal.
-","1. There's a great documentation about how to troubleshoot your crossplane resources in the docs, which mainly focusses on running kubectl describe and kubectl get event on your crossplane resources.
-But from crossplane 1.14 on there are new features included into the crossplane CLI. One of these features is the crossplane beta trace command, which allows to inspect any crossplane resources & see their respective status and aims to streamline the troubleshooting process:
-
-trace to examine live resources and find the root cause of issues
-quickly
-
-Now using the command in our example requires installation of the crossplane CLI first. According to the docs this could be achieved like this:
-curl -sL ""https://raw.githubusercontent.com/crossplane/crossplane/master/install.sh"" | sh
-sudo mv crossplane /usr/local/bin
-
-Now having crossplane CLI in place we could use it to inspect our Claim's status via the following command:
-crossplane beta trace StorageAzure managed-storage-account -o wide
-
-After the trace you need to provide the name of your Claim (which is StorageAzure in our case) followed by the metadata.name (managed-storage-account here). The appended -o wide helps to see the full error messages if any.
-Now this should provide you with valuable insights. We had a case where our Azure credentials were broken and finally got a hint:
-$ crossplane beta trace StorageAzure managed-storage-account -o wide
-NAME                                             SYNCED   READY   STATUS                                                                                
-StorageAzure/managed-storage-account (default)   True     False   Waiting: ...resource claim is waiting for composite resource to become Ready          
-└─ XStorageAzure/managed-storage-account-6g2xn   True     False   Creating: Unready resources: resourcegroup, storageaccount                            
-   ├─ Account/account4c8672d                     False    -       ReconcileError: ...r_uri"":""https://login.microsoftonline.com/error?code=7000215""}:    
-   └─ ResourceGroup/rg-crossplane                False    -       ReconcileError: ...r_uri"":""https://login.microsoftonline.com/error?code=7000215""}: 
-
-",Crossplane
-"There are numerous questions about creating an eraser tool in CoreGraphics. I cannot find one that matches ""pixelated"".
-Here's the situation. I'm playing with a simple drawing project. The pen tools work fine. The eraser tool is horribly pixelated. Here's a screen shot of what I mean:
-
-Here's the drawing code I'm using:
- //  DrawingView
-//  
-//
-//  Created by David DelMonte on 12/9/16.
-//  Copyright © 2016 David DelMonte. All rights reserved.
-//
-
-
-import UIKit
-
-
-public protocol DrawingViewDelegate {
-    func didBeginDrawing(view: DrawingView)
-    func isDrawing(view: DrawingView)
-    func didFinishDrawing(view: DrawingView)
-    func didCancelDrawing(view: DrawingView)
-}
-
-
-
-open class DrawingView: UIView {
-    
-    //initial settings
-    public var lineColor: UIColor = UIColor.black
-    public var lineWidth: CGFloat = 10.0
-    public var lineOpacity: CGFloat = 1.0
-    //public var lineBlendMode: CGBlendMode = .normal
-    
-    //used for zoom actions
-    public var drawingEnabled: Bool = true
-    
-    public var delegate: DrawingViewDelegate?
-    
-    private var currentPoint: CGPoint = CGPoint()
-    private var previousPoint: CGPoint = CGPoint()
-    private var previousPreviousPoint: CGPoint = CGPoint()
-    
-    private var pathArray: [Line] = []
-    private var redoArray: [Line] = []
-    
-    var toolType: Int = 0
-    
-    let π = CGFloat(M_PI)
-    private let forceSensitivity: CGFloat = 4.0
-    
-    
-    private struct Line {
-        var path: CGMutablePath
-        var color: UIColor
-        var width: CGFloat
-        var opacity: CGFloat
-        //var blendMode: CGBlendMode
-        
-        init(path : CGMutablePath, color: UIColor, width: CGFloat, opacity: CGFloat) {
-            self.path = path
-            self.color = color
-            self.width = width
-            self.opacity = opacity
-            //self.blendMode = blendMode
-        }
-    }
-    
-    override public init(frame: CGRect) {
-        super.init(frame: frame)
-        self.backgroundColor = UIColor.clear
-    }
-    
-    required public init?(coder aDecoder: NSCoder) {
-        super.init(coder: aDecoder)
-        self.backgroundColor = UIColor.clear
-    }
-    
-    override open func draw(_ rect: CGRect) {
-        let context : CGContext = UIGraphicsGetCurrentContext()!
-        
-        for line in pathArray {
-            context.setLineWidth(line.width)
-            context.setAlpha(line.opacity)
-            context.setLineCap(.round)
-            
-            switch toolType {
-            case 0: //pen
-                
-                context.setStrokeColor(line.color.cgColor)
-                context.addPath(line.path)
-                context.setBlendMode(.normal)
-                
-                break
-                
-            case 1: //eraser
-                
-                context.setStrokeColor(UIColor.clear.cgColor)
-                context.addPath(line.path)
-                context.setBlendMode(.clear)
-                
-                break
-                
-            case 3: //multiply
-                
-                context.setStrokeColor(line.color.cgColor)
-                context.addPath(line.path)
-                context.setBlendMode(.multiply)
-                
-                break
-                
-            default:
-                break
-            }
-            
-            context.beginTransparencyLayer(auxiliaryInfo: nil)
-            context.strokePath()
-            context.endTransparencyLayer()
-        }
-    }
-    
-    
-    
-    
-    override open func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
-        guard drawingEnabled == true else {
-            return
-        }
-        
-        self.delegate?.didBeginDrawing(view: self)
-        if let touch = touches.first as UITouch! {
-            //setTouchPoints(touch, view: self)
-            previousPoint = touch.previousLocation(in: self)
-            previousPreviousPoint = touch.previousLocation(in: self)
-            currentPoint = touch.location(in: self)
-            
-            let newLine = Line(path: CGMutablePath(), color: self.lineColor, width: self.lineWidth, opacity: self.lineOpacity)
-            newLine.path.addPath(createNewPath())
-            pathArray.append(newLine)
-        }
-    }
-    
-    override open func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
-        guard drawingEnabled == true else {
-            return
-        }
-        
-        self.delegate?.isDrawing(view: self)
-        if let touch = touches.first as UITouch! {
-            //updateTouchPoints(touch, view: self)
-            previousPreviousPoint = previousPoint
-            previousPoint = touch.previousLocation(in: self)
-            currentPoint = touch.location(in: self)
-            
-            let newLine = createNewPath()
-            if let currentPath = pathArray.last {
-                currentPath.path.addPath(newLine)
-            }
-        }
-    }
-    
-    override open func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
-        guard drawingEnabled == true else {
-            return
-        }
-        
-        
-    }
-    
-    override open func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
-        guard drawingEnabled == true else {
-            return
-        }
-        
-        
-    }
-    
-    
-    
-    public func canUndo() -> Bool {
-        if pathArray.count > 0 {return true}
-        return false
-    }
-    
-    public func canRedo() -> Bool {
-        return redoArray.count > 0
-    }
-
-    
-    public func undo() {
-        if pathArray.count > 0 {
-            
-            redoArray.append(pathArray.last!)
-            pathArray.removeLast()
-        }
-        
-        setNeedsDisplay()
-    }
-    
-    public func redo() {
-        if redoArray.count > 0 {
-            pathArray.append(redoArray.last!)
-            redoArray.removeLast()
-        }
-        setNeedsDisplay()
-    }
-    
-    public func clearCanvas() {
-        pathArray = []
-        setNeedsDisplay()
-    }
-
-
-
-    private func createNewPath() -> CGMutablePath {
-        //print(#function)
-        let midPoints = getMidPoints()
-        let subPath = createSubPath(midPoints.0, mid2: midPoints.1)
-        let newPath = addSubPathToPath(subPath)
-        return newPath
-    }
-    
-    private func calculateMidPoint(_ p1 : CGPoint, p2 : CGPoint) -> CGPoint {
-        //print(#function)
-        return CGPoint(x: (p1.x + p2.x) * 0.5, y: (p1.y + p2.y) * 0.5);
-    }
-    
-    private func getMidPoints() -> (CGPoint,  CGPoint) {
-        //print(#function)
-        let mid1 : CGPoint = calculateMidPoint(previousPoint, p2: previousPreviousPoint)
-        let mid2 : CGPoint = calculateMidPoint(currentPoint, p2: previousPoint)
-        return (mid1, mid2)
-    }
-    
-    private func createSubPath(_ mid1: CGPoint, mid2: CGPoint) -> CGMutablePath {
-        //print(#function)
-        let subpath : CGMutablePath = CGMutablePath()
-        subpath.move(to: CGPoint(x: mid1.x, y: mid1.y))
-        subpath.addQuadCurve(to: CGPoint(x: mid2.x, y: mid2.y), control: CGPoint(x: previousPoint.x, y: previousPoint.y))
-        return subpath
-    }
-    
-    private func addSubPathToPath(_ subpath: CGMutablePath) -> CGMutablePath {
-        //print(#function)
-        let bounds : CGRect = subpath.boundingBox
-        
-        let drawBox : CGRect = bounds.insetBy(dx: -0.54 * lineWidth, dy: -0.54 * lineWidth)
-        self.setNeedsDisplay(drawBox)
-        return subpath
-    }
-}
-
-UPDATE:
-I notice that each eraser touch is square. Please see the second image for more detail:
-
-I then rewrote some code as suggested by Pranal Jaiswal:
-override open func draw(_ rect: CGRect) {
-        print(#function)
-        let context : CGContext = UIGraphicsGetCurrentContext()!
-        
-        if isEraserSelected {
-            for line in undoArray {
-                //context.beginTransparencyLayer(auxiliaryInfo: nil)
-                context.setLineWidth(line.width)
-                context.addPath(line.path)
-                context.setStrokeColor(UIColor.clear.cgColor)
-                context.setBlendMode(.clear)
-                context.setAlpha(line.opacity)
-                context.setLineCap(.round)
-                context.strokePath()
-
-            }
-        } else {
-            for line in undoArray {
-                context.setLineWidth(line.width)
-                context.setLineCap(.round)
-                context.addPath(line.path)
-                context.setStrokeColor(line.color.cgColor)
-                context.setBlendMode(.normal)
-                context.setAlpha(line.opacity)
-                context.strokePath()
-            }
-            
-        }
-    }
-
-I'm still getting the same result. What can I try next?
-","1. I couldn't exactly look at your code. But I had done something similar in Swift 2.3 a while ago (I do understand you are looking at Swift 3 but right now this is version that I have).
-Here is how the drawing class works looks like.
-import Foundation
-import UIKit
-import QuartzCore
-
-class PRSignatureView: UIView
-
-{
-
-var drawingColor:CGColorRef = UIColor.blackColor().CGColor //Col
-var drawingThickness:CGFloat = 0.5
-var drawingAlpha:CGFloat = 1.0
-
-var isEraserSelected: Bool
-
-private var currentPoint:CGPoint?
-private var previousPoint1:CGPoint?
-private var previousPoint2:CGPoint?
-
-private var path:CGMutablePathRef = CGPathCreateMutable()
-
-var image:UIImage?
-
-required init?(coder aDecoder: NSCoder) {
-    //self.backgroundColor = UIColor.clearColor()
-    self.isEraserSelected = false
-    super.init(coder: aDecoder)
-    self.backgroundColor = UIColor.clearColor()
-}
-
-override func drawRect(rect: CGRect)
-{
-    self.isEraserSelected ? self.eraseMode() : self.drawingMode()
-}
-
-private func drawingMode()
-{
-    if (self.image != nil)
-    {
-        self.image!.drawInRect(self.bounds)
-    }
-    let context:CGContextRef = UIGraphicsGetCurrentContext()!
-    CGContextAddPath(context, path)
-    CGContextSetLineCap(context, CGLineCap.Round)
-    CGContextSetLineWidth(context, self.drawingThickness)
-    CGContextSetStrokeColorWithColor(context, drawingColor)
-    CGContextSetBlendMode(context, CGBlendMode.Normal)
-    CGContextSetAlpha(context, self.drawingAlpha)
-    CGContextStrokePath(context);
-}
-
-private func eraseMode()
-{
-    if (self.image != nil)
-    {
-        self.image!.drawInRect(self.bounds)
-    }
-    let context:CGContextRef = UIGraphicsGetCurrentContext()!
-    CGContextSaveGState(context)
-    CGContextAddPath(context, path);
-    CGContextSetLineCap(context, CGLineCap.Round)
-    CGContextSetLineWidth(context, self.drawingThickness)
-    CGContextSetBlendMode(context, CGBlendMode.Clear)
-    CGContextStrokePath(context)
-    CGContextRestoreGState(context)
-}
-
-
-
-
-private func midPoint (p1:CGPoint, p2:CGPoint)->CGPoint
-{
-    return CGPointMake((p1.x + p2.x) * 0.5, (p1.y + p2.y) * 0.5)
-}
-
-private func finishDrawing()
-{
-    UIGraphicsBeginImageContextWithOptions(self.bounds.size, false, 0.0);
-    drawViewHierarchyInRect(self.bounds, afterScreenUpdates: true)
-    self.image = UIGraphicsGetImageFromCurrentImageContext()
-    UIGraphicsEndImageContext()
-}
-
-func clearSignature()
-{
-    path = CGPathCreateMutable()
-    self.image = nil;
-    self.setNeedsDisplay();
-}
-
-// MARK: - Touch Delegates
-override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent?) {
-    path = CGPathCreateMutable()
-    let touch = touches.first!
-    previousPoint1 = touch.previousLocationInView(self)
-    currentPoint = touch.locationInView(self)
-}
-override func touchesMoved(touches: Set<UITouch>, withEvent event: UIEvent?) {
-    let touch = touches.first!
-    previousPoint2 = previousPoint1
-    previousPoint1 = touch.previousLocationInView(self)
-    currentPoint = touch.locationInView(self)
-    
-    let mid1 = midPoint(previousPoint2!, p2: previousPoint1!)
-    let mid2 = midPoint(currentPoint!, p2: previousPoint1!)
-    
-    let subpath:CGMutablePathRef = CGPathCreateMutable()
-    CGPathMoveToPoint(subpath, nil, mid1.x, mid1.y)
-    CGPathAddQuadCurveToPoint(subpath, nil, previousPoint1!.x, previousPoint1!.y, mid2.x, mid2.y)
-    CGPathAddPath(path, nil, subpath);
-    self.setNeedsDisplay()
-}
-override func touchesEnded(touches: Set<UITouch>, withEvent event: UIEvent?) {
-    self.touchesMoved(touches, withEvent: event)
-    self.finishDrawing()
-}
-override func touchesCancelled(touches: Set<UITouch>?, withEvent event: UIEvent?) {
-    self.touchesMoved(touches!, withEvent: event)
-    self.finishDrawing()
-}
-
-}
-
-Source code for the test app I created using the above code.
-Edit: Converting a few lines of code to Swift 3 as requested:
-subpath.move(to: CGPoint(x: mid1.x, y: mid1.y))
-subpath.addQuadCurve(to:CGPoint(x: mid2.x, y: mid2.y) , control: CGPoint(x: previousPoint1!.x, y: previousPoint1!.y))
-path.addPath(subpath)
-
-Edit: In response to the updated Question
-Here is the updated Drawing Class that must solve the issue for sure. https://drive.google.com/file/d/0B5nqEBSJjCriTU5oRXd5c2hRV28/view?usp=sharing&resourcekey=0-8ZE92CSD3j7xxB5jGvgj2w
-Issues addressed:
-
-The Line struct did not hold the associated tool type. Whenever setNeedsDisplay() is called, you redraw all the objects in pathArray, and all objects were getting redrawn with the currently selected tool. I have added a new variable associatedTool to address the issue.
-Calling beginTransparencyLayer sets the blend mode to kCGBlendModeNormal. As this was common to all tool-type cases, it was forcing the mode back to normal. I have removed these two lines:
-
-
-//context.beginTransparencyLayer(auxiliaryInfo: nil)
-//context.endTransparencyLayer()
-
-
-2. Try this: it has no errors while erasing, and it can be used for drawing, erasing and clearing your screen. You can even increase or decrease the size of the pencil and eraser, and you may change the color accordingly.
-Hope this is helpful for you.
-import UIKit
-class DrawingView: UIView {
-
-var lineColor:CGColor = UIColor.black.cgColor 
-var lineWidth:CGFloat = 5
-var drawingAlpha:CGFloat = 1.0
-
-var isEraserSelected: Bool
-
-private var currentPoint:CGPoint?
-private var previousPoint1:CGPoint?
-private var previousPoint2:CGPoint?
-
-private var path:CGMutablePath = CGMutablePath()
-
-var image:UIImage?
-
-required init?(coder aDecoder: NSCoder) {
-    //self.backgroundColor = UIColor.clearColor()
-    self.isEraserSelected = false
-    super.init(coder: aDecoder)
-    self.backgroundColor = UIColor.clear
-}
-
-override func draw(_ rect: CGRect)
-{
-    self.isEraserSelected ? self.eraseMode() : self.drawingMode()
-}
-
-private func drawingMode()
-{
-    if (self.image != nil)
-    {
-        self.image!.draw(in: self.bounds)
-    }
-    let context:CGContext = UIGraphicsGetCurrentContext()!
-    context.addPath(path)
-    context.setLineCap(CGLineCap.round)
-    context.setLineWidth(self.lineWidth)
-    context.setStrokeColor(lineColor)
-    context.setBlendMode(CGBlendMode.normal)
-    context.setAlpha(self.drawingAlpha)
-    context.strokePath();
-}
-
-private func eraseMode()
-{
-    if (self.image != nil)
-    {
-        self.image!.draw(in: self.bounds)
-    }
-    let context:CGContext = UIGraphicsGetCurrentContext()!
-    context.saveGState()
-    context.addPath(path);
-    context.setLineCap(CGLineCap.round)
-    context.setLineWidth(self.lineWidth)
-    context.setBlendMode(CGBlendMode.clear)
-    context.strokePath()
-    context.restoreGState()
-}
-
-private func midPoint (p1:CGPoint, p2:CGPoint)->CGPoint
-{
-    return CGPoint(x: (p1.x + p2.x) * 0.5, y: (p1.y + p2.y) * 0.5);
-}
-
-private func finishDrawing()
-{
-    UIGraphicsBeginImageContextWithOptions(self.bounds.size, false, 0.0);
-    drawHierarchy(in: self.bounds, afterScreenUpdates: true)
-    self.image = UIGraphicsGetImageFromCurrentImageContext()
-    UIGraphicsEndImageContext()
-}
-
-func clearSignature()
-{
-    path = CGMutablePath()
-    self.image = nil;
-    self.setNeedsDisplay();
-}
-
-// MARK: - Touch Delegates
-override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
-    path = CGMutablePath()
-    let touch = touches.first!
-    previousPoint1 = touch.previousLocation(in: self)
-    currentPoint = touch.location(in: self)
-}
-override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
-    let touch = touches.first!
-    previousPoint2 = previousPoint1
-    previousPoint1 = touch.previousLocation(in: self)
-    currentPoint = touch.location(in: self)
-
-    let mid1 = midPoint(p1: previousPoint2!, p2: previousPoint1!)
-    let mid2 = midPoint(p1: currentPoint!, p2: previousPoint1!)
-
-    let subpath:CGMutablePath = CGMutablePath()
-    subpath.move(to: CGPoint(x: mid1.x, y: mid1.y), transform: .identity)
-    subpath.addQuadCurve(to: CGPoint(x: mid2.x, y: mid2.y), control: CGPoint(x: (previousPoint1?.x)!, y: (previousPoint1?.y)!))
-    path.addPath(subpath, transform: .identity)
-
-    self.setNeedsDisplay()
-}
-override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
-    self.touchesMoved(touches, with: event)
-    self.finishDrawing()
-}
-override func touchesCancelled(_ touches: Set<UITouch>?, with event: UIEvent?) {
-    self.touchesMoved(touches!, with: event)
-    self.finishDrawing()
-}
-}
-
-",Eraser
-"I have a sample code in c#
-using System;
-using System.Collections.Generic;
-using Fluid;
-
-public class StrObj
-{
-    public string Str {get;set;}
-}
-
-public class TestObj
-{
-    public List<StrObj> StrObjects {get;set;}
-}
-
-public static class Program
-{   
-    public static void Main()
-    {
-        var templateText = ""{% for item in StrObjects %} String: {{ item.Str }} {% endfor %}"";
-        
-        var testObj = new TestObj();
-        testObj.StrObjects = new List<StrObj>();
-        testObj.StrObjects.Add(new StrObj { Str = ""test1"" });
-        testObj.StrObjects.Add(new StrObj { Str = ""test2"" });
-        testObj.StrObjects.Add(new StrObj { Str = ""test3"" });
-        testObj.StrObjects.Add(new StrObj { Str = ""test4"" });
-        
-        var parser = new FluidParser();
-        if (parser.TryParse(templateText, out IFluidTemplate template, out string error))
-        {
-            var ctx = new Fluid.TemplateContext(testObj);
-            var html = template.Render(ctx);
-
-            Console.WriteLine(html);
-        }
-        else
-        {
-            Console.WriteLine($""Error in html template parser! {error}"");
-        }
-    }
-}
-
-This code should return something like this
-String: test1 String: test2 String: test3 String: test4
-however, it returns
-String: String: String: String:
-it writes 4 times ""String:"" that means for loop in the template works, but why I can't see the values?
-I add this sample to dotnetfiddle, too.
-https://dotnetfiddle.net/wIq9mS
-Thanks in advance!
-","1. I found the problem, somehow Fluid doesn't see the inner objects. So we need to register them before calling the parser.
-Here is the solution:
-using System;
-using System.Collections.Generic;
-using Fluid;
-
-public class StrObj
-{
-    public string Str {get;set;}
-}
-
-public class TestObj
-{
-    public List<StrObj> StrObjects {get;set;}
-}
-
-public static class Program
-{   
-    public static void Main()
-    {
-    var templateText = ""{% for item in StrObjects %} String: {{ item.Str }} {% endfor %}"";
-    
-    var testObj = new TestObj();
-    testObj.StrObjects = new List<StrObj>();
-    testObj.StrObjects.Add(new StrObj { Str = ""test1"" });
-    testObj.StrObjects.Add(new StrObj { Str = ""test2"" });
-    testObj.StrObjects.Add(new StrObj { Str = ""test3"" });
-    testObj.StrObjects.Add(new StrObj { Str = ""test4"" });
-    
-    var parser = new FluidParser();
-    if (parser.TryParse(templateText, out IFluidTemplate template, out string error))
-    {
-        /* Following lines necessary if you have a nested object */
-        var options = new TemplateOptions();
-        options.MemberAccessStrategy.Register<StrObj>();
-        
-        var ctx = new Fluid.TemplateContext(testObj, options);
-        var html = template.Render(ctx);
-
-        Console.WriteLine(html);
-    }
-    else
-    {
-        Console.WriteLine($""Error in html template parser! {error}"");
-    }
-    }
-}
-
-",Fluid
-"I created a custom content element with a ""media"" field.
-Here is my Data Processor Class:
-class CustomCeProcessor implements DataProcessorInterface
-{
-
-    /**
-     * Process data for the content element ""My new content element""
-     *
-     * @param ContentObjectRenderer $cObj The data of the content element or page
-     * @param array $contentObjectConfiguration The configuration of Content Object
-     * @param array $processorConfiguration The configuration of this processor
-     * @param array $processedData Key/value store of processed data (e.g. to be passed to a Fluid View)
-     * @return array the processed data as key/value store
-     */
-    public function process(
-        ContentObjectRenderer $cObj,
-        array $contentObjectConfiguration,
-        array $processorConfiguration,
-        array $processedData
-    )
-    {
-        $processedData['foo'] = 'This variable will be passed to Fluid';
-        return $processedData;
-    }
-}
-
-$processedData contains the value for every field except the ""media"" field, which is an empty array.
-Here is what my TCA looks like:
-$GLOBALS['TCA']['tt_content']['types']['custom_ce'] = [
-    'showitem'         => '
-            --palette--;' . $frontendLanguageFilePrefix . 'palette.general;general,
-            --linebreak--, header;LLL:EXT:frontend/Resources/Private/Language/locallang_ttc.xlf:header_formlabel,
-            --linebreak--, date;Datum,
-            --linebreak--, media;Media,
-            --linebreak--, bodytext;txt,
-    '
-];
-
-How can I access the media file in the DataProcessor in order to pass it to Fluid?
-","1. The TYPO3\CMS\Frontend\DataProcessing\FilesProcessor can do that. It is not neccessary to write an own DataProcessor.
-Your file should show up as my_pdf when you activate the debug viewhelper.
-Please verify that your file is visible with the Fluid debug viewhelper.
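-A minimal TypoScript sketch of wiring it up, assuming your element is rendered via fluid_styled_content and registered under tt_content.custom_ce (the variable name myMedia is arbitrary):
-tt_content.custom_ce {
-    dataProcessing {
-        # resolve the FAL references of the media field and expose them to Fluid
-        10 = TYPO3\CMS\Frontend\DataProcessing\FilesProcessor
-        10 {
-            references.fieldName = media
-            as = myMedia
-        }
-    }
-}
-In the Fluid template, {myMedia} then holds the resolved file objects, e.g. for an f:image loop.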
-",Fluid
-"I'm trying to configure a Keda Scaler with gcp-storage as trigger, using workload identity as authentication. I have verified my service account has both Storage Admin & Storage Object Admin Roles in my IAM roles. Here is the YAML File with the TriggerAuthentication and ScaledJob
-I'm maintaining the values file to fetch the service account details and other key values.
-TriggerAuthentication
-apiVersion: keda.sh/v1alpha1
-kind: TriggerAuthentication
-metadata:
-  name: keda-trigger-auth-gcp-credentials
-spec:
-  podIdentity:
-    provider: gcp
-
-ScaledJob
-apiVersion: keda.sh/v1alpha1
-kind: ScaledJob
-metadata:
-  name: sample-scaled-job
-  namespace: default
-  labels:
-      {{- include ""app.labels"" . | nindent 4 }}
-spec:
-  jobTargetRef:
-    template:
-      metadata:
-        labels:
-          app.kubernetes.io/name: sample-scaled-job
-          app.kubernetes.io/instance: sample-scaled-job
-      spec:
-        imagePullSecrets: {{ .Values.deployment.imagePullSecrets | toYaml | nindent 8 }}
-        serviceAccountName: {{ .Values.serviceaccount.name }}
-        containers:
-          - name: sample-job-container
-            image: nginx
-            imagePullPolicy: Always
-            command: [""echo"",""Mukesh""]
-  pollingInterval:  5                    # Optional. Default: 5 seconds
-  minReplicaCount:  0                   # Optional. Default: 0
-  maxReplicaCount:  2                    # Optional. Default: 100
-  successfulJobsHistoryLimit: 2
-  failedJobsHistoryLimit: 2
-  rollout:
-    strategy: gradual
-    propagationPolicy: foreground
-  triggers:
-  - type: gcp-storage
-    authenticationRef:
-      name: keda-trigger-auth-gcp-credentials
-    metadata:
-      bucketName: ""ccon-ap-core-pilot-us-east4-gcs""
-      targetObjectCount: ""5""
-      blobPrefix: ""inputs/""
-
-I'm getting the following error:
- Type     Reason              Age                    From           Message                                                                                          │
-│   ----     ------              ----                   ----           -------                                                                                          │
-│   Normal   KEDAScalersStarted  38m                    scale-handler  Started scalers watch                                                                            │
-│   Warning  KEDAScalerFailed    38m                    scale-handler  context canceled                                                                                 │
-│   Warning  KEDAScalerFailed    38m                    scale-handler  scaler with id 0 not found, len = 0, cache has been probably already invalidated                 │
-│   Normal   ScaledJobReady      36m (x3 over 38m)      keda-operator  ScaledJob is ready for scaling                                                                   │
-│   Warning  KEDAScalerFailed    3m44s (x420 over 38m)  scale-handler  googleapi: Error 403: Caller does not have storage.objects.list access to the Google Cloud Stora │
-│ ge bucket. Permission 'storage.objects.list' denied on resource (or it may not exist)., forbidden          
-
-","1. Maybe the service account linked to keda doesn't have permission for storage, or the storage API is off.
-",KEDA
-"I have applied keda scaledobject for my deployment, now i want to manage changes for git. So i tried to apply flux to this scaledobject but i am getting error like below
-**flux error for scaledobject : ScaledObject/jazz-keda dry-run failed (Forbidden): scaledobjects.keda.sh ""jazz-keda"" is forbidden: User ""system:serviceaccount:crystal-test:default-login-serviceaccount"" cannot patch resource ""scaledobjects"" in API group ""keda.sh"" at the cluster scope**
-
-Is it not possible to apply the Flux approach to a KEDA object? I don't have admin permission to change anything in the cluster. Can someone help me figure this out?
-","1. As per the error, it seems Service Account which is associated with flux does not have sufficient permissions to modify KEDA ScaledObjects in your Kubernetes cluster, that's why you're facing this error.
-This error can be resolved by adding ClusterRole with required permissions to the service account which is associated with the Flux. As you do not have Admin permissions, you will have to request these below steps to your cluster administrator:
-
-1. Create a ClusterRole with the appropriate permissions.
-
-2. Bind the above ClusterRole to the Flux service account by creating a ClusterRoleBinding (see the sketch below).
-
-
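-A minimal sketch of what those two objects could look like (the names are placeholders; the subject matches the service account from the error message):
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRole
-metadata:
-  name: keda-scaledobject-editor
-rules:
-  - apiGroups:
-      - keda.sh
-    resources:
-      - scaledobjects
-    verbs:
-      - get
-      - list
-      - watch
-      - create
-      - update
-      - patch
-      - delete
----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRoleBinding
-metadata:
-  name: keda-scaledobject-editor
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: keda-scaledobject-editor
-subjects:
-  - kind: ServiceAccount
-    name: default-login-serviceaccount
-    namespace: crystal-test
-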
-Refer to the official Kubernetes documentation on Using RBAC Authorization, which lets you dynamically configure policies through the Kubernetes API. The RBAC API declares ClusterRole and ClusterRoleBinding objects.
-After the cluster admin applies the above configuration with ""kubectl apply -f <file>.yaml"", the error should be resolved; since you do not have admin permissions, this has to be done by the cluster admin.
-Note: KEDA ScaledObjects require specific RBAC rules to allow service accounts to create, modify, and delete them. Use kubectl auth can-i to check the permissions of your service account; refer to the official Kubernetes documentation on this command for more information.
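-For example, a check for the service account from the error could look roughly like this (assuming you are allowed to impersonate it for the check):
-kubectl auth can-i patch scaledobjects.keda.sh --as=system:serviceaccount:crystal-test:default-login-serviceaccount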
-",KEDA
-"I am trying to create a KEDA scaled job based on RabbitMQ queue trigger but encountered an issue when pods are not scaling at all.
-I have created a following Scaled job and lined up messages in the queue but no pods are created. I see this message: Scaling is not performed because triggers are not active
-What could be reason that pods are not scaling at all? Thanks for help.
-And in Keda logs I see:
-2021-12-29T13:50:19.738Z    INFO    scalemetrics    Checking if ScaleJob Scalers are active {""ScaledJob"": ""celery-rabbitmq-scaledjob-2"", ""isActive"": false, ""maxValue"": 0, ""MultipleScalersCalculation"": """"}
-2021-12-29T13:50:19.738Z    INFO    scaleexecutor   Scaling Jobs    {""scaledJob.Name"": ""celery-rabbitmq-scaledjob-2"", ""scaledJob.Namespace"": ""sandbox-dev"", ""Number of running Jobs"": 0}
-2021-12-29T13:50:19.738Z    INFO    scaleexecutor   Scaling Jobs    {""scaledJob.Name"": ""celery-rabbitmq-scaledjob-2"", ""scaledJob.Namespace"": ""sandbox-dev"", ""Number of pending Jobs "": 0}
-
---
-apiVersion: keda.sh/v1alpha1
-kind: ScaledJob
-metadata:
-  annotations:
-    kubectl.kubernetes.io/last-applied-configuration: >
-      {""apiVersion"":""keda.sh/v1alpha1"",""kind"":""ScaledJob"",""metadata"":{""annotations"":{},""name"":""celery-rabbitmq-scaledjob-2"",""namespace"":""sandbox-dev""},""spec"":{""failedJobsHistoryLimit"":5,""jobTargetRef"":{""activeDeadlineSeconds"":3600,""backoffLimit"":6,""completions"":1,""parallelism"":5,""template"":{""spec"":{""containers"":[{""command"":[""/bin/bash"",""-c"",""CELERY_BROKER_URL=amqp://$RABBITMQ_USERNAME:$RABBITMQ_PASSWORD@rabbitmq.sandbox-dev.svc.cluster.local:5672
-      celery worker -A test_data.tasks.all --loglevel=info -c 1 -n
-      worker.all""],""env"":[{""name"":""APP_CFG"",""value"":""test_data.config.dev""},{""name"":""C_FORCE_ROOT"",""value"":""true""},{""name"":""RABBITMQ_USERNAME"",""valueFrom"":{""secretKeyRef"":{""key"":""rabbitmq-user"",""name"":""develop""}}},{""name"":""RABBITMQ_PASSWORD"",""valueFrom"":{""secretKeyRef"":{""key"":""rabbitmq-pass"",""name"":""develop""}}}],""image"":""111.dkr.ecr.us-east-1.amazonaws.com/test:DEV-TEST"",""imagePullPolicy"":""IfNotPresent"",""lifecycle"":{""postStart"":{""exec"":{""command"":[""/bin/sh"",""-c"",""echo
-      startup \u003e\u003e
-      /tmp/startup.log""]}},""preStop"":{""exec"":{""command"":[""/bin/sh"",""-c"",""echo
-      shutdown \u003e\u003e
-      /tmp/shutdown.log""]}}},""name"":""celery-backend"",""resources"":{""limits"":{""cpu"":""1700m"",""memory"":""3328599654400m""},""requests"":{""cpu"":""1600m"",""memory"":""3Gi""}},""securityContext"":{""allowPrivilegeEscalation"":false,""privileged"":false,""readOnlyRootFilesystem"":false},""terminationMessagePath"":""/tmp/termmsg.log"",""terminationMessagePolicy"":""File"",""volumeMounts"":[{""mountPath"":""/tmp"",""name"":""temp""},{""mountPath"":""/var/run/secrets/kubernetes.io/serviceaccount"",""name"":""default-token-s7vl6"",""readOnly"":true}]}]}}},""maxReplicaCount"":100,""pollingInterval"":5,""rolloutStrategy"":""gradual"",""successfulJobsHistoryLimit"":5,""triggers"":[{""metadata"":{""host"":""amqp://guest:guest@rabbitmq.sandbox-dev.svc.cluster.local:5672/vhost"",""mode"":""QueueLength"",""queueName"":""celery"",""value"":""1""},""type"":""rabbitmq""}]}}
-  creationTimestamp: '2021-12-29T13:11:15Z'
-  finalizers:
-    - finalizer.keda.sh
-  generation: 3
-  managedFields:
-    - apiVersion: keda.sh/v1alpha1
-      fieldsType: FieldsV1
-      fieldsV1:
-        f:metadata:
-          f:finalizers: {}
-        f:spec:
-          f:jobTargetRef:
-            f:template:
-              f:metadata:
-                .: {}
-                f:creationTimestamp: {}
-          f:scalingStrategy: {}
-        f:status:
-          .: {}
-          f:conditions: {}
-      manager: keda
-      operation: Update
-      time: '2021-12-29T13:11:15Z'
-    - apiVersion: keda.sh/v1alpha1
-      fieldsType: FieldsV1
-      fieldsV1:
-        f:metadata:
-          f:annotations:
-            .: {}
-            f:kubectl.kubernetes.io/last-applied-configuration: {}
-        f:spec:
-          .: {}
-          f:failedJobsHistoryLimit: {}
-          f:jobTargetRef:
-            .: {}
-            f:activeDeadlineSeconds: {}
-            f:backoffLimit: {}
-            f:completions: {}
-            f:template:
-              .: {}
-              f:spec:
-                .: {}
-                f:containers: {}
-          f:maxReplicaCount: {}
-          f:pollingInterval: {}
-          f:rolloutStrategy: {}
-          f:successfulJobsHistoryLimit: {}
-          f:triggers: {}
-      manager: kubectl-client-side-apply
-      operation: Update
-      time: '2021-12-29T13:11:15Z'
-    - apiVersion: keda.sh/v1alpha1
-      fieldsType: FieldsV1
-      fieldsV1:
-        f:spec:
-          f:jobTargetRef:
-            f:parallelism: {}
-      manager: node-fetch
-      operation: Update
-      time: '2021-12-29T13:37:11Z'
-  name: celery-rabbitmq-scaledjob-2
-  namespace: sandbox-dev
-  resourceVersion: '222981509'
-  selfLink: >-
-    /apis/keda.sh/v1alpha1/namespaces/sandbox-dev/scaledjobs/celery-rabbitmq-scaledjob-2
-  uid: 9013295a-6ace-48ba-96d3-8810efde1b35
-status:
-  conditions:
-    - message: ScaledJob is defined correctly and is ready to scaling
-      reason: ScaledJobReady
-      status: 'True'
-      type: Ready
-    - message: Scaling is not performed because triggers are not active
-      reason: ScalerNotActive
-      status: 'False'
-      type: Active
-    - status: Unknown
-      type: Fallback
-spec:
-  failedJobsHistoryLimit: 5
-  jobTargetRef:
-    activeDeadlineSeconds: 3600
-    backoffLimit: 6
-    completions: 1
-    parallelism: 1
-    template:
-      metadata:
-        creationTimestamp: null
-      spec:
-        containers:
-          - command:
-              - /bin/bash
-              - '-c'
-              - >-
-                CELERY_BROKER_URL=amqp://$RABBITMQ_USERNAME:$RABBITMQ_PASSWORD@rabbitmq.sandbox-dev.svc.cluster.local:5672
-                celery worker -A test_data.tasks.all --loglevel=info -c 1
-                -n worker.all
-            env:
-              - name: APP_CFG
-                value: test_data.config.dev
-              - name: C_FORCE_ROOT
-                value: 'true'
-              - name: RABBITMQ_USERNAME
-                valueFrom:
-                  secretKeyRef:
-                    key: rabbitmq-user
-                    name: develop
-              - name: RABBITMQ_PASSWORD
-                valueFrom:
-                  secretKeyRef:
-                    key: rabbitmq-pass
-                    name: develop
-            image: >-
-              111.dkr.ecr.us-east-1.amazonaws.com/test-data-celery:DEV-2021.12.27.0
-            imagePullPolicy: IfNotPresent
-            lifecycle:
-              postStart:
-                exec:
-                  command:
-                    - /bin/sh
-                    - '-c'
-                    - echo startup >> /tmp/startup.log
-              preStop:
-                exec:
-                  command:
-                    - /bin/sh
-                    - '-c'
-                    - echo shutdown >> /tmp/shutdown.log
-            name: celery-backend
-            resources:
-              limits:
-                cpu: 1700m
-                memory: 3328599654400m
-              requests:
-                cpu: 1600m
-                memory: 3Gi
-            securityContext:
-              allowPrivilegeEscalation: false
-              privileged: false
-              readOnlyRootFilesystem: false
-            terminationMessagePath: /tmp/termmsg.log
-            terminationMessagePolicy: File
-            volumeMounts:
-              - mountPath: /tmp
-                name: temp
-              - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
-                name: default-token-s7vl6
-                readOnly: true
-  maxReplicaCount: 100
-  pollingInterval: 5
-  rolloutStrategy: gradual
-  scalingStrategy: {}
-  successfulJobsHistoryLimit: 5
-  triggers:
-    - metadata:
-        host: amqp://guest:guest@rabbitmq.sandbox-dev.svc.cluster.local:5672/vhost
-        mode: QueueLength
-        queueName: celery
-        value: '1'
-      type: rabbitmq
-
-","1. mode may not work in some cases.
-Try changing
-- metadata:
-    host: amqp://guest:guest@rabbitmq.sandbox-dev.svc.cluster.local:5672/vhost
-    mode: QueueLength
-    queueName: celery
-    value: '1'
-
-to
-- metadata:
-    host: amqp://guest:guest@rabbitmq.sandbox-dev.svc.cluster.local:5672/vhost
-    queueName: celery
-    queueLength: '1'
-
-",KEDA
-"I have a Kubernetes deployment that deploys a pod that will pull down a single message from a RabbitMQ queue. I'm also using KEDA to scale the deployment based on the RabbitMQ messages currently in the queue. It correctly scales to 0, and then to 1 whenever there is a message, but the deployment never scales above 1. My current deployment YAML file:
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: scale
-  labels:
-    app: scale
-spec:
-  replicas: 1
-  selector: 
-    matchLabels:
-      app: scale
-  template:
-    metadata:
-      labels:
-        app: scale
-    spec:
-      containers:
-        - name: scale-deployment
-          image: bharper7/scale:v1
-          imagePullPolicy: Never
-
-My KEDA YAML file:
-apiVersion: keda.sh/v1alpha1
-kind: ScaledObject
-metadata:
-  name: scale-keda-deployment
-  labels:
-    app: scale
-    deploymentName: scale
-spec:
-  scaleTargetRef:
-    name: scale
-  pollingInterval: 5
-  minReplicaCount: 0
-  maxReplicaCount: 10
-  cooldownPeriod: 60
-  triggers:
-  - type: rabbitmq
-    metadata:
-      host: amqp://EmZn4ScuOPLEU1CGIsFKOaQSCQdjhzca:dJhLl2aVF78Gn07g2yGoRuwjXSc6tT11@192.168.49.2:30861
-      mode: QueueLength
-      value: '1'
-      queueName: scaleTest
-
-The KEDA operator log files:
-2021-04-28T19:25:39.846Z        INFO    scaleexecutor   Successfully updated ScaleTarget        {""scaledobject.Name"": ""scale-keda-deployment"", ""scaledObject.Namespace"": ""default"", ""scaleTarget.Name"": ""scale"", ""Original Replicas Count"": 0, ""New Replicas Count"": 1}
-2021-04-28T19:25:40.272Z        INFO    controllers.ScaledObject        Reconciling ScaledObject        {""ScaledObject.Namespace"": ""default"", ""ScaledObject.Name"": ""scale-keda-deployment""}
-
-I know the RabbitMQ connection is working and that KEDA knows which deployment and which queue to look at. All of this is proven by the fact that the deployment scales to 0 and to 1. But for some reason it never goes beyond 1, even when I have 50 messages in the queue.
-So far I've tried messing around with the pollingInterval and cooldownPeriod settings, but neither seems to have an effect. Any ideas?
-Edit:
-I removed the replicas value from the deployment YAML file as suggested below. And also looked at the HPA logs.
-The generated HPA logs:
-Name:                                           keda-hpa-scale-keda-deployment
-Namespace:                                      default
-Labels:                                         app=scale
-                                                app.kubernetes.io/managed-by=keda-operator
-                                                app.kubernetes.io/name=keda-hpa-scale-keda-deployment
-                                                app.kubernetes.io/part-of=scale-keda-deployment
-                                                app.kubernetes.io/version=2.1.0
-                                                deploymentName=scale
-                                                scaledObjectName=scale-keda-deployment
-Annotations:                                    <none>
-CreationTimestamp:                              Wed, 28 Apr 2021 11:24:15 +0100
-Reference:                                      Deployment/scale
-Metrics:                                        ( current / target )
-  ""rabbitmq-scaleTest"" (target average value):  4 / 20
-Min replicas:                                   1
-Max replicas:                                   10
-Deployment pods:                                1 current / 1 desired
-Conditions:
-  Type            Status  Reason              Message
-  ----            ------  ------              -------
-  AbleToScale     True    ReadyForNewScale    recommended size matches current size
-  ScalingActive   True    ValidMetricFound    the HPA was able to successfully calculate a replica count from external metric rabbitmq-scaleTest(&LabelSelector{MatchLabels:map[string]string{scaledObjectName: scale-keda-deployment,},MatchExpressions:[]LabelSelectorRequirement{},})
-  ScalingLimited  False   DesiredWithinRange  the desired count is within the acceptable range
-Events:
-  Type     Reason             Age                From                       Message
-  ----     ------             ----               ----                       -------
-  Warning  FailedGetScale     22m (x6 over 23m)  horizontal-pod-autoscaler  deployments/scale.apps ""scale"" not found
-  Normal   SuccessfulRescale  15m                horizontal-pod-autoscaler  New size: 1; reason: All metrics below target
-
-This is after sending 5 messages into the queue. For some reason it only thinks 1 pod is needed even though I set value to 1 in the KEDA YAML file.
-","1. In case anyone else stumbles across this issue, I managed to find a work around. According the docs, the use of queueLength is deprecated and mode should be used instead. But changing back to the deprecated tags worked for me, for some reason the new tags don't work. Not really a proper fix but at least it got my deployment scaling as expected. My KEDA deployment file now looks like this:
-apiVersion: keda.sh/v1alpha1
-kind: ScaledObject
-metadata:
-  name: keda-deployment
-  labels:
-    apps: ffmpeg
-    deploymentName: ffmpeg
-spec:
-  scaleTargetRef:
-    name: ffmpeg
-  pollingInterval: 10
-  cooldownPeriod: 1200
-  maxReplicaCount: 50
-  triggers:
-  - type: rabbitmq
-    metadata:
-      host: amqp://EmZn4ScuOPLEU1CGIsFKOaQSCQdjhzca:dJhLl2aVF78Gn07g2yGoRuwjXSc6tT11@192.168.49.2:30861
-      queueName: files
-      queueLength: '1'
-
-
-2. Please check the KEDA version. If you are using a version at or below 2.1, then you need to add the queueLength parameter. The default value of queueLength is 20, which means one pod handles up to 20 messages; the ScaledObject will not add a pod until there are 21 messages. You have to set queueLength to 1 in order to increase the pod count by one for every message.
-Link to the doc: https://keda.sh/docs/2.1/scalers/rabbitmq-queue/
-The mode and value parameters were added in KEDA version 2.2.
-
-3. I was trying to configure RabbitMQ's excludeUnacknowledged parameter, so I had to use the HTTP protocol instead of AMQP.
-KEDA was doing nothing even though there were messages in RabbitMQ, and there were no error logs in the operator pod.
-The port in the URI is different for AMQP and HTTP; that was my mistake, and once I corrected it KEDA worked fine.
-amqp://guest:password@localhost:5672/vhost
-http://guest:password@localhost:15672/path/vhost
-https://keda.sh/docs/2.13/scalers/rabbitmq-queue/#example
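-A minimal sketch of such a trigger using the HTTP protocol (host, queue and values are placeholders; note the management port 15672):
-triggers:
-  - type: rabbitmq
-    metadata:
-      protocol: http
-      host: http://guest:password@rabbitmq.default.svc.cluster.local:15672/vhost
-      queueName: files
-      mode: QueueLength
-      value: '1'
-      excludeUnacknowledged: 'true'   # per the note above, only supported with the http protocol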
-",KEDA
-"Is it possible to have Knative automatically create K8s Ingress resources?
-Hello all,
-Based on the following lines from the documentation, I was wondering whether I can have Knative automatically create the Ingress resources for my service. I haven't found details on this in the documentation.
-After the service has been created, Knative performs the following tasks:
-
-- Creates a new immutable revision for this version of the app.
-- Performs network programming to create a route, ingress, service, and load balancer for your app.
-- Automatically scales your pods up and down based on traffic, including to zero active pods.
-
-Example:
-Taking the Service and Ingress definitions below, would it be possible to abstract away the Ingress YAML and have Knative take care of its creation automatically for Services?
-apiVersion: serving.knative.dev/v1
-kind: Service
-metadata:
-  name: hello
-  namespace: knative
-spec:
-  template:
-    metadata:
-      labels:
-        app: nonprofit
-      annotations:
-        queue.sidecar.serving.knative.dev/resourcePercentage: ""10""
-        autoscaling.knative.dev/class: ""kpa.autoscaling.knative.dev""
-        autoscaling.knative.dev/target: ""40""
-        autoscaling.knative.dev/min-scale: ""1""
-        autoscaling.knative.dev/max-scale: ""3""
-    spec:
-      containers:
-        - image: gcr.io/knative-samples/helloworld-java
-          resources:
-            requests:
-              cpu: 50m
-              memory: 100M
-            limits:
-              cpu: 200m
-              memory: 200M
-          ports:
-            - containerPort: 8080
-          env:
-            - name: TARGET
-              value: ""Sunny Day""
-  traffic:
-  - tag: latest
-    latestRevision: true
-    percent: 100
----
-apiVersion: networking.k8s.io/v1
-kind: Ingress
-metadata:
-  name: knative-hello-ingress
-  namespace: knative
-  annotations:
-    nginx.ingress.kubernetes.io/upstream-vhost: ""hello.knative""
-spec:
-  ingressClassName: ""ingress-generic""
-  rules:
-  - host: ""hello-knative.com""
-    http:
-      paths:
-      - pathType: Prefix
-        path: ""/""
-        backend:
-          service:
-            name: hello
-            port:
-              number: 80
-
-Thank you,
-Haven't tried anything as I haven't found details in the documentation regarding this.
-","1. Unfortunately, the v1 Ingress API in Kubernetes does not have sufficient capabilities to express Knative's routing requirements.  Knative does support several ingress implementations (including Istio, Contour, and the Gateway API), but no one has written a plugin for the Nginx Ingress annotations.
-Some of the capabilities that are missing from the Kubernetes Ingress API which are needed by Knative include:
-
-Backend traffic splits / weights
-Setting request headers to the backend server
-Requesting HTTP/2 or websockets protocol support
-
-If you're willing to use beta software, the Gateway API plugin is mostly feature complete and should plug into a variety of ingress providers. Unfortunately, Nginx does not appear to be on that list.
-
-2. But you can route the nginx ingress to the Kourier ingress:
-apiVersion: operator.knative.dev/v1beta1
-kind: KnativeServing
-metadata:
-  name: knative-serving
-  namespace: knative-serving
-spec:
-  ingress:
-    kourier:
-      enabled: true
-  config:
-    network:
-      ingress-class: ""kourier.ingress.networking.knative.dev""
-    deployment:
-      registries-skipping-tag-resolving: ""kind.local,ko.local,dev.local,docker.io,<our acr>""
-
-And your ingress rule annotation should be:
-nginx.ingress.kubernetes.io/upstream-vhost: hello.knative.svc.cluster.local
-
-This has been working for me for the past hour ;).
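-For completeness, a rough sketch of the nginx Ingress that forwards to Kourier (assuming a default Kourier install whose gateway Service is named kourier in the kourier-system namespace; the host and names are placeholders):
-apiVersion: networking.k8s.io/v1
-kind: Ingress
-metadata:
-  name: hello-via-kourier
-  namespace: kourier-system           # same namespace as the kourier Service
-  annotations:
-    nginx.ingress.kubernetes.io/upstream-vhost: hello.knative.svc.cluster.local
-spec:
-  ingressClassName: nginx
-  rules:
-    - host: hello-knative.com
-      http:
-        paths:
-          - pathType: Prefix
-            path: /
-            backend:
-              service:
-                name: kourier         # Kourier's gateway Service
-                port:
-                  number: 80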
-",Knative
-"I'm using Quarkus framework and the library called Funq with knative event bindings... Meaning that, I'm listening to Kubernetes cloud events in my Quarkus application and it's working properly.
-Just to illustrate there is a piece of code:
-@Funq
-@CloudEventMapping(trigger = ""fakeEvent"")
-fun fakeEventListener(payload: String) {
-    log.info(""$payload"")
-
-    return ""received""
-}
-
-curl -v ""http://broker-ingress.knative-eventing.svc.cluster.local/brokerName/default"" -X POST -H ""Ce-Id: 1234"" -H ""Ce-Specversion: 1.0"" -H ""Ce-Type: fakeEvent"" -H ""Ce-Source: curl"" -H ""Content-Type: application/json"" -d '""{}""'
-
-Is there any way to send a curl request to the broker and, instead of receiving a 202 status code, get the response from the function?
-
-I know that I can use the return value of the **fakeEventListener** to trigger another cloud event, but instead I need to have the response available in the caller (the curl request in this example, or any HTTP client library).
-If that is not possible, how can I synchronously get the response from Funq POST calls?
-","1. It is not possible to get this when curling the Broker. The Broker allows to decouple the components and send events asynchrnously. Also there could be multiple Triggers behind the Broker.
-When you need the response of your Knative Function synchronously, you need to call the function directly (e.g. via curl).
-If you don't want to lose the Broker and simply need the response of the Function somewhere, you can make use of the reply event of your Broker/Trigger: the response of a Trigger's subscriber (the Function in your case) is sent back to the Broker as a new event. So you could add another Trigger that filters on the response event type and sends this event somewhere the responses are handled.
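-A rough sketch of such a reply Trigger (the reply event type and the subscriber are assumptions; use whatever type your function actually sets on its response):
-apiVersion: eventing.knative.dev/v1
-kind: Trigger
-metadata:
-  name: fake-event-replies
-  namespace: default
-spec:
-  broker: default                     # the broker your events are posted to
-  filter:
-    attributes:
-      type: fakeEvent.response        # assumed type of the reply event
-  subscriber:
-    ref:
-      apiVersion: serving.knative.dev/v1
-      kind: Service
-      name: reply-handler             # assumed service that handles the responses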
-",Knative
-"I configured my kubeflow gpu server recently, but the GPU does not get detected in there - the reason is cluster using wrong runtime.
-I prefer not to change the default runtime system-wide and tried modifying the ""PodDefault"" list out there in manifest to add gpu runtime option like that:
-apiVersion: kubeflow.org/v1alpha1
-kind: PodDefault
-metadata:
-  name: gpu-runtime-nv
-  namespace: kubeflow
-spec:
-  selector:
-    matchLabels:
-      gpu-runtime-nv: ""true""
-  desc: ""Select nvidia runtime""
-  runtimeClassName: nvidia
-
-
-Since there was an example with volumeMounts, I expected that any standard part of the spec could be injected that way before starting a notebook, but I am getting an error indicating that the PodDefault custom resource definition does not accept a spec field named runtimeClassName. I don't understand CRDs well enough to make this work, but perhaps there is a simpler way of achieving what I want.
-Bottom line:
-How do I specify runtimeClassName for Kubeflow notebooks when starting them?
-","1. As you mentioned, the PodDefault CRD doesn't allow runtimeClassName in the spec.
-Luckily, Kubeflow ships with a Notebook CRD which allows you to specify the runtimeClassName like so:
-apiVersion: kubeflow.org/v1
-kind: Notebook
-metadata:
-  name: nvidia-runtime-notebook
-  namespace: kubeflow-user-example-com
-spec:
-  template:
-    spec:
-      runtimeClassName: nvidia
-      containers:
-        - name: notebook
-          image: kubeflownotebookswg/jupyter-scipy:v1.8.0
-          resources:
-            limits:
-              nvidia.com/gpu: 1
-          volumeMounts:
-            - name: workspace
-              mountPath: /home/jovyan
-      volumes:
-        - name: workspace
-          persistentVolumeClaim:
-            claimName: my-pvc
-
-If you're unsure which image to use (or any other value, e.g. CPU) you can go through the Kubeflow UI for creating a Notebook and use the available values there as a guideline (as shown in the below screenshot).
-
-However, I would recommend using the kubeflownotebookswg/jupyter-tensorflow-cuda-full:v1.8.0 image if you're planning on using any GPU features, or you may receive errors like Could not find cuda drivers on your machine, GPU will not be used.
-",Kubeflow
-"I am trying to use the kubeflow v2 following the document below but but I am not able to run this succesfully. Below is my code snippet, Could you please let me know if something wrong.
-https://www.kubeflow.org/docs/components/pipelines/v2/components/containerized-python-components/#1-source-code-setup
-kfp                       2.7.0
-kfp-kubernetes            1.2.0
-kfp-pipeline-spec         0.3.0
-kfp-server-api            2.0.5
-
-kfp component build src/ --component-filepattern my_component.py
-
-~/Doc/S/d/s/python-containerized ❯ tree                                  
-.
-├── kpf-test.ipynb
-├── kubeflow-demo.yaml
-└── src
-    ├── Dockerfile
-    ├── __pycache__
-    │   └── my_component.cpython-312.pyc
-    ├── component_metadata
-    │   └── dataset_download.yaml
-    ├── kfp_config.ini
-    ├── my_component.py
-    └── runtime-requirements.txt
-
-#my_component.py
-from kfp import dsl
-
-@dsl.component(base_image='python:3.10',
-               target_image='mohitverma1688/my-component:v0.4',
-               packages_to_install=['pathlib','boto3','requests','kfp-kubernetes'])
-
-
-
-def dataset_download(url: str, base_path:str, input_bucket:str):
-
-    import os
-    import requests
-    import zipfile
-    from pathlib import Path
-    import argparse
-    import boto3
-    from botocore.client import Config
-    
-    s3 = boto3.client(
-        ""s3"",
-        endpoint_url=""http://minio-service.kubeflow:9000"",
-        aws_access_key_id=""minio"",
-        aws_secret_access_key=""minio123"",
-        config=Config(signature_version=""s3v4""),
-    )
-    # Create export bucket if it does not yet exist
-    response = s3.list_buckets()
-    input_bucket_exists = False
-    for bucket in response[""Buckets""]:
-        if bucket[""Name""] == input_bucket:
-            input_bucket_exists = True
-            
-    if not input_bucket_exists:
-        s3.create_bucket(ACL=""public-read-write"", Bucket=input_bucket)
-
-
-    # Save zip files to S3 import_bucket
-    data_path = Path(base_path)
-    
-    if data_path.is_dir():
-      print(f""{data_path} directory exists."")
-    else:
-      print(f""Did not find {data_path} directory, creating one..."")
-      data_path.mkdir(parents=True,exist_ok=True)
-
-
-    # Download pizza , steak and sushi data
-    with open(f""{data_path}/data.zip"", ""wb"") as f:
-        request = requests.get(f""{url}"")
-        print(f""Downloading data from {url}..."")
-        f.write(request.content)
-        for root, dir, files in os.walk(data_path):
-            for filename in files:
-                local_path = os.path.join(root,filename)
-                s3.upload_file(
-                   local_path,
-                   input_bucket,
-                   ""data.zip"",
-                   ExtraArgs={""ACL"": ""public-read""},
-                 )
-
- 
-    with zipfile.ZipFile(data_path/""data.zip"", ""r"") as zip_ref:
-      print(""Unzipping data..."")
-      zip_ref.extractall(data_path)
-
-    if __name__ == ""__main__"":
-        download(url, base_path, input_bucket)
-
-#%%writefile pipeline.py
-
-import src.my_component
-
-BASE_PATH=""/data""
-URL=""https://github.com/mrdbourke/pytorch-deep-learning/raw/main/data/pizza_steak_sushi.zip""
-INPUT_BUCKET=""datanewbucket""
-
-@dsl.pipeline(name='CNN-TinyVG-Demo',
-              description='This pipeline is a demo for training,evaluating and deploying Convutional Neural network',
-              display_name='Kubeflow-MlFLow-Demo')
-
-
-
-def kubeflow_pipeline(base_path: str = BASE_PATH,
-                     url:str = URL,
-                     input_bucket:str = INPUT_BUCKET):
-    pvc1 = kubernetes.CreatePVC(
-        # can also use pvc_name instead of pvc_name_suffix to use a pre-existing PVC
-        pvc_name='kubeflow-pvc3',
-        access_modes=['ReadWriteOnce'],
-        size='500Mi',
-        storage_class_name='standard',
-    )
-    task1 = dataset_download(base_path=base_path,
-                            url=url,
-                            input_bucket=input_bucket)
-    task1.set_caching_options(False)
-    kubernetes.mount_pvc(
-        task1,
-        pvc_name=pvc1.outputs['name'],
-        mount_path='/data',
-    )
-
-compiler.Compiler().compile(kubeflow_pipeline, 'kubeflow-demo.yaml')
-
-from kfp.client import Client
-
-client = Client(host='http://localhost:8002')
-run = client.create_run_from_pipeline_package(
-    'kubeflow-demo.yaml',
-     )
-
-0524 12:18:56.775872      30 cache.go:116] Connecting to cache endpoint 10.96.39.24:8887
-WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: 
-https://pip.pypa.io/warnings/venv
-[KFP Executor 2024-05-24 12:19:19,641 INFO]: --component_module_path is not specified. Looking for component `dataset_download` in config file `kfp_config.ini` instead
-/usr/local/lib/python3.10/site-packages/kfp/dsl/kfp_config.py:69: UserWarning: No existing KFP Config file found
-  warnings.warn('No existing KFP Config file found')
-Traceback (most recent call last):
-  File ""/usr/local/lib/python3.10/runpy.py"", line 196, in _run_module_as_main
-    return _run_code(code, main_globals, None,
-  File ""/usr/local/lib/python3.10/runpy.py"", line 86, in _run_code
-    exec(code, run_globals)
-  File ""/usr/local/lib/python3.10/site-packages/kfp/dsl/executor_main.py"", line 109, in <module>
-    executor_main()
-  File ""/usr/local/lib/python3.10/site-packages/kfp/dsl/executor_main.py"", line 75, in executor_main
-    raise RuntimeError('No components found in `kfp_config.ini`')
-RuntimeError: No components found in `kfp_config.ini`
-I0524 12:19:19.748954      30 launcher_v2.go:151] publish success.
-F0524 12:19:19.749580      30 main.go:49] failed to execute component: exit status 1
-time=""2024-05-24T12:19:20.313Z"" level=info msg=""sub-process exited"" argo=true error=""<nil>""
-Error: exit status 1
-time=""2024-05-24T12:19:20.602Z"" level=info msg=""sub-process exited"" argo=true error=""<nil>""
-Error: exit status 1
-
-
-I am wondering whether I have understood the document correctly. Is this the right way to run it? I am using a Jupyter notebook.
-","1. I found the erero in my configuration. I have to remove below code
-if name == ""main"":
-download(url, base_path, input_bucket)
-",Kubeflow
-"I'm exploring Kubeflow as an option to deploy and connect various components of a typical ML pipeline. I'm using docker containers as Kubeflow components and so far I've been unable to successfully use ContainerOp.file_outputs object to pass results between components.
-Based on my understanding of the feature, creating and saving to a file that's declared as one of the file_outputs of a component should cause it to persist and be accessible for reading by the following component.
-This is how I attempted to declare this in my pipeline python code:
-import kfp.dsl as dsl 
-import kfp.gcp as gcp
-
-@dsl.pipeline(name='kubeflow demo')
-def pipeline(project_id='kubeflow-demo-254012'):
-    data_collector = dsl.ContainerOp(
-        name='data collector', 
-        image='eu.gcr.io/kubeflow-demo-254012/data-collector',
-        arguments=[ ""--project_id"", project_id ],
-        file_outputs={ ""output"": '/output.txt' }
-    )   
-    data_preprocessor = dsl.ContainerOp(
-        name='data preprocessor',
-        image='eu.gcr.io/kubeflow-demo-254012/data-preprocessor',
-        arguments=[ ""--project_id"", project_id ]
-    )
-    data_preprocessor.after(data_collector)
-    #TODO: add other components
-if __name__ == '__main__':
-    import kfp.compiler as compiler
-    compiler.Compiler().compile(pipeline, __file__ + '.tar.gz')
-
-In the Python code for the data-collector.py component I fetch the dataset and then write it to output.txt. I'm able to read from the file within the same component, but not inside data-preprocessor.py, where I get a FileNotFoundError.
-Is the use of file_outputs invalid for container-based Kubeflow components, or am I using it incorrectly in my code? If it's not an option in my case, is it possible to programmatically create Kubernetes volumes inside the pipeline declaration Python code and use them instead of file_outputs?
-","1. Files created in one Kubeflow pipeline component are local to the container. To reference it in the subsequent steps, you would need to pass it as:
-data_preprocessor = dsl.ContainerOp(
-        name='data preprocessor',
-        image='eu.gcr.io/kubeflow-demo-254012/data-preprocessor',
-        arguments=[""--fetched_dataset"", data_collector.outputs['output'],
-                   ""--project_id"", project_id,
-                  ]
-
-Note: data_collector.outputs['output'] will contain the actual string contents of the file /output.txt (not a path to the file). If you want for it to contain the path of the file, you'll need to write the dataset to shared storage (like s3, or a mounted PVC volume) and write the path/link to the shared storage to  /output.txt. data_preprocessor can then read the dataset based on the path.
-
-2. There are three main steps:
-
-save a outputs.txt file which will include data/parameter/anything that you want to pass to next component.
-Note: it should be at the root level i.e /output.txt
-pass file_outputs={'output': '/output.txt'} as arguments as shown is example.
-inside a container_op which you will write inside dsl.pipeline pass argument (to respective argument of commponent which needs output from earlier component) as comp1.output (here comp1 is 1st component which produces output & stores it in /output.txt) 
-
-import kfp
-from kfp import dsl
-
-def SendMsg(
-    send_msg: str = 'akash'
-):
-    return dsl.ContainerOp(
-        name = 'Print msg', 
-        image = 'docker.io/akashdesarda/comp1:latest', 
-        command = ['python', 'msg.py'],
-        arguments=[
-            '--msg', send_msg
-        ],
-        file_outputs={
-            'output': '/output.txt',
-        }
-    )
-
-def GetMsg(
-    get_msg: str
-):
-    return dsl.ContainerOp(
-        name = 'Read msg from 1st component',
-        image = 'docker.io/akashdesarda/comp2:latest',
-        command = ['python', 'msg.py'],
-        arguments=[
-            '--msg', get_msg
-        ]
-    )
-
-@dsl.pipeline(
-    name = 'Pass parameter',
-    description = 'Passing para')
-def  passing_parameter(send_msg):
-    comp1 = SendMsg(send_msg)
-    comp2 = GetMsg(comp1.output)
-
-
-if __name__ == '__main__':
-  import kfp.compiler as compiler
-  compiler.Compiler().compile(passing_parameter, __file__ + '.tar.gz')
-
-
-3. You don't have to write the data to shared storage; you can use kfp.dsl.InputArgumentPath to pass an output from a Python function to the input of a container op.
-import os
-import kfp
-
-def download_archive_step(s3_src_path):
-    # Download step here
-    # ...
-
-@dsl.pipeline(
-    name = 'Build Model Server Pipeline',
-    description = 'Build a kserve model server pipeline.'
-)
-def build_model_server_pipeline(s3_src_path):
-    download_s3_files_task = download_archive_step(s3_src_path)
-    tarball_path = '/tmp/artifact.tar'
-    artifact_tarball = kfp.dsl.InputArgumentPath(download_s3_files_task.outputs['output_tarball'],
-                                                 path=tarball_path) 
-    build_container = kfp.dsl.ContainerOp(name = 'build_container',
-                                          image = 'python:3.8',
-                                          command = ['sh', '-c'],
-                                          arguments = ['ls -l ' + tarball_path + ';'],
-                                          artifact_argument_paths = [artifact_tarball])
-
-if __name__ == '__main__':
-    # Extract the filename without the extension and compile the pipeline
-    filename = os.path.splitext(__file__)[0]
-    kfp.compiler.Compiler().compile(build_model_server_pipeline, filename + '.yaml')
-
-",Kubeflow
-"I am writing a Kubeflow component which reads an input query and creates a dataframe, roughly as:
-from kfp.v2.dsl import component 
-
-@component(...)
-def read_and_write():
-    # read the input query 
-    # transform to dataframe 
-    sql.to_dataframe()
-
-I was wondering how I can pass this dataframe to the next operation in my Kubeflow pipeline.
-Is this possible? Or do I have to save the dataframe as a CSV or in another format and then pass the output path?
-Thank you
-","1. You need to use the concept of the Artifact. Quoting:
-
-Artifacts represent large or complex data structures like datasets or models, and are passed into components as a reference to a file path.
-
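-As a minimal sketch of that idea applied to the question's read_and_write component (the component bodies are assumptions, not the asker's actual query logic):
-from kfp.v2.dsl import Dataset, Input, Output, component
-
-@component(packages_to_install=['pandas'])
-def read_and_write(df_out: Output[Dataset]):
-    import pandas as pd
-    df = pd.DataFrame({'a': [1, 2, 3]})    # stand-in for sql.to_dataframe()
-    df.to_csv(df_out.path, index=False)    # persisted at the artifact's file path
-
-@component(packages_to_install=['pandas'])
-def next_step(df_in: Input[Dataset]):
-    import pandas as pd
-    df = pd.read_csv(df_in.path)           # reads the artifact written by read_and_write
-    print(df.shape)
-
-In the pipeline you would then wire the two steps together with next_step(df_in=read_and_write_task.outputs['df_out']).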
-
-2. The solution I found:
-
-When working with large sets of data, it's good practice to transfer them from component to component through an Artifact, which means sharing a common file to read (Input) and save (Output) the data.
-
-Here is one possible way to do it with a Dataset: the component loads data from BigQuery and then saves it to the output so that the df can be used by the next component:
-@component(packages_to_install=[""pandas"",""google-cloud-bigquery"", ""db-dtypes"", ""typing""])
-def load_df_from_bq_to_csv(bq_source: str, DATASET_DISPLAY_NAME: str, location: str, PROJECT_ID: str, df_output: Output[Dataset]):
-    from google.cloud import bigquery
-    
-    print(""bq_source: "", bq_source)
-    client = bigquery.Client(project=PROJECT_ID)
-    QUERY = f""""""SELECT * FROM `{bq_source}` LIMIT 1000""""""
-    df = client.query(QUERY).to_dataframe()
-    print(""Nbr of elements: "", df.shape[0])
-    
-    print(""Writing df to df_ml_features file output"")
-    df.to_csv(df_output.path, header=True,  index=False)
-
-
-@component(packages_to_install=[""pandas"", ""db-dtypes"", ""typing""])
-def print_df_nbr_rows_component(df_input: Input[Dataset]):
-    import pandas as pd
-    
-    df = pd.read_csv(df_input.path, header=0)
-    print(""df nbr of elements: "", df.shape[0])
-
-
-@kfp_pipeline(name=PIPELINE_NAME, pipeline_root=PIPELINE_ROOT)
-def pipeline(
-    bq_source: str,
-    DATASET_DISPLAY_NAME: str,
-    location: str = REGION,
-    PROJECT_ID: str = PROJECT_ID,
-):
-    load_df_operation = load_df_from_bq_to_csv(bq_source=bq_source, DATASET_DISPLAY_NAME=DATASET_DISPLAY_NAME, location=location, PROJECT_ID=PROJECT_ID)
-    print_df_nbr_rows_component(df_input=load_df_operation.outputs[""df_output""])
-
-
-",Kubeflow
-"I am very very new to both AI and MLOP's, please forgive me if my question is dumb. I am trying to learn about kubeflow but there is too much information on kubeflow documentation, then there are version >2 and <2 , these things adds to the complexity. I have python scripts to train a TinyVGG CNN model and then evaluate a model which works perfectly fine. I want to use those scripts to convert them into a pipeling, however I am not getting how to create kfp components from those files.  Please help me to guide , how I can break these scripts into components and then use them in the pipeline. One of my major problem is how to pass complex data type  liketorch.utils.data.dataloader.DataLoader from one component to other. I have checked kubeflow document , but it gives a simple example.
-https://www.kubeflow.org/docs/components/pipelines/v2/components/containerized-python-components/
-This is how my scripts setup looks.
-.
-├── __pycache__
-│   ├── data_setup.cpython-310.pyc
-│   ├── engine.cpython-310.pyc
-│   ├── model_builder.cpython-310.pyc
-│   └── utils.cpython-310.pyc
-├── data_setup.py
-├── engine.py
-├── get_data.py
-├── model_builder.py
-├── predict.py
-├── train.py
-└── utils.py
-
-To train, I simply run the train.py module with these arguments:
-python modular_ML_code/going_modular_argparse/train.py  --num_epochs 10 --hidden_units=128 --train-dir=<path to train dir> --test-dir=<path to test_dir> 
-
-In my train.py file, I have to import modules like below
-import data_setup, engine, model_builder, utils
-...
-
-train_dataloader, test_dataloader, class_names = data_setup.create_dataloaders(
-    train_dir=train_dir,
-    test_dir=test_dir,
-    transform=data_transform,
-    batch_size=BATCH_SIZE
-)
-...
-
-
-%%writefile modular_ML_code/going_modular_argparse/train.py
-""""""
-Trains a PyTorch image classification model using device-agnostic code.
-""""""
-
-import os
-import argparse
-import torch
-import data_setup, engine, model_builder, utils
-
-from torchvision import transforms
-
-#Create a parser
-
-parser = argparse.ArgumentParser(description=""Get some hyperparameters"")
-
-# Get an arg for num_epochs
-
-parser.add_argument(""--num_epochs"", 
-                     type=int,
-                     help=""the number of epochs to train for"",
-                     default=10)
-
-# Get an arg for batch_size
-
-parser.add_argument(""--batch_size"", 
-                    default=32,
-                    type=int,
-                    help=""number of samples per batch"")
-
-                    
-# Get an arg for hidden_units
-
-parser.add_argument(""--hidden_units"",
-                    default=10,
-                    type=int,
-                    help=""number of hidden units in hidden layers"")
-
-
-# Get an arge fpr learning_rate 
-
-parser.add_argument(""--learning_rate"",
-                    default=0.001,
-                    type=float,
-                    help=""learning rate use for model"")
-
-# Create a arg for the training directory
-
-parser.add_argument(""--train_dir"",
-                    default=""modular_ML_code/data/pizza_steak_sushi/train"",
-                    type=str,
-                    help=""Directory file path to training data in standard image classification format"")
-
-
-# Create a arg for the testing directory
-
-parser.add_argument(""--test_dir"",
-                    default=""modular_ML_code/data/pizza_steak_sushi/test"",
-                    type=str,
-                    help=""Directory file path to testing data in standard image classification format"")
-
-
-
-# Get out arguments from the parser
-
-args = parser.parse_args()
-
-
-
-
-
-# Setup hyperparameters
-NUM_EPOCHS = args.num_epochs
-BATCH_SIZE = args.batch_size
-HIDDEN_UNITS = args.hidden_units
-LEARNING_RATE = args.learning_rate
-
-
-# Setup directories
-train_dir = args.train_dir
-test_dir = args.test_dir
-
-# Setup target device
-device = ""cuda"" if torch.cuda.is_available() else ""cpu""
-
-# Create transforms
-data_transform = transforms.Compose([
-  transforms.Resize((64, 64)),
-  transforms.ToTensor()
-])
-
-# Create DataLoaders with help from data_setup.py
-train_dataloader, test_dataloader, class_names = data_setup.create_dataloaders(
-    train_dir=train_dir,
-    test_dir=test_dir,
-    transform=data_transform,
-    batch_size=BATCH_SIZE
-)
-
-# Create model with help from model_builder.py
-model = model_builder.TinyVGG(
-    input_shape=3,
-    hidden_units=HIDDEN_UNITS,
-    output_shape=len(class_names)
-).to(device)
-
-# Set loss and optimizer
-loss_fn = torch.nn.CrossEntropyLoss()
-optimizer = torch.optim.Adam(model.parameters(),
-                             lr=LEARNING_RATE)
-
-# Start training with help from engine.py
-engine.train(model=model,
-             train_dataloader=train_dataloader,
-             test_dataloader=test_dataloader,
-             loss_fn=loss_fn,
-             optimizer=optimizer,
-             epochs=NUM_EPOCHS,
-             device=device)
-
-# Save the model with help from utils.py
-utils.save_model(model=model,
-                 target_dir=""models"",
-                 model_name=""05_going_modular_script_mode_tinyvgg_model.pth"")
-
-The output returns like this
-0%|                                                    | 0/10 [00:00<?, ?it/s]Epoch: 1 | train_loss: 1.1460 | train_acc: 0.2891 | test_loss: 1.0858 | test_acc: 0.2708
- 10%|████▍                                       | 1/10 [00:07<01:07,  7.45s/it]Epoch: 2 | train_loss: 1.0724 | train_acc: 0.2930 | test_loss: 1.0242 | test_acc: 0.3201
- 20%|████████▊                                   | 2/10 [00:14<00:58,  7.32s/it]Epoch: 3 | train_loss: 1.0318 | train_acc: 0.5586 | test_loss: 1.0779 | test_acc: 0.3930
- 30%|█████████████▏                              | 3/10 [00:22<00:51,  7.38s/it]Epoch: 4 | train_loss: 1.0128 | train_acc: 0.5000 | test_loss: 1.2437 | test_acc: 0.4034
- 40%|█████████████████▌                          | 4/10 [00:29<00:43,  7.32s/it]Epoch: 5 | train_loss: 1.0905 | train_acc: 0.4336 | test_loss: 1.0099 | test_acc: 0.5852
- 50%|██████████████████████                      | 5/10 [00:36<00:36,  7.30s/it]Epoch: 6 | train_loss: 0.9365 | train_acc: 0.6367 | test_loss: 1.0314 | test_acc: 0.4025
- 60%|██████████████████████████▍                 | 6/10 [00:43<00:29,  7.26s/it]Epoch: 7 | train_loss: 0.9894 | train_acc: 0.5195 | test_loss: 1.0308 | test_acc: 0.4025
- 70%|██████████████████████████████▊             | 7/10 [00:51<00:21,  7.30s/it]Epoch: 8 | train_loss: 1.0748 | train_acc: 0.5156 | test_loss: 1.0312 | test_acc: 0.3939
- 80%|███████████████████████████████████▏        | 8/10 [00:58<00:14,  7.37s/it]Epoch: 9 | train_loss: 0.9672 | train_acc: 0.3867 | test_loss: 0.9279 | test_acc: 0.6250
- 90%|███████████████████████████████████████▌    | 9/10 [01:05<00:07,  7.33s/it]Epoch: 10 | train_loss: 0.8184 | train_acc: 0.6758 | test_loss: 0.9775 | test_acc: 0.4233
-100%|███████████████████████████████████████████| 10/10 [01:13<00:00,  7.31s/it]
-[INFO] Saving model to: models/05_going_modular_script_mode_tinyvgg_model.pth
-
-
-What I want is to break these steps into different components and create an MLOps pipeline. Could you please help?
-Below is what I tried:
-import kfp
-from kfp import dsl
-from kfp import compiler
-from kfp.dsl import (Artifact, Dataset, Input, InputPath, Model, Output, OutputPath, ClassificationMetrics,
-                        Metrics, component)
-
-
-@component(
-    base_image=""python:3.10"",
-    packages_to_install=[""boto3"",""requests"",""pathlib""]
-)
-def download_dataset(input_bucket: str, data_path: str
-                    ):
-
-    """"""Download the custom data set to the Kubeflow Pipelines volume to share it among all steps""""""
-    import os
-    import zipfile
-    import requests
-    from pathlib import Path
-    import boto3
-    from botocore.client import Config
-    
-    s3 = boto3.client(
-        ""s3"",
-        endpoint_url=""http://minio-service.kubeflow:9000"",
-        aws_access_key_id=""minio"",
-        aws_secret_access_key=""minio123"",
-        config=Config(signature_version=""s3v4""),
-    )
-    # Create export bucket if it does not yet exist
-    response = s3.list_buckets()
-    input_bucket_exists = False
-    for bucket in response[""Buckets""]:
-        if bucket[""Name""] == input_bucket:
-            input_bucket_exists = True
-            
-    if not input_bucket_exists:
-        s3.create_bucket(ACL=""public-read-write"", Bucket=input_bucket)
-
-    # Save zip files to S3 import_bucket
-    data_path = Path(data_path)
-    
-    if data_path.is_dir():
-      print(f""{data_path} directory exists."")
-    else:
-      print(f""Did not find {data_path} directory, creating one..."")
-      data_path.mkdir(parents=True,exist_ok=True)
-
-    # Download pizza , steak and sushi data
-    with open(data_path/ ""pizza_steak_sushi.zip"", ""wb"") as f:
-        request = requests.get(""https://github.com/mrdbourke/pytorch-deep-learning/raw/main/data/pizza_steak_sushi.zip"")
-        print(""Downloading pizza, steak, sushi data..."")
-        f.write(request.content)
-        for root, dir, files in os.walk(data_path):
-            for filename in files:
-                local_path = os.path.join(root,filename)
-                s3.upload_file(
-                   local_path,
-                   input_bucket,
-                   f""{local_path}"",
-                   ExtraArgs={""ACL"": ""public-read""},
-                 )  
-             
-
-@component(
-    base_image=""python:3.10"",
-    packages_to_install=[""torch"",""torchvision"",""boto3"",""pathlib"",""requests""]
-)
-def process_data(
-    input_bucket: str,
-    data_path: str,
-    train_dir: str, 
-    test_dir: str, 
-    batch_size: int, 
-    num_workers: int=0,
-     ):
-    
-  
-    import os
-    from torchvision import datasets, transforms
-    from torch.utils.data import DataLoader
-    import boto3
-    import zipfile
-    from pathlib import Path
-    from botocore.client import Config
-    import requests
-
-  
-    def create_dataloaders(
-      train_dir: str, 
-      test_dir: str, 
-      transform: transforms.Compose, 
-      batch_size: int, 
-      num_workers: int=num_workers
-        
-      ):
-      
-      # Use ImageFolder to create dataset(s)
-      train_data = datasets.ImageFolder(train_dir, transform=transform)
-      test_data = datasets.ImageFolder(test_dir, transform=transform)
-  
-      # Get class names
-      class_names = train_data.classes
-  
-      # Turn images into data loaders
-      train_dataloader = DataLoader(
-          train_data,
-          batch_size=batch_size,
-          shuffle=True,
-          num_workers=num_workers,
-          pin_memory=True,
-      )
-      test_dataloader = DataLoader(
-          test_data,
-          batch_size=batch_size,
-          shuffle=False, # don't need to shuffle test data
-          num_workers=num_workers,
-          pin_memory=True,
-      )
-  
-      return train_dataloader, test_dataloader, class_names
-  
-    data_transform =  transforms.Compose([
-                  transforms.Resize((64, 64)),
-                  transforms.ToTensor()
-                  ])
-    
-    data_path = Path(data_path)
-    image_path = data_path / ""pizza_steak_sushi""
-    
-    if image_path.is_dir():
-      print(f""{image_path} directory exists."")
-    else:
-      print(f""Did not find {image_path} directory, creating one..."")
-      image_path.mkdir(parents=True,exist_ok=True)
-
-    with open(data_path/ ""pizza_steak_sushi.zip"", ""wb"") as f:
-      request = requests.get(""https://github.com/mrdbourke/pytorch-deep-learning/raw/main/data/pizza_steak_sushi.zip"")
-      print(""Downloading pizza, steak, sushi data..."")
-      f.write(request.content)
-
-    #Unzip the data
-
-    with zipfile.ZipFile(data_path/""pizza_steak_sushi.zip"", ""r"") as zip_ref:
-      print(""Unzipping pizza, steak, sushi data..."")
-      zip_ref.extractall(image_path)
-
-    # Remove the zip file 
-    os.remove(data_path / ""pizza_steak_sushi.zip"")
-        
-   
-
-    # Create DataLoaders with help from data_setup.py
-    train_dataloader, test_dataloader, class_names = create_dataloaders(
-                                                            train_dir=train_dir,
-                                                            test_dir=test_dir,
-                                                            transform=data_transform,
-                                                            batch_size=batch_size
-                                                            )
-
-  
-
-
-from kfp import compiler
-
-INPUT_BUCKET = ""inputdata""
-DATA_PATH = ""dataset""
-TRAIN_DIR = ""dataset/pizza_steak_sushi/train""
-TEST_DIR = ""dataset/pizza_steak_sushi/test""
-BATCH_SIZE = 32
-HIDDEN_UNITS = 10
-
-@dsl.pipeline(
-        name=""End-to-End-MNIST"",
-        description=""A sample pipeline to demonstrate multi-step model training, evaluation, export, and serving"",
-    )   
-    
-def my_pipeline(input_bucket: str = INPUT_BUCKET,
-                 data_path: str = DATA_PATH,
-                 train_dir: str = TRAIN_DIR,
-                 test_dir: str = TEST_DIR,
-                 batch_size: int = BATCH_SIZE,
-                 hidden_units: int = HIDDEN_UNITS
-                ):
-    
-    import_data_op =  download_dataset(input_bucket=INPUT_BUCKET,
-                     data_path=DATA_PATH)
-
-   
-    create_data_loader_op = process_data(input_bucket=INPUT_BUCKET,
-                                         data_path=DATA_PATH,
-                                         train_dir=TRAIN_DIR,
-                                         test_dir=TEST_DIR,
-                                         batch_size=BATCH_SIZE,                                   
-                                         )
-    create_model_op = create_model(hidden_units=HIDDEN_UNITS)
-    
-    create_data_loader_op.after(import_data_op)
-    create_model_op.after(create_data_loader_op)
-    
-
-if __name__ == '__main__':
-    compiler.Compiler().compile( my_pipeline,'pipeline2.yaml')
-
-from kfp.client import Client
-
-client = Client(host='http://localhost:8002')
-run = client.create_run_from_pipeline_package(
-    'pipeline2.yaml',
-)
-
-Now I want to pass this class_names assignment to the other step, which is to create a model:
-@component(
-    base_image=""python:3.10"",
-    packages_to_install=[""torch"",""torchvision"",""boto3"",""pathlib"",""requests""]
-)
-
-def train_model(
-    hidden_units: int, 
-     ):
-  
-  import torch
-  from torch import nn 
-  
-  class TinyVGG(nn.Module):
-
-    def __init__(self, input_shape: int, hidden_units: int, output_shape: int) -> None:
-        super().__init__()
-        self.conv_block_1 = nn.Sequential(
-            nn.Conv2d(in_channels=input_shape, 
-                      out_channels=hidden_units, 
-                      kernel_size=3, 
-                      stride=1, 
-                      padding=0),  
-            nn.ReLU(),
-            nn.Conv2d(in_channels=hidden_units, 
-                      out_channels=hidden_units,
-                      kernel_size=3,
-                      stride=1,
-                      padding=0),
-            nn.ReLU(),
-            nn.MaxPool2d(kernel_size=2,
-                          stride=2)
-        )
-        self.conv_block_2 = nn.Sequential(
-            nn.Conv2d(hidden_units, hidden_units, kernel_size=3, padding=0),
-            nn.ReLU(),
-            nn.Conv2d(hidden_units, hidden_units, kernel_size=3, padding=0),
-            nn.ReLU(),
-            nn.MaxPool2d(2)
-        )
-        self.classifier = nn.Sequential(
-            nn.Flatten(),
-            # Where did this in_features shape come from? 
-            # It's because each layer of our network compresses and changes the shape of our inputs data.
-            nn.Linear(in_features=hidden_units*13*13,
-                      out_features=output_shape)
-        )
-  
-    def forward(self, x: torch.Tensor):
-        x = self.conv_block_1(x)
-        x = self.conv_block_2(x)
-        x = self.classifier(x)
-        return x
-        # return self.classifier(self.conv_block_2(self.conv_block_1(x))) # <- leverage the benefits of operator fusion
-  device = ""cuda"" if torch.cuda.is_available() else ""cpu"" 
-  model = TinyVGG(
-      input_shape=3,
-      hidden_units=hidden_units,
-      output_shape=len(class_names)
-  ).to(device)
-
-","1. I was able to create a pipeline using v2.0 method by containerising modules.
-I have breakdown each step.
-── Dockerfile
-├── build.sh
-├── src
-│   ├── gen.py
-│   ├── requirements.txt
-
-
-from kfp import dsl
-
-@dsl.container_component
-def model_train():
-    return dsl.ContainerSpec(image='mohitverma1688/model_train:v0.1.1', 
-                             command=['/bin/sh'], args=['-c' ,' python3 train.py --num_epochs 10  --batch_size 32 --hidden_units 10 --train_dir /data/train  --learning_rate 0.01 --test_dir /data/train --target_dir /data/models '])
-
-
-",Kubeflow
-"When I run commands ""kubectl get pod"" its showing error:
-E0529 06:34:46.052414   11652 memcache.go:265] couldn't get current server API group list: Get ""http://localhost:8080/api?timeout=32s"": dial tcp 127.0.0.1:8080: connect: connection refused. The connection to the server localhost:8080 was refused
-did you specify the right host or port?
-How do I resolve this error?
-I did the below steps on the master node.
-mkdir -p $HOME/.kube
-sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
-sudo chown $(id -u):$(id -g) $HOME/.kube/config
-export KUBECONFIG=/etc/kubernetes/admin.conf
-The master is working fine but the worker node is showing this error.
-","1. Copy the kubeconfig(config) file from master to worker node.  The config file will be there in ""$HOME/.kube/"".  More details of cluster access are explained in k8s documentation.
-",Kubernetes
-"I have an issue for a while now but I really want to solve it.
-It's very important for me to do my job and is frustrated me for a while now. Please help:
-We use aws-azure-login on our MAC in order to connect to our cloud environments.
-(see this link for reference: https://www.npmjs.com/package/aws-azure-login)
-As you can see in this documentation, there's an option to log in without inputting your password every time. This does not work for me even though I completed these steps several times.
-Also part of the main issue, I can't connect using the simple # aws-azure-login command in the cli,
-when i try the following happens:
-@JO-AVlocal ~> aws-azure-login
-Logging in with profile 'default'...
-Using AWS SAML endpoint XXXX
-? Password: [hidden]
-Open your Authenticator app, and enter the number shown to sign in.
-27
-Unable to recognize page state! A screenshot has been dumped to aws-azure-login-unrecognized-state.png. If this problem persists, try running with --mode=gui or --mode=debug
-
-(Attaching the image for reference)
-it reads ""Set up your device to get access""
-then follows ""<company_name> requires you to secure this device before you access <company_name> email, files, and data""
-now, it's worth mentioning that my computer is in fact secure and I have access to my email, company network and files. I can connect just fine to any secured resource.
-running in debug mode isn't helpful as when I add the debug flag it opens the gui interface, which does work for me. But my issue is with my aws-azure-login in the cli.
-When i run: @JO-AVlocal ~> aws-azure-login -p <profile_name> -m gui
-I connect just fine!
-I have found several places trying to solve this issue but nothing worked for me so far:
-#61
-going to https://mysignins.microsoft.com/security-info and changing my MFA did not help.
-I thought this had something to do with Chromium on some level because, looking at logs provided by IT, when I try to log in without the gui flag it seems my computer uses an old version of Chromium. But I tried completely deleting any version I have and installing only the newest one, and it did not help at all.
-General knowledge that might be relevant
-I use on my computer: anaconda3, node js, homebrew and fish shell.
-Troubleshooting steps taken:
-I re-installed chromium, anaconda3, aws-azure-login, node and npm.
-Tried to update, upgrade everything several times.
-tried to set a different path to the newest version of chromium
-Re-configure the aws login details and re-configure the MFA.
-This issue bothers me a lot. Please help.
-","1. I have been suffering the same issue for the past few months
-",Kubernetes
-"i have a problem with slurm every job i execute keeps pending
-and i dont know what to do (im new to the field)
-scontrol: show job
-JobId=484 JobName=Theileiria_project
-   UserId=dhamer(1037) GroupId=Bio-info(1001) MCS_label=N/A
-   Priority=4294901741 Nice=0 Account=(null) QOS=normal
-   JobState=PENDING Reason=BeginTime Dependency=(null)
-   Requeue=1 Restarts=481 BatchFlag=1 Reboot=0 ExitCode=0:0
-   RunTime=00:00:00 TimeLimit=01:00:00 TimeMin=N/A
-   SubmitTime=2022-04-19T08:47:58 EligibleTime=2022-04-19T08:49:59
-   AccrueTime=2022-04-19T08:49:59
-   StartTime=2022-04-19T08:49:59 EndTime=2022-04-19T09:49:59 Deadline=N/A
-   SuspendTime=None SecsPreSuspend=0 LastSchedEval=2022-04-19T08:47:58
-   Partition=defq AllocNode:Sid=omix:377206
-   ReqNodeList=(null) ExcNodeList=(null)
-   NodeList=(null)
-   BatchHost=omics001
-   NumNodes=1 NumCPUs=30 NumTasks=30 CPUs/Task=1 ReqB:S:C:T=0:0:*:*
-   TRES=cpu=30,mem=32G,node=1,billing=30
-   Socks/Node=* NtasksPerN:B:S:C=0:0:*:* CoreSpec=*
-   MinCPUsNode=1 MinMemoryNode=32G MinTmpDiskNode=0
-   Features=(null) DelayBoot=00:00:00
-   OverSubscribe=NO Contiguous=0 Licenses=(null) Network=(null)
-   Command=/home/dhamer/test.sh
-   WorkDir=/home/dhamer
-   StdErr=/home/dhamer/Theileiria_project.log
-   StdIn=/dev/null
-   StdOut=/home/dhamer/Theileiria_project.log
-   Power=
-
-Submission file:
-#!/bin/bash
-#SBATCH --job-name=serial_job_test 
-#SBATCH --mail-type=END,FAIL 
-#SBATCH --mail-user=test@gmail.com # Where to send mail 
-#SBATCH --ntasks=1 # Run on a single CPU 
-#SBATCH --mem=1gb # Job memory request 
-#SBATCH --time=00:05:00 # Time limit hrs:min:sec 
-#SBATCH --output=serial_test_%j.log # Standard output and error log 
-
-pwd; hostname; date
-module load python
-echo ""Running plot script on a single CPU core""
-python /data/training/SLURM/plot_template.py
-date
-
-","1. Reason=BeginTime in the scontrol output means (according to man squeue) that ""The job's earliest start time has not yet been reached."" This is usually because the queue is full, or your job has low priority in the queue.
-I would check with your systems administrators or your HPC helpdesk.
-By the way, the submission command in your comment doesn't match the scontrol output, since in the script you set the timelimit to 5 minutes, but the output indicates a timelimit of 1 hour.
-
-2. To check the running and pending jobs in the SLURM queue, you can run something like the following in the bash command:
-squeue --format=""%.18i %.9P %.30j %.8u %.8T %.10M %.9l %.6D %R"" --states=""PENDING,RUNNING""
-
-If you know the partition is named ""bigmem"" for example you can narrow down the list of jobs returned by entering the following into the command line:
-squeue --format=""%.18i %.9P %.30j %.8u %.8T %.10M %.9l %.6D %R"" --partition=""bigmem"" --states=""PENDING,RUNNING""
-
-Which will return something like:
-             JOBID PARTITION        NAME     USER    STATE       TIME TIME_LIMI  NODES NODELIST(REASON)
-           2714947    bigmem    step2.sh    user1  PENDING       0:00  12:00:00      1 (Resources)
-           2206052    bigmem    mcca_jhs    user2  RUNNING 8-22:52:18 11-00:00:00    1 t0601
-
-",Slurm
-"I have a cluster of 6 compute nodes and 1 master node for academic research purposes. I am trying to test my cluster and make sure that they can complete an assortment of sbatch jobs submmited. I want to use the sbcast command to copy over a file from master to the compute node, and then eventually execute that copied file.
-I am running sbatch test_job, here is my bash script:
-#!/bin/bash
-
-#SBATCH --job-name=totaltestjob
-#SBATCH --output=newoutput.out
-#SBATCH --error=error1.txt
-#SBATCH --exclusive
-#SBATCH --nodes=1
-
-
-sbcast pscript.py  ~
-python3 pscript.py
-
-However after submitting the job, the error1.txt file on my compute node reads:
-sbcast: error: Can't open 'data.txt': No such file or directory. 
-
-I have tried giving the pscript.py file 777 permissions. I have tried multiple paths for the source and destination parameters, like home/user/pscript.py. Nothing seems to get rid of the error message above. The cluster is up and the nodes are commmunicating with each other, and I have successfully submitted sbatch script without the sbcast command. Open to any suggestions.
-Thank you for your time.
-I am back a year later trying to solve this issue and unfortunately, the given solution does not solve it.
-My compute node is unable to access the file on the master node that I am trying to transfer over. For example:
-#!/bin/bash
-#SBATCH --job-name=sbcastjob
-#SBATCH --output=sbcast_%j.out
-#SBATCH --error=sbcasterror_%j.txt
-
-sbcast test.py  ~/test.py
-srun ~/test.py
-
-When executed with the line
-sbatch --nodes=1 sbcasttestscript.sh 
-Submitted batch job 102
-Will give the error:
-sbcast: error: Can't open test.py: No such file or  directory srun: error: temple-comp01: task 0: Exited with exit code 2 slurmstepd: error: execve(): /home/user/test.py: No such file  or directory
-This happens regardless of how I specify the path. I am assuming this issue is larger, such as a configuration issue and I am hoping to find a solution. I also may be misunderstanding how slurm works. I am executing my sbatch command with the bash script on my master node, in hopes of the python script being transferred to my compute node and then executed.
-","1. I would try the name of the file even in destination e.g
-sbcast pscript.py  ~/pscript.py
-
-Hope it helps
-",Slurm
-"I am attempting to get RabbitMQ to run in Nomad using Docker. However I have stumbled into some problems related to permissions. When attempting to run the Job in Nomad I either get this error:
-sed: preserving permissions for ‘/etc/rabbitmq/sedpR1m3w’: Operation not permitted
-sed: preserving permissions for ‘/etc/rabbitmq/sedEc0Idz’: Operation not permitted
-/usr/local/bin/docker-entrypoint.sh: line 250: /etc/rabbitmq/rabbitmq.conf: Permission denied
-touch: cannot touch '/etc/rabbitmq/rabbitmq.conf': Permission denied
-
-WARNING: '/etc/rabbitmq/rabbitmq.conf' is not writable, but environment variables have been provided which request that we write to it
-  We have copied it to '/tmp/rabbitmq.conf' so it can be amended to work around the problem, but it is recommended that the read-only source file should be modified and the environment variables removed instead.
-
-/usr/local/bin/docker-entrypoint.sh: line 250: /tmp/rabbitmq.conf: Permission denied
-
-
-or this error:
-chmod: changing permissions of '/var/lib/rabbitmq/.erlang.cookie': Operation not permitted
-
-I have set up volumes so that RabbitMQ data can be preserved. These volumes point to an SMB share on a Windows Server box elsewhere on the network.
-I have added the following to /etc/fstab for auto mounting:
-//DC02/Nomad /mnt/winshare cifs credentials=/home/linuxnomad/.smbcreds,uid=995,gid=993,file_mode=0777,dir_mode=0777 0 0
-
-This is what the Job spec looks like:
-job ""rabbitmq03"" {
-  datacenters = [""techtest""]
-  type        = ""service""
-  
-  constraint {
-    attribute = ""${attr.kernel.name}""
-    value     = ""linux""
-  }
-
-    constraint {
-    attribute = ""${attr.unique.hostname}""
-    value     = ""nomadlinux03""
-  }
-  
-  group ""rabbitmq"" {
-    network {
-        mode = ""cni/prod""
-      hostname = ""RabbitMqNOMAD03""
-    }
-    
-    service {
-      name         = ""${JOB}""
-      port         = 5672
-      address_mode = ""alloc""
-      check {
-        type         = ""http""
-            port         = 15672
-        path         = ""/api/health/checks/local-alarms""
-        interval     = ""3s""
-        timeout      = ""2s""
-        address_mode = ""alloc""
-        header {
-          Authorization = [""Basic Z3Vlc3Q6Z3Vlc3Q=""]
-        }
-      }
-    }
-
-    task ""rabbitmq"" {
-      driver = ""docker""
-      
-      config {
-        privileged     = false
-        image          = ""rabbitmq:3.8.12-management""
-        auth_soft_fail = true
-        
-        volumes = [
-          ""/mnt/winshare/RabbitMQ03/data:/var/lib/rabbitmq/mnesia"",
-          ""/mnt/winshare/RabbitMQ03/config:/etc/rabbitmq"",
-          ""/mnt/winshare/RabbitMQ03/log:/var/log/rabbitmq""
-        ]
-      }
-
-      env {
-        HOSTNAME = ""RabbitMqNOMAD""
-        RABBITMQ_DEFAULT_USER = ""guest""
-        RABBITMQ_DEFAULT_PASS = ""guest""
-        RABBITMQ_ERLANG_COOKIE = ""testsecret""    
-      }
-
-      resources {
-        cpu    = 1001
-        memory = 6144
-      }
-    }
-  }
-}
-
-
-I did make sure to mount the SMB share with the Nomad user rights, so my expectation would be that it's fine, but perhaps I'm missing something?
-","1. 
-I did make sure to mount the SMB share with the Nomad user rights
-
-You are running a docker container. Nomad user rights are irrelevant, as long as it can access docker daemon.
-
-perhaps I'm missing something?
-
-Samba and CIFS have their own permissions, and you are forcing uid=995,gid=993,file_mode=0777,dir_mode=0777.
-Research how Docker containers virtualize users. Your error is unrelated to Nomad. Research Samba permissions and the specific Docker container and application you are running, i.e. the rabbitmq:3.8.12-management Docker container, for what permissions it expects. Additionally, research the standard Linux file permission model.
-(Also, I think, bind-mounting a subdirectory of CIFS mount might not work as expected, but this is a guess.)
-The container changes to rabbitmq user on entrypoint https://github.com/docker-library/rabbitmq/blob/master/docker-entrypoint.sh#L10 .
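-To see which numeric uid/gid that rabbitmq user maps to inside the image (so you can mount the CIFS share with matching uid=/gid= options instead of 995/993), one quick check is for example:
-docker run --rm --entrypoint id rabbitmq:3.8.12-management rabbitmq
-The uid/gid it prints is what the mount options would typically need to match so that the data directories appear owned by the container's rabbitmq user.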
-",Nomad
-"I have the following issue and I am not sure if I am doing something not right or it is not working as expected. 
-
-I have a consul cluster with ACL enabled.
-ACL default policy is set to DENY (""acl_default_policy"": ""deny"",)
-For now I am always using the main management CONSUL token for communication. 
-I also have VAULT and NOMAD configured with the management token and ""vault.service.consul"" and ""nomad.service.consul"" are registering in consul
-I specifically configured NOMAD with the consul stanza with the consul management token to be able to communicate with consul and register itself. 
-
-consul {
-      address = ""127.0.0.1:8500""
-      token   = ""management token""
-    }
-I am using NOMAD to schedule Docker containers.  Those docker containers need to populate configuration files from CONSUL KV store and I made that work with consul template (when no ACL is enabled). 
-Now my issue is that when I have  ACL enabled in CONSUL - the docker containers are NOT able to get the values from CONSUL KV store with 403 errors (permission deny) because of the ACL. I thought that since I have configured the consul stanza in NOMAD like:
-consul {
-  address = ""127.0.0.1:8500""
-  token   = ""management token""
-}
-
-all the jobs scheduled with NOMAD will be able to use that management token and the Docker containers will be able to communicate with CONSUL KV ?!
-If I place the management token as a Docker environment variable in the NOMAD job description - than it works:
-env {
-      ""CONSUL_HTTP_TOKEN"" = ""management token""
-    }
-
-However I do not want to place that management token in the job description, as it will be checked into git. 
-Am I doing something wrong or this simply does not work like that ?
-Thank you in advance. 
-","1. Why would setting consul token in the Nomad service configuration file export the token automatically for jobs? That would be a security hole, any job could wipe all consul configuration.
-There are proper solution for managing secrets of any kind. The most advanced is to integrate with Hashicorp vault https://developer.hashicorp.com/nomad/docs/integrations/vault#dynamic-configuration-with-secrets then you can use templates to insert environment variables of secrets stored in vault.
-A simple solution is to store the token in Nomad variables and then use templates to set environment variable from it. By using the proper path to Nomad variable, you can restrict job access to it. See https://developer.hashicorp.com/nomad/docs/concepts/variables .
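-For illustration, a minimal sketch of the Nomad-variables approach inside the task (the variable path and the consul_token key are assumptions, not something from your job):
-template {
-  data        = <<EOF
-{{ with nomadVar ""nomad/jobs/my-job"" }}CONSUL_HTTP_TOKEN={{ .consul_token }}{{ end }}
-EOF
-  destination = ""secrets/consul.env""
-  env         = true
-}
-You would first create the variable with something like nomad var put nomad/jobs/my-job consul_token=<token-with-a-restricted-policy>, so that only a token scoped to the KV paths the containers actually need ever reaches the task.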
-Update: with Nomad 1.7 we now have identities that jobs can request, and that identity can request a consul token for the job. See https://developer.hashicorp.com/nomad/docs/concepts/workload-identity#workload-identity-for-consul-and-vault .
-",Nomad
-"Is there a way to make templates optional in Nomad job files?
-If test.data does not exist in Consul KV I want the job to simply ignore the template.
-Below example using keyOrDefault nearly does what I want but still creates an empty file in my container ""testfile"".
-I don't want any file to be created at all if key does not exist.
-  template {
-    destination = ""local/testfile""
-    perms       = ""100""
-    data        = ""{{ keyOrDefault \""service/foo/bar/test.data\"" \""\"" }}""
-  }
-
-If possible I would like to include the entire template in an if statement.
-","1. 
-Is there a way to make templates optional in Nomad job files?
-
-No.
-",Nomad
-"Metadata
-
-Nomad v1.7.2
-OS: Windows
-Scheduler: service
-Driver: exec / raw_exec
-
-Architecture
-I have 10 windows servers running nomad client nodes.
-I want to run 3 instances of a stateful service using Nomad. Each instance (01, 02, 03) writes checkpoints and data to filesystem (eg. /tmp/instance01, /tmp/instance02, /tmp/instance03). When the instance restarts, it will continue from the latest checkpoint. Each instance can be allocated to any host. However, each instance should be configured to use the same directory as the previously failed instance.
-So basically:
-
-01 <--> /tmp/instance01
-02 <--> /tmp/instance02
-03 <--> /tmp/instance03
-
-For simplicity, assume these 3 directories are created in NAS, and the NAS is mounted on all servers running nomad client node. Also assume all groups / tasks has RW access to these 3 directories.
-Issue
-There are a few ways I can configure the directory that the service uses to read/write state data:
-
-Command Line Argument
-Configuration File, via environment variable
-Template block, via Nomad template interpolation
-
-How can I pass a different value to the same task running in different instances of a group?
-i.e. How do I give each instance a unique and consistent tag, so it can be reliably identified?
-What I’ve considered
-
-Parameterized Block --> Does not work for Service jobs
-Template Block -> Every task instance will receive the same data
-Env Var --> Every task instance will receive the same data
-Meta Block --> Every task instance will receive the same data
-Variable Block --> Every task instance will receive the same data
-Dynamic Block --> Possible solution, but this is essentially repeating Group Block
-Repeating Group Block --> Trying to avoid this
-Multiple Job Spec --> Trying to avoid this
-
-It seems like it's possible to loop within a nomad task, but not across tasks.
-Any solution appreciated, even hacky ones. TIA!
-Job Spec
-job ""app-write"" {
-  datacenters = [""dc1""]
-  type = ""service""
-  node_pool = ""default""
-
-  # Write
-  group ""app-write"" {
-    count = 3
-    
-    # /tmp/instance01
-    volume ""app01"" {
-      type = ""host""
-      read_only = false
-      source = ""tmpapp01""
-    }
-
-    # /tmp/instance02
-    volume ""app02"" {
-      type = ""host""
-      read_only = false
-      source = ""tmpapp02""
-    }
-
-    # /tmp/instance03
-    volume ""app03"" {
-      type = ""host""
-      read_only = false
-      source = ""tmpapp03""
-    }
-    
-    network {
-      port ""http"" { }  // 3100
-      port ""grpc"" { } // 9095
-      port ""gossip"" { }  // 7946
-      # port ""lb"" { static = 8080 }
-    }
-
-    service {
-      name = ""app-write""
-      address_mode = ""host""
-      port = ""http""
-      tags = [""http""]
-      provider = ""nomad""
-    }
-
-    service {
-      name = ""app-write""
-      address_mode = ""host""
-      port = ""grpc""
-      tags = [""grpc""]
-      provider = ""nomad""
-    }
-
-    service {
-      name = ""app-write""
-      address_mode = ""host""
-      port = ""gossip""
-      tags = [""gossip""]
-      provider = ""nomad""
-    }
-
-    task ""app-write"" {
-      driver = ""exec"" # or ""raw_exec""
-      
-      volume_mount {
-        volume = ""app01""
-        destination = ""/tmp/app01""
-        read_only = false
-      }
-
-      volume_mount {
-        volume = ""app02""
-        destination = ""/tmp/app02""
-        read_only = false
-      }
-
-      volume_mount {
-        volume = ""app03""
-        destination = ""/tmp/app03""
-        read_only = false
-      }
-
-      config {
-        command = ""/usr/bin/app""
-        args = [
-          ""-config.file=local/app/config.yaml"",
-          ""-working-directory=/tmp/app01"" # <-- Need this to change for each instance
-          ]
-      }
-
-      resources {
-        cpu = 100
-        memory = 128
-      }
-
-      # Can change this for each instance too
-      template {
-        source = ""/etc/app/config.yaml.tpl""
-        destination = ""local/app/config.yaml""
-        change_mode = ""restart"" // restart
-      }
-    }
-  }
-}
-
-","1. 
-How do I configure task in group to read a different volume for different instances?
-
-Consider just writing the 3 different groups. They are different.
-job ""app-write"" {
-  group ""app-write1"" {
-    volume ""app01"" {
-    }
-  }
-  group ""app-write2"" {
-    volume ""app02"" {
-    }
-  }
-  group ""app-write3"" {
-    volume ""app03"" {
-    }
-  }
-}
-
-
-""-working-directory=/tmp/app01"" # <-- Need this to change for each instance
-
-
-That is rather simple. You would need a shell to calculate it.
-   command = ""sh""
-   args = [ ""-xc"", <<EOF
-      idx=$(printf ""%02d"" ""$((NOMAD_ALLOC_INDEX + 1))"")
-      /usr/bin/app \
-         -config.file=local/app/config.yaml \
-         -working-directory=/tmp/app$idx
-     EOF
-  ]
-
-",Nomad
-"Please, I am installing Minione, Openneubla on Ubuntu on my machine, but I encounter a problem each time at the step ""Setting initial password for the current user and oneadmin FAILED."" Can someone help me, please?
-chmod +x minione 
-./minione --force 
-
-","1. Looking at the function in the minione script that reports the error:
-set_init_password() {
-    [[ ! -d $HOME/.one ]] && { mkdir ""$HOME""/.one || return 1; }
-    echo ""oneadmin:$PASSWORD"" >""$HOME""/.one/one_auth || return 1
-    echo ""oneadmin:$PASSWORD"" >/var/lib/one/.one/one_auth
-}
-
-This function is trying the following:
-
-Create the .one directory, which stores the user credentials, in $HOME (your current home path) and in the /var/lib/one path (OpenNebula's home path).
-Create the one_auth file with the credentials in each directory; this way both oneadmin and your current user will be able to access OpenNebula as the oneadmin user (the main OpenNebula admin user).
-
-So it looks like a permission issue. I recommend executing the script with sudo privileges as indicated in the README file of the repository.
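-For example (run from the directory where you downloaded the script):
-sudo bash minione --force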
-",OpenNebula
-"I configured ovs-br0 openvswitch from physical port ens1f1, but then after restarting the network, I encountered an error Failed to start LSB: Bring up / down networking.
-enter image description here
-ip add
-enter image description here
-Please. Can you help me!
-","1. Your server looks to be an RHEL/CentOS 7 one. Check if the file /etc/sysconfig/network exists on the server, if not just simply create one with touch command and try restarting or checking status post that.
-touch /etc/sysconfig/network
-
-
-2. I had a similar issue after rebooting my CentOS 7.
-First I needed to see what was going on:
-journalctl -u network.service
-
-For me it was:
-Bringing up interface enp2s0f0:  Error: Connection activation failed: No suitable device found for this connection (device eth0 not available because profile is not compatible with device (permanent MAC address doesn't match)).
-
-Then I went to /etc/sysconfig/network-scripts and noticed some leftover files like ifcfg-*; I deleted them and left only ifcfg-eth0.
-Then I tried again
-systemctl restart network
-
-and it started:
- network.service - LSB: Bring up/down networking
-   Loaded: loaded (/etc/rc.d/init.d/network; bad; vendor preset: disabled)
-   Active: active (exited) since Fri 2023-11-03 00:01:11 CET; 10min ago
-     Docs: man:systemd-sysv-generator(8)
-
-",OpenNebula
-"I'm creating a new VM and get this error. What should I do?
-
-Mon Jan 21 13:06:41 2019 [Z0][ReM][D]: Req:2080 UID:0 one.vmpool.info
-  invoked , -2, 0, -200, -1 Mon Jan 21 13:06:41 2019 [Z0][ReM][D]:
-  Req:2080 UID:0 one.vmpool.info result SUCCESS,
-  ""69<..."" Mon Jan 21 13:06:41 2019 [Z0][ReM][D]:
-  Req:8720 UID:0 one.user.info invoked , 0 Mon Jan 21 13:06:41 2019
-  [Z0][ReM][D]: Req:8720 UID:0 one.user.info result SUCCESS,
-  ""0
-  
-  Mon Jan 21 13:06:43 2019 [Z0][VMM][D]: Message received: LOG I 103
-  Successfully execute network driver operation: pre.
-Mon Jan 21 13:06:44 2019 [Z0][VMM][D]: Message received: LOG I 103
-  Command execution fail: cat << EOT | /var/tmp/one/vmm/kvm/deploy
-  '/var/lib/one//datastores/101/103/deployment.0'
-  'fast.sense.dcc.ufmg.br' 103 fast.sense.dcc.ufmg.br
-Mon Jan 21 13:06:44 2019 [Z0][VMM][D]: Message received: LOG I 103
-  error: Failed to create domain from
-  /var/lib/one//datastores/101/103/deployment.0
-Mon Jan 21 13:06:44 2019 [Z0][VMM][D]: Message received: LOG I 103
-  error: internal error: process exited while connecting to monitor:
-  2019-01-21T15:06:44.029263Z qemu-system-x86_64: -drive
-  file=/var/lib/one//datastores/101/103/disk.1,format=qcow2,if=none,id=drive-virtio-disk0,cache=none:
-  Could not open '/var/lib/one//datastores/101/103/disk.1': Permission
-  denied
-Mon Jan 21 13:06:44 2019 [Z0][VMM][D]: Message received: LOG E 103
-  Could not create domain from
-  /var/lib/one//datastores/101/103/deployment.0
-Mon Jan 21 13:06:44 2019 [Z0][VMM][D]: Message received: LOG I 103
-  ExitCode: 255
-Mon Jan 21 13:06:44 2019 [Z0][VMM][D]: Message received: LOG I 103
-  Failed to execute virtualization driver operation: deploy.
-Mon Jan 21 13:06:44 2019 [Z0][VMM][D]: Message received: DEPLOY
-  FAILURE 103 Could not create domain from
-  /var/lib/one//datastores/101/103/deployment.0
-
-","1. Resolved!
-I added oneadmin to sudo group:
-sudo adduser oneadmin sudo
-
-And, added: 
-user = ""root""
-group = ""root""
-dynamic_ownership = 0
-
-to /etc/libvirt/qemu.conf
-
-2. Installing the opennebula-node-kvm package on the VM host server (example install commands after the list below) would take care of all of this, i.e.:
-
-creating oneadmin user,
-
-defining required sudo privileges to oneadmin user
-
-creating the /etc/libvirt/qemu.conf with the required configuration
-user = ""root""
-group = ""root""
-dynamic_ownership = 0
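-For example, on a CentOS/RHEL host and assuming the OpenNebula package repository is already configured, that would be:
-sudo yum install -y opennebula-node-kvm
-sudo systemctl restart libvirtd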
-
-
-
-3. AFAIK, this directory /var/lib/one/ needs to be owned by oneadmin.
-You don't need to add the oneadmin user to the sudo group.
-Just make sure that the /var/lib/one/ directory is owned by oneadmin, and you are ready to go.
-You can check the ownership of the /var/lib/one/ directory with the following command:
-ls -ld /var/lib/one/
-drwxr-x--- 8 oneadmin oneadmin 253 Jan 12 2023 /var/lib/one/
-
-",OpenNebula
-"I am new to opennebula. Trying xml-rpc apis with postman but it says timeout connect could not be made. On the other hand when tried same url on browser it says “csrftoken”. can someone please share me a tutorial or any resources that help me fire an api with postman.
-","1. It looks like you are having issues with the XML-RPC APIs in OpenNebula. The ""csrftoken"" message you received in your browser comes from Sunstone. You need to interact with the 2616 port instead. For more information, I recommend checking out the OpenNebula documentation at https://docs.opennebula.io/6.6/integration_and_development/system_interfaces/api.html. In case you need some more help from the community, feel free to post your query in the community forum here: https://forum.opennebula.io/. Good luck!
-",OpenNebula
-"I have been trying to use Amazon Web services, specially EC2 and RDS. Nowadays most CMP (Cloud Management Platform) like Eucalyptus, OpenNebula, OpenStack, Nimbus and CloudStack all support Ec2 to a certain level, some do it better than others. 
-
-But when it comes to Amazon's RDS service I just can't seem to find any information. It's like no CMP supports it. On my research I came across a website that suggests the use of third-party software like HybridFox, RightScale enStratus to have an RDS like support but I don't get it.
-
-Can someone tell me if Eucalyptus, OpenNebula, OpenStack, Nimbus and CloudStack support RDS?
-If not, then how I can I use third-party software to access Amazon's RDS service using the previously mentioned CMPs?
-","1. RDS is a proprietary technology from Amazon. The equivalent Database as a Service in OpenStack is project Red Dwarf - https://wiki.openstack.org/wiki/Reddwarf  which is implemented as Cloud Databases at Rackspace (for MySQL)
-",OpenNebula
-"I plan to run Prefect 2 flows on a cron schedule - but is there any way to access, at runtime, the timestamp at which this particular run was scheduled?
-For example, if the flow is scheduled to run nightly at 2am, I want to use today's 2am timestamp to calculate the start and end boundaries for the data I need to retrieve and process in my run:
-
-Scheduled time 15 May 2024, 02:00
-Zero the hours to get the preceding midnight 15 May 2024, 00:00
-Subtract 1 day to get the midnight before that 14 May 2024, 00:00
-query the database for data between those two midnights
-process exactly 1 day's worth of data
-
-Importantly, if the flow fails and is retried at 3am or 4am, this clock time shift shouldn't affect the above calculation because it's the originally scheduled time that matters. We should still query for data between the previous midnights.
-Basing these calculations on the current clock time, rather than the scheduled time, would work in simple cases, but runs into problems if the actual execution is delayed until substantially after the scheduled execution time, or we are doing backfill runs etc. Especially if running much more frequently than nightly, so retried runs could overlap with new runs.
-But is that scheduled time accessible within the flow (as it is in similar engines such as Kestra, using execution.startTime)?
-I have tried accessing the run context:
-...
-from prefect.context import get_run_context
-
-@task(log_prints=True)
-def do_stuff():
-    ... do stuff ...
-    task_run_context = get_run_context()
-    task_run = task_run_context.task_run
-    print(f""Expected start = {task_run.expected_start_time}"")
-    print(f""Actual start = {task_run.start_time}"")
-
-
-But the logged ""expected start time"" (for a cron schedule running every minute) is not an exact on-the-minute timestamp; when I introduced delays and retries it was sometimes offset by multiple seconds, so is clearly not the time scheduled by cron pattern, which should be precise on-the-minute.
-","1. The approach tried in the question, of using the task context, is almost correct.
-The same function get_run_context() can be called within a function annotated as a top-level @flow rather than a @task, but here it returns a FlowRunContext rather than a TaskRunContext.
-The attributes within these contexts relating to start time are named the same, but the expected_start_time in the FlowRunContext appears to be the scheduled start time of the Flow, and has an exact to-the-minute timestamp regardless of retries.
-@flow(name=""My Flow"")
-def my_flow():
-    flow_run_context = get_run_context()
-    flow_run = flow_run_context.flow_run
-    expected_start = flow_run.expected_start_time
-    # perform calculations, pass results to tasks as required, e.g.
-    my_task(repo, expected_start)
-
-
-However, one difficulty is that the expected_start_time is also populated for manual (non-scheduled) runs, whereas one might expect it to be None in this situation. Manual runs may therefore need additional parameters to indicate that there is no scheduled time, and to supply explicit time bounds rather than attempting to calculate them from the start time.
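-A hedged sketch of that pattern (the window_end_override parameter is an invention for illustration, not part of Prefect):
-from datetime import datetime, timedelta
-from typing import Optional
-
-from prefect import flow
-from prefect.context import get_run_context
-
-@flow(name=""My Flow"")
-def my_flow(window_end_override: Optional[datetime] = None):
-    # Manual/backfill runs pass the window end explicitly; scheduled runs derive it.
-    if window_end_override is not None:
-        window_end = window_end_override
-    else:
-        scheduled = get_run_context().flow_run.expected_start_time
-        window_end = scheduled.replace(hour=0, minute=0, second=0, microsecond=0)
-    window_start = window_end - timedelta(days=1)
-    # ... query for data between window_start and window_end ...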
-",Prefect
-"I have configured Prefect server on our Windows VM so that it runs on the machine's IP address. Here's how I did it.
-env\Scripts\activate.bat 
-(env) prefect config set PREFECT_SERVER_API_HOST='SERVER-IP-ADDRESS'
-(env) prefect server start
-
- ___ ___ ___ ___ ___ ___ _____ 
-| _ \ _ \ __| __| __/ __|_   _| 
-|  _/   / _|| _|| _| (__  | |  
-|_| |_|_\___|_| |___\___| |_|  
-
-Configure Prefect to communicate with the server with:
-
-    prefect config set PREFECT_API_URL=http://SERVER-IP-ADDRESS:4200/api
-
-View the API reference documentation at http://SERVER-IP-ADDRESS:4200/docs
-
-Check out the dashboard at http://SERVER-IP-ADDRESS:4200
-
-I used NSSM to run the server as a service on Windows. I used a batch file for this. Here's how the batch file looks like.
-cd C:\Users\username\Documents\Projects\prefect-server\env\Scripts
-call activate.bat
-prefect.exe server start
-
-I ran this batch file on Command Prompt just to test, and it gives the same output as above. It runs on the machine's IP address.
-I created the service using nssm install with the batch file as parameter. When I run the service either using nssm start or from Windows Services, Prefect server runs on the default IP address 127.0.0.1:4200 instead.
-What did I miss here?
-","1. I found the solution.
-Before posting this question, I had a hunch that nssm runs Prefect with a different environment, regardless of what you set up before installing the service. nssm has an option to set environment variables.
-
-I had to re-install the service. On the Environment tab of the nssm service editor, I added PREFECT_SERVER_API_HOST=SERVER-IP-ADDRESS. I ran the service, and Prefect now runs on the machine's IP address.
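-Alternatively, the same thing can be done from the command line via nssm's AppEnvironmentExtra parameter (the service name prefect-server is just an assumption; use whatever name you installed it under):
-nssm set prefect-server AppEnvironmentExtra PREFECT_SERVER_API_HOST=SERVER-IP-ADDRESS
-nssm restart prefect-server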
-",Prefect
-"I'm trying to deploy my flow but I'don't know what I should do to completely deploy it (serverless).
-I'm using the free tier of Prefect Cloud and I have create a storage and process block.
-The step I have done :
-
-Build deployment
-
-$ prefect deployment build -n reporting_ff_dev-deployment flow.py:my_flow
-
-
-Apply configuration
-
-$ prefect deployment apply <file.yaml>
-
-
-Create block
-
-from prefect.filesystems import LocalFileSystem
-from prefect.infrastructure import Process
-
-#STORAGE
-my_storage_block = LocalFileSystem(
-    basepath='~/ff_dev'
-)
-my_storage_block.save(
-    name='ff-dev-storage-block',
-    overwrite=True)
-
-#INFRA
-my_process_infra = Process(
-    working_dir='~/_ff_dev_work',
-)
-my_process_infra.save(
-    name='ff-dev-process-infra',
-    overwrite=True)
-
-
-deploy block
-
-$ prefect deployment build -n <name>  -sb <storage_name> -ib <infra_name>  <entry_point.yml> -a
-
-I know that Prefect Cloud is a control system rather than a storage medium, but as I understand it, a storage block -> stores the code and a process block -> runs the code. What is the next step to run the flow without a local agent?
-","1. Where are you looking for the code to be executed from?
-With a deployment registered, you can execute the following to spawn a flow run. A deployment just describes how and where -
-prefect deployment run /my_flow
-
-2. As of at least Dec 2023, Prefect has in public beta the ability to deploy on their own hosted worker pool: https://docs.prefect.io/latest/guides/managed-execution/ The free plan should give you 10h / month. When deploying in code via deploy(), don't forget to add any package dependencies as noted in their help page, e.g.
-my_flow.deploy(
-        name=""test-managed-flow"",
-        work_pool_name=""my-managed-pool"",
-        job_variables={""pip_packages"": [""pandas"", ""prefect-aws""]}
-    )
-
-",Prefect
-"I have a ubuntu 16.04 running in virtual box. I installed Kubernetes on it as a single node using kubeadm.
-But coredns pods are in Crashloopbackoff state.
-All other pods are running.
-Single interface(enp0s3) - Bridge Network
-Applied calico using
-kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
-output on kubectl describe pod: 
- Type     Reason     Age                  From               Message
-  ----     ------     ----                 ----               -------
-  Normal   Scheduled  41m                  default-scheduler  Successfully assigned kube-system/coredns-66bff467f8-dxzq7 to kube
-  Normal   Pulled     39m (x5 over 41m)    kubelet, kube      Container image ""k8s.gcr.io/coredns:1.6.7"" already present on machine
-  Normal   Created    39m (x5 over 41m)    kubelet, kube      Created container coredns
-  Normal   Started    39m (x5 over 41m)    kubelet, kube      Started container coredns
-  Warning  BackOff    87s (x194 over 41m)  kubelet, kube      Back-off restarting failed container
-
-","1. I did a kubectl logs <coredns-pod> and found error logs below and looked in the mentioned link
-As per suggestion, added resolv.conf = /etc/resolv.conf at the end of /etc/kubernetes/kubelet/conf.yaml and recreated the pod.
-kubectl logs coredns-66bff467f8-dxzq7 -n kube-system 
-.:53 [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7 CoreDNS-1.6.7 linux/amd64, go1.13.6, da7f65b [FATAL] plugin/loop: Loop (127.0.0.1:34536 -> :53) detected for zone ""."", see coredns.io/plugins/loop#troubleshooting. Query: ""HINFO 8322382447049308542.5528484581440387393."" 
-root@kube:/home/kube# 
-
-
-2. Comment out the line below in /etc/resolv.conf (host machine) and delete the coredns pods in the kube-system namespace.
-The new pods came up in Running state :)
-
-#nameserver 127.0.1.1
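-To recreate the coredns pods afterwards, something like this should work (the label selector assumes a kubeadm-style install, where coredns keeps the k8s-app=kube-dns label):
-kubectl -n kube-system delete pod -l k8s-app=kube-dns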
-
-",CoreDNS
-"I use kubernetes v12, my system is ubuntu 16.
-I use the followed command to create DNS resource.
-wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed
-
-wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/deploy.sh
-bash deploy.sh -i 10.32.0.10 -r ""10.32.0.0/24"" -s -t coredns.yaml.sed | kubectl apply -f -
-
-After creating the coredns resources, I checked their status.
-
-check coredns service
-
-root@master:~# kubectl get svc -n kube-system
-NAME           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
-calico-typha   ClusterIP   10.32.0.10   <none>        5473/TCP   13h
-
-
-check coreDNS pod endpoints
-
-root@master:~# kubectl get ep -n kube-system
-NAME                      ENDPOINTS   AGE
-calico-typha              <none>      13h
-kube-controller-manager   <none>      18d
-kube-scheduler            <none>      18d
-
-
-My DNS config:
-
-root@master:~# cat /etc/resolv.conf
-nameserver 183.60.83.19
-nameserver 183.60.82.98
-
-
-Check CoreDNS pod logs
-
-root@master:~# kubectl get po -n kube-system | grep coredns-7bbd44c489-5thlj
-coredns-7bbd44c489-5thlj   1/1     Running   0          13h
-root@master:~#
-root@master:~# kubectl logs -n kube-system pod/coredns-7bbd44c489-5thlj
-.:53
-2019-03-16T01:37:14.661Z [INFO] CoreDNS-1.2.6
-2019-03-16T01:37:14.661Z [INFO] linux/amd64, go1.11.2, 756749c
-CoreDNS-1.2.6
-linux/amd64, go1.11.2, 756749c
- [INFO] plugin/reload: Running configuration MD5 = 2e2180a5eeb3ebf92a5100ab081a6381
- [ERROR] plugin/errors: 2 526217177044940556.1623766979909084596. HINFO: unreachable backend: read udp 10.200.0.93:45913->183.60.83.19:53: i/o timeout
- [ERROR] plugin/errors: 2 526217177044940556.1623766979909084596. HINFO: unreachable backend: read udp 10.200.0.93:42500->183.60.82.98:53: i/o timeout
- [ERROR] plugin/errors: 2 526217177044940556.1623766979909084596. HINFO: unreachable backend: read udp 10.200.0.93:48341->183.60.82.98:53: i/o timeout
- [ERROR] plugin/errors: 2 526217177044940556.1623766979909084596. HINFO: unreachable backend: read udp 10.200.0.93:33007->183.60.83.19:53: i/o timeout
- [ERROR] plugin/errors: 2 526217177044940556.1623766979909084596. HINFO: unreachable backend: read udp 10.200.0.93:52968->183.60.82.98:53: i/o timeout
- [ERROR] plugin/errors: 2 526217177044940556.1623766979909084596. HINFO: unreachable backend: read udp 10.200.0.93:48992->183.60.82.98:53: i/o timeout
- [ERROR] plugin/errors: 2 526217177044940556.1623766979909084596. HINFO: unreachable backend: read udp 10.200.0.93:35016->183.60.83.19:53: i/o timeout
- [ERROR] plugin/errors: 2 526217177044940556.1623766979909084596. HINFO: unreachable backend: read udp 10.200.0.93:58058->183.60.82.98:53: i/o timeout
- [ERROR] plugin/errors: 2 526217177044940556.1623766979909084596. HINFO: unreachable backend: read udp 10.200.0.93:51709->183.60.83.19:53: i/o timeout
- [ERROR] plugin/errors: 2 526217177044940556.1623766979909084596. HINFO: unreachable backend: read udp 10.200.0.93:53889->183.60.82.98:53: i/o timeout
-root@master:~#
-
-I found the CoreDNS pod IP cannot connect to the node's DNS server IP address.
-","1. You should check calico firewall policy if it block internet access from pod. 
-Another idea that you need to check which mode calico using: ipip / nat out going
-
-2. You are lacking the kube-dns service.
-The input variable -i in deploy.sh sets the IP for the kube-dns service, and in your example 10.32.0.10 is already assigned to calico-typha, so you need to choose a different IP.
-Moreover, it should be in a valid range, but kubectl will complain if it isn't.
-You can always check the range by running kubectl cluster-info dump | grep service-cluster-ip-range.
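-For example, re-running the deployment with a free address from the same service range (10.32.0.53 here is only an assumption; pick any unused IP in your range):
-bash deploy.sh -i 10.32.0.53 -r ""10.32.0.0/24"" -s -t coredns.yaml.sed | kubectl apply -f -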
-
-3. Upon seeing these coredns issues, I was thinking this was a coredns/dns/resolv.conf issue. But I was only able to find a solution when I noticed that my pods all seemed to have no internet access, and I began thinking more than kube-proxy was involved.
-I turned to iptables to see if anything was blocking access and to view the 10.96.0.10 rules applied. I didn't find any rules in my iptables (nft) but did find some in my iptables-legacy (Debian 10). I blamed calico and started my kubernetes cluster from scratch.
-kubeadm reset -f
-rm -rf /etc/cni /etc/kubernetes /var/lib/dockershim /var/lib/etcd /var/lib/kubelet /var/run/kubernetes ~/.kube/*
-iptables -F && iptables -X
-iptables -t raw -F && iptables -t raw -X
-iptables -t mangle -F && iptables -t mangle -X
-iptables -t nat -F && iptables -t nat -X
-
-iptables-legacy -F && iptables-legacy -X
-iptables-legacy -t raw -F && iptables-legacy -t raw -X
-iptables-legacy -t mangle -F && iptables-legacy -t mangle -X
-iptables-legacy -t nat -F && iptables-legacy -t nat -X
-
-systemctl restart docker
-
-
-To delete and restart.
-I started my kube cluster via
-sudo kubeadm init --config CLUSTER.yaml --upload-certs
-Checked iptables for nothing to be in iptables-legacy (my default was iptables nft)
-Pulled calico locally and added:
-            - name: FELIX_IPTABLESBACKEND
-              value: ""NFT""
-
-Also, if you are using a different pod subnet set in your CLUSTER.yaml, update the CALICO_IPV4POOL_CIDR appropriately in your calico file.
-Once you get kubectl working via copying the proper kube config
-kubectl apply -f calico.yaml
-
-Apply the updated file. Double check iptables again. And you should then be able to add your control-plane and worker nodes via the command that the original kubeadm init outputted.
-",CoreDNS
-"I am trying kubernetes and seem to have hit bit of a hurdle. The problem is that from within my pod I can't curl local hostnames such as wrkr1 or wrkr2 (machine hostnames on my network) but can successfully resolve hostnames such as google.com or stackoverflow.com.
-My cluster is a basic setup with one master and 2 worker nodes.
-What works from within the pod:
-
-curl to google.com from pod -- works
-
-curl to another service(kubernetes) from pod -- works
-
-curl to another machine on same LAN via its IP address such as 192.168.x.x -- works
-
-curl to another machine on same LAN via its hostname such as wrkr1 -- does not work
-
-
-What works from the node hosting pod:
-
-curl to google.com --works
-curl to another machine on same LAN via
-its IP address such as 192.168.x.x -- works
-curl to another machine
-on same LAN via its hostname such as wrkr1 -- works.
-
-
-Note: the pod cidr is completely different from the IP range used in
-LAN
-
-the node contains a hosts file with entry corresponding to wrkr1's IP address (although I've checked node is able to resolve hostname without it also but I read somewhere that a pod inherits its nodes DNS resolution so I've kept the entry)
-Kubernetes Version: 1.19.14
-Ubuntu Version: 18.04 LTS
-Need help as to whether this is normal behavior and what can be done if I want pod to be able to resolve hostnames on local LAN as well?
-","1. What happens
-
-Need help as to whether this is normal behavior
-
-This is normal behaviour: there's no DNS server in the network where your virtual machines are hosted, and Kubernetes has its own DNS server inside the cluster. It simply doesn't know what happens on your host, especially in /etc/hosts, because pods don't have access to this file.
-
-I read somewhere that a pod inherits its nodes DNS resolution so I've
-kept the entry
-
-This is where the tricky part happens. There are four available DNS policies, which are applied per pod. We will take a look at the two of them which are usually used:
-
-""Default"": The Pod inherits the name resolution configuration from the node that the pods run on. See related discussion for more details.
-""ClusterFirst"": Any DNS query that does not match the configured cluster domain suffix, such as ""www.kubernetes.io"", is forwarded to the upstream nameserver inherited from the node. Cluster administrators may have extra stub-domain and upstream DNS servers configured
-
-The trickiest ever part is this (from the same link above):
-
-Note: ""Default"" is not the default DNS policy. If dnsPolicy is not
-explicitly specified, then ""ClusterFirst"" is used.
-
-That means that all pods that do not have DNS policy set will be run with ClusterFirst and they won't be able to see /etc/resolv.conf on the host. I tried changing this to Default and indeed, it can resolve everything host can, however internal resolving stops working, so it's not an option.
-For example coredns deployment is run with Default dnsPolicy which allows coredns to resolve hosts.
-How this can be resolved
-1. Add local domain to coreDNS
-This will require to add A records per host. Here's a part from edited coredns configmap:
-This should be within .:53 { block
-file /etc/coredns/local.record local
-
-This part is right after block above ends (SOA information was taken from the example, it doesn't make any difference here):
-local.record: |
-  local.            IN      SOA     sns.dns.icann.org. noc.dns.icann.org. 2015082541 7200 3600 1209600 3600
-  wrkr1.            IN      A      172.10.10.10
-  wrkr2.            IN      A      172.11.11.11
-
-Then the coreDNS deployment should be edited to include this file:
-$ kubectl edit deploy coredns -n kube-system
-      volumes:
-      - configMap:
-          defaultMode: 420
-          items:
-          - key: Corefile
-            path: Corefile
-          - key: local.record # 1st line to add
-            path: local.record # 2nd line to add
-          name: coredns
-
-And restart coreDNS deployment:
-$ kubectl rollout restart deploy coredns -n kube-system
-
-Just in case check if coredns pods are running and ready:
-$ kubectl get pods -A | grep coredns
-kube-system   coredns-6ddbbfd76-mk2wv              1/1     Running            0                4h46m
-kube-system   coredns-6ddbbfd76-ngrmq              1/1     Running            0                4h46m
-
-If everything's done correctly, now newly created pods will be able to resolve hosts by their names. Please find an example in coredns documentation
-2. Set up DNS server in the network
-While Avahi looks similar to a DNS server, it does not act like one. It's not possible to set up request forwarding from coredns to Avahi, while it is possible to set up a proper DNS server in the network and this way have everything resolved.
-3. Deploy avahi to kubernetes cluster
-There's a ready image with avahi here. If it's deployed into the cluster with dnsPolicy set to ClusterFirstWithHostNet and most importantly hostNetwork: true it will be able to use host adapter to discover all available hosts within the network.
-Useful links:
-
-Pods DNS policy
-Custom DNS entries for kubernetes
-
-
-2. I had this issue, and came up with this solution:
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: coredns-custom
-  namespace: kube-system
-data:
-  lan.server: |
-    lan:53 {
-      errors
-      cache 30
-      forward . 192.168.2.1
-    }
-
-192.168.2.1 is my local DNS (in my case my router) and lan is my hostname.
-",CoreDNS
-"I want to set up a cluster in Docker that contains citus + patroni and the corresponding etcd for Postgres
-I have this dockerfile:
-FROM postgres:16
-
-RUN apt-get update && \
-    apt-get install -y python3 python3-pip python3-venv build-essential libpq-dev
-
-RUN python3 -m venv /patroni-venv
-
-RUN /bin/bash -c ""source /patroni-venv/bin/activate && \
-    pip install patroni[etcd] psycopg2-binary behave coverage flake8>=3.0.0 mock pytest-cov pytest setuptools""
-
-
-COPY patroni.yml /etc/patroni.yml
-
-ENTRYPOINT [""/bin/bash"", ""-c"", ""source /patroni-venv/bin/activate && patroni /etc/patroni.yml""]
-
-
-This docker-compose:
-version: '3'
-
-services:
-  etcd:
-    image: quay.io/coreos/etcd:v3.5.0
-
-    container_name: etcd
-    networks:
-      - citus_network
-    ports:
-      - ""2389:2379""
-      - ""2390:2380""
-    command:
-      - /usr/local/bin/etcd
-      - --data-dir=/etcd-data
-      - --name=etcd0
-      - --listen-client-urls=http://0.0.0.0:2389
-      - --advertise-client-urls=http://etcd:2389
-      - --listen-peer-urls=http://0.0.0.0:2390
-      - --initial-advertise-peer-urls=http://etcd:2390
-      - --initial-cluster=etcd0=http://etcd:2390
-      - --enable-v2=true
-  citus:
-    image: citusdata/citus:10.2
-    container_name: citus
-    environment:
-      POSTGRES_PASSWORD: your_password
-    networks:
-      - citus_network
-    depends_on:
-      - etcd
-    ports:
-      - ""5433:5432""  # Cambiar puerto para evitar conflicto
-    volumes:
-      - citus_data:/var/lib/postgresql/data
-  patroni:
-    build:
-      context: .
-      dockerfile: Dockerfile
-    container_name: patroni
-    environment:
-      PATRONI_NAME: citus
-      PATRONI_ETCD_HOSTS: etcd:2389
-      PATRONI_POSTGRESQL_DATA_DIR: /var/lib/postgresql/data
-      PATRONI_POSTGRESQL_PGPASSWORD: your_password
-      PATRONI_POSTGRESQL_LISTEN: 0.0.0.0:5432
-      PATRONI_POSTGRESQL_CONNECT_ADDRESS: patroni:5432
-      PATRONI_SUPERUSER_USERNAME: postgres
-      PATRONI_SUPERUSER_PASSWORD: your_password
-    networks:
-      - citus_network
-    depends_on:
-      - etcd
-      - citus
-    ports:
-      - ""8009:8008""  # Cambiar puerto para evitar conflicto
-    volumes:
-      - patroni_data:/var/lib/postgresql/data
-
-networks:
-  citus_network:
-    driver: bridge
-
-volumes:
-  citus_data:
-  patroni_data:
-
-
-And this is my patroni.yml:
-scope: citus
-namespace: /db/
-name: citus
-
-restapi:
-  listen: 0.0.0.0:8008
-  connect_address: patroni:8008
-
-etcd:
-  host: etcd:2389
-  protocol: http
-  version: ""v3""
-
-bootstrap:
-  dcs:
-    ttl: 30
-    loop_wait: 10
-    retry_timeout: 10
-    maximum_lag_on_failover: 1048576
-    postgresql:
-      use_pg_rewind: true
-      parameters:
-        wal_level: replica
-        hot_standby: ""on""
-        wal_keep_segments: 8
-        max_wal_senders: 5
-        max_replication_slots: 5
-
-  initdb:
-    - encoding: UTF8
-    - data-checksums
-
-  pg_hba:
-    - host replication repl_user 0.0.0.0/0 md5
-    - host all all 0.0.0.0/0 md5
-
-  users:
-    admin:
-      password: admin_password
-      options:
-        - createrole
-        - createdb
-
-postgresql:
-  listen: 0.0.0.0:5432
-  connect_address: patroni:5432
-  data_dir: /var/lib/postgresql/data
-  bin_dir: /usr/lib/postgresql/16/bin
-  authentication:
-    superuser:
-      username: postgres
-      password: your_password
-    replication:
-      username: repl_user
-      password: repl_password
-  parameters:
-    unix_socket_directories: '/var/run/postgresql, /tmp'
-
-After several attempts I have not been able to solve this error:
-2024-05-26 21:04:02,630 INFO: Selected new etcd server http://etcd:2389
-2024-05-26 21:04:02,641 INFO: No PostgreSQL configuration items changed, nothing to reload.
-2024-05-26 21:04:02,648 INFO: Lock owner: None; I am citus
-2024-05-26 21:04:02,656 INFO: trying to bootstrap a new cluster
-The files belonging to this database system will be owned by user ""patroni_user"".
-This user must also own the server process.
-
-The database cluster will be initialized with locale ""en_US.utf8"".
-The default text search configuration will be set to ""english"".
-
-Data page checksums are enabled.
-
-initdb: error: could not change permissions of directory ""/var/lib/postgresql/data"": Operation not permitted
-2024-05-26 21:04:02,686 INFO: removing initialize key after failed attempt to bootstrap the cluster
-2024-05-26 21:04:02,687 INFO: renaming data directory to /var/lib/postgresql/data.failed
-2024-05-26 21:04:02,687 ERROR: Could not rename data directory /var/lib/postgresql/data
-Traceback (most recent call last):
-  File ""/patroni-venv/lib/python3.11/site-packages/patroni/postgresql/__init__.py"", line 1317, in move_data_directory
-    os.rename(self._data_dir, new_name)
-OSError: [Errno 16] Device or resource busy: '/var/lib/postgresql/data' -> '/var/lib/postgresql/data.failed'
-Process Process-1:
-Traceback (most recent call last):
-  File ""/usr/lib/python3.11/multiprocessing/process.py"", line 314, in _bootstrap
-    self.run()
-  File ""/usr/lib/python3.11/multiprocessing/process.py"", line 108, in run
-    self._target(*self._args, **self._kwargs)
-  File ""/patroni-venv/lib/python3.11/site-packages/patroni/__main__.py"", line 232, in patroni_main
-    abstract_main(Patroni, configfile)
-  File ""/patroni-venv/lib/python3.11/site-packages/patroni/daemon.py"", line 174, in abstract_main
-    controller.run()
-  File ""/patroni-venv/lib/python3.11/site-packages/patroni/__main__.py"", line 192, in run
-    super(Patroni, self).run()
-  File ""/patroni-venv/lib/python3.11/site-packages/patroni/daemon.py"", line 143, in run
-    self._run_cycle()
-  File ""/patroni-venv/lib/python3.11/site-packages/patroni/__main__.py"", line 201, in _run_cycle
-    logger.info(self.ha.run_cycle())
-                ^^^^^^^^^^^^^^^^^^^
-  File ""/patroni-venv/lib/python3.11/site-packages/patroni/ha.py"", line 1980, in run_cycle
-    info = self._run_cycle()
-           ^^^^^^^^^^^^^^^^^
-  File ""/patroni-venv/lib/python3.11/site-packages/patroni/ha.py"", line 1797, in _run_cycle
-    return self.post_bootstrap()
-           ^^^^^^^^^^^^^^^^^^^^^
-  File ""/patroni-venv/lib/python3.11/site-packages/patroni/ha.py"", line 1681, in post_bootstrap
-    self.cancel_initialization()
-  File ""/patroni-venv/lib/python3.11/site-packages/patroni/ha.py"", line 1674, in cancel_initialization
-    raise PatroniFatalException('Failed to bootstrap cluster')
-patroni.exceptions.PatroniFatalException: Failed to bootstrap cluster
-
-
-I am thankful for any kind of help
-I need to be able to set up a citus + patroni without using ssl to connect to the database.
-Is there any way to do it? Even if it means starting from 0
-","1. I finally managed to solve it.
-I leave the solution here in case it could help someone.
-My final documents look like this:
-DOCKERFILE
-FROM postgres:16
-
-# Install the required dependencies
-RUN apt-get update && \
-    apt-get install -y python3 python3-pip python3-venv build-essential libpq-dev
-
-# Create a Python virtual environment
-RUN python3 -m venv /patroni-venv
-
-# Activate the virtual environment and install the required packages
-RUN /bin/bash -c ""source /patroni-venv/bin/activate && \
-    pip install patroni[etcd] psycopg2-binary behave coverage flake8>=3.0.0 mock pytest-cov pytest setuptools""
-
-# Copy the Patroni configuration file
-COPY patroni.yml /etc/patroni.yml
-
-USER postgres
-
-# Set the default start command to use the virtual environment
-ENTRYPOINT [""/bin/bash"", ""-c"", ""source /patroni-venv/bin/activate && patroni /etc/patroni.yml""]
-
-docker-compose.yml
-version: '3'
-
-services:
-  etcd:
-    image: quay.io/coreos/etcd:v3.5.0
-    container_name: etcd
-    networks:
-      - citus_network
-    ports:
-      - ""2389:2379""
-      - ""2390:2380""
-    command:
-      - /usr/local/bin/etcd
-      - --data-dir=/etcd-data
-      - --name=etcd0
-      - --listen-client-urls=http://0.0.0.0:2389
-      - --advertise-client-urls=http://etcd:2389
-      - --listen-peer-urls=http://0.0.0.0:2390
-      - --initial-advertise-peer-urls=http://etcd:2390
-      - --initial-cluster=etcd0=http://etcd:2390
-      - --enable-v2=true
-  citus:
-    image: citusdata/citus:10.2
-    container_name: citus
-    environment:
-      POSTGRES_PASSWORD: your_password
-    networks:
-      - citus_network
-    depends_on:
-      - etcd
-    ports:
-      - ""5433:5432""  # Cambiar puerto para evitar conflicto
-    volumes:
-      - citus_data:/var/lib/postgresql/data
-  patroni:
-    build:
-      context: .
-      dockerfile: Dockerfile
-    container_name: patroni
-    environment:
-      PATRONI_NAME: citus
-      PATRONI_SCOPE: postgres
-      PATRONI_ETCD_HOSTS: etcd:2389
-      PATRONI_POSTGRESQL_DATA_DIR: /var/lib/postgresql/data
-      PATRONI_POSTGRESQL_PGPASSWORD: your_password
-      PATRONI_POSTGRESQL_LISTEN: 0.0.0.0:5432
-      PATRONI_POSTGRESQL_CONNECT_ADDRESS: patroni:5432
-      PATRONI_SUPERUSER_USERNAME: postgres
-      PATRONI_SUPERUSER_PASSWORD: your_password
-    networks:
-      - citus_network
-    depends_on:
-      - etcd
-      - citus
-    ports:
-      - ""8009:8008""  # Cambiar puerto para evitar conflicto
-    volumes:
-      - patroni_data:/var/lib/postgresql/data
-
-  patroni2:
-    build:
-      context: .
-      dockerfile: Dockerfile
-    container_name: patroni2
-    environment:
-      PATRONI_NAME: patroni2
-      PATRONI_SCOPE: postgres
-      PATRONI_ETCD_HOSTS: etcd:2389
-      PATRONI_POSTGRESQL_DATA_DIR: /var/lib/postgresql/data
-      PATRONI_POSTGRESQL_PGPASSWORD: your_password
-      PATRONI_POSTGRESQL_LISTEN: 0.0.0.0:5432
-      PATRONI_POSTGRESQL_CONNECT_ADDRESS: patroni:5432
-      PATRONI_SUPERUSER_USERNAME: postgres
-      PATRONI_SUPERUSER_PASSWORD: your_password
-    networks:
-      - citus_network
-    depends_on:
-      - etcd
-      - citus
-    ports:
-      - ""8010:8008""  # Cambiar puerto para evitar conflicto
-    volumes:
-      - patroni_data2:/var/lib/postgresql/data
-
-networks:
-  citus_network:
-    driver: bridge
-
-volumes:
-  citus_data:
-  patroni_data:
-  patroni_data2:
-
-patroni.yml
-scope: citus
-namespace: /db/
-name: citus
-
-restapi:
-  listen: 0.0.0.0:8008
-  connect_address: patroni:8008
-
-etcd:
-  host: etcd:2389
-  protocol: http
-  version: ""v3""
-
-bootstrap:
-  dcs:
-    ttl: 30
-    loop_wait: 10
-    retry_timeout: 10
-    maximum_lag_on_failover: 1048576
-    postgresql:
-      use_pg_rewind: true
-      parameters:
-        wal_level: replica
-        hot_standby: ""on""
-        wal_keep_segments: 8
-        max_wal_senders: 5
-        max_replication_slots: 5
-
-  initdb:
-    - encoding: UTF8
-    - data-checksums
-
-  pg_hba:
-    - host replication repl_user 0.0.0.0/0 md5
-    - host all all 0.0.0.0/0 md5
-
-  users:
-    admin:
-      password: admin_password
-      options:
-        - createrole
-        - createdb
-
-postgresql:
-  listen: 0.0.0.0:5432
-  connect_address: patroni:5432
-  data_dir: /var/lib/postgresql/data
-  bin_dir: /usr/lib/postgresql/16/bin
-  authentication:
-    superuser:
-      username: postgres
-      password: your_password
-    replication:
-      username: repl_user
-      password: repl_password
-  parameters:
-    unix_socket_directories: '/var/run/postgresql, /tmp'
-
-Additionally, before doing the build and up, you have to create the volumes manually and change their permissions like this:
-sudo mkdir -p /var/lib/docker/volumes/patroni_patroni_data/_data
-sudo mkdir -p /var/lib/docker/volumes/patroni_patroni_data2/_data
-sudo mkdir -p /var/lib/docker/volumes/citus_citus_data/_data
-sudo chown -R 1000:1000 /var/lib/docker/volumes/patroni_patroni_data/_data
-sudo chown -R 1000:1000 /var/lib/docker/volumes/patroni_patroni_data2/_data
-sudo chown -R 1000:1000 /var/lib/docker/volumes/citus_citus_data/_data
-sudo chmod -R 700 /var/lib/docker/volumes/patroni_patroni_data/_data
-sudo chmod -R 700 /var/lib/docker/volumes/patroni_patroni_data2/_data
-sudo chmod -R 700 /var/lib/docker/volumes/citus_citus_data/_data
-Good luck!
-
-",etcd
-"I'm trying to write a xUnit test for HomeController, and some important configuration information is put into Nacos.
-The problem now is that I can't get the configuration information in nacos.
-Here is my test class for HomeController:
-using Xunit;
-
-namespace ApiTestProject;
-
-    public class HomeControllerTest
-    {
-
-        // mock register services.  in this method, I can not access the nacos config strings
-        private void Init()
-        {
-            var builder = WebApplication.CreateBuilder();
-
-            // add appsettings.json and nacos
-            builder.Host.ConfigureAppConfiguration(cbuilder =>
-            {
-                cbuilder.AddJsonFile(""appsettings.Test.json"", optional: false, reloadOnChange: true);
-            });
-            var nacosconfig = builder.Configuration.GetSection(""NacosConfig"");
-            builder.Host.ConfigureAppConfiguration((context, builder) =>
-            {
-                // add nacos
-                builder.AddNacosV2Configuration(nacosconfig);
-            });
-
-
-
-            // Now I should have been able to get the config info in builder.Configuration
-            // try to get the ""DbConn"" in nacos, but connstr is null
-            string connstr = builder.Configuration[""DbConn""];
-
-            // other register logic... 
-        }
-    }
-
-And this is the appsettings.Test.json file:
-{
-  ""NacosConfig"": {
-    ""Listeners"": [
-      {
-        ""Optional"": false,
-        ""DataId"": ""global.dbconn"",
-        ""Group"": ""DEFAULT_GROUP""
-      }
-    ],
-    ""Namespace"": ""my-dev"",
-    ""ServerAddresses"": [
-      ""http://mynacos.url.address/""
-    ],
-    ""UserName"": ""dotnetcore"",
-    ""Password"": ""123456"",
-  }
-}
-
-Update: I've checked in detail to make sure there aren't any spelling mistakes or case-sensitivity issues. The code in the Init() function works well in the Program.cs file of the API project under test, but in this xUnit project it's not working at all.
-","1. Are you making sure that the appsettings.Test.json file is included in the output directory when you build?
-",Nacos
-"Occasionally a 3rd party library contains APIs that return non-public classes which you cannot reference directly. One such example is org.apache.avro.generic.GenericRecord.get() which can sometimes return a java.nio.HeapByteBuffer object. If I wanted to switch over that class like so I will get a compile error:
-Object recordValue = genericRecord.get(someField);
-switch (recordValue) {
-   case String avroString -> {
-      // do logic
-   }
-   case HeapByteBuffer avroBuffer -> {
-      // do logic
-   }
-   default -> log.warn(""Unknown type"");
-}
-
-If instead I try to use an extending class, the code will compile but will log the warning message ""Unknown type"":
-Object recordValue = genericRecord.get(someField);
-switch (recordValue) {
-   case String avroString -> {
-      // do logic
-   }
-   case ByteBuffer avroBuffer -> {
-      // do logic
-   }
-   default -> log.warn(""Unknown type"");
-}
-
-How can I use an enhanced switch for a private class?
-","1. You're operating under some serious misunderstandings. Here is a trivial example to show that your mental model of what is happening here is simply not correct:
-class Test {
-    public static void main(String[] args) {
-        Object i = Integer.valueOf(42);
-        switch (i) {
-            case Number n -> System.out.println(""N: "" + n.intValue());
-            default -> System.out.println(""NaN"");
-        }
-    }
-}
-
-This prints N: 42.
-In other words, your theory that you must name the precise, exact type that the thing is is simply not correct - you can stick a supertype in a case statement and it'll trigger just fine. Your 'workaround' of writing case Object i when i instanceof ByteBuffer is a silly way to write case ByteBuffer b.
-If you are observing your default case triggering, then either [A] you are not compiling/running the code you think you are, or [B] HeapByteBuffer is not a ByteBuffer, and your case Object i when i instanceof ByteBuffer wouldn't work either, because it isn't a bytebuffer.
-",Avro
-"I am receiving from a remote server Kafka Avro messages in Python (using the consumer of Confluent Kafka Python library), that represent clickstream data with json dictionaries with fields like user agent, location, url, etc. Here is what a message looks like:
-b'\x01\x00\x00\xde\x9e\xa8\xd5\x8fW\xec\x9a\xa8\xd5\x8fW\x1axxx.xxx.xxx.xxx\x02:https://website.in/rooms/\x02Hhttps://website.in/wellness-spa/\x02\xaa\x14\x02\x9c\n\x02\xaa\x14\x02\xd0\x0b\x02V0:j3lcu1if:rTftGozmxSPo96dz1kGH2hvd0CREXmf2\x02V0:j3lj1xt7:YD4daqNRv_Vsea4wuFErpDaWeHu4tW7e\x02\x08null\x02\nnull0\x10pageview\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x10Thailand\x02\xa6\x80\xc4\x01\x02\x0eBangkok\x02\x8c\xba\xc4\x01\x020*\xa9\x13\xd0\x84+@\x02\xec\xc09#J\x1fY@\x02\x8a\x02Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/58.0.3029.96 Chrome/58.0.3029.96 Safari/537.36\x02\x10Chromium\x02\x10Chromium\x028Google Inc. and contributors\x02\x0eBrowser\x02\x1858.0.3029.96\x02""Personal computer\x02\nLinux\x02\x00\x02\x1cCanonical Ltd.'
-
-How to decode it? I tried bson decode but the string was not recognized as UTF-8 as it's a specific Avro encoding I guess. I found https://github.com/verisign/python-confluent-schemaregistry but it only supports Python 2.7. Ideally I would like to work with Python 3.5+ and MongoDB to process the data and store it as it's my current infrastructure.
-","1. If you use Confluent Schema Registry and want to deserialize avro messages, just add message_bytes.seek(5) to the decode function, since Confluent adds 5 extra bytes before the typical avro-formatted data. 
-def decode(msg_value):
-    message_bytes = io.BytesIO(msg_value)
-    message_bytes.seek(5)
-    decoder = BinaryDecoder(message_bytes)
-    event_dict = reader.read(decoder)
-    return event_dict
-
-
-2. I thought the Avro library was just for reading Avro files, but it actually solved the problem of decoding Kafka messages, as follows: I first import the libraries and give the schema file as a parameter, and then create a function to decode the message into a dictionary that I can use in the consumer loop.
-import io
-
-from confluent_kafka import Consumer, KafkaError
-from avro.io import DatumReader, BinaryDecoder
-import avro.schema
-
-schema = avro.schema.Parse(open(""data_sources/EventRecord.avsc"").read())
-reader = DatumReader(schema)
-
-def decode(msg_value):
-    message_bytes = io.BytesIO(msg_value)
-    decoder = BinaryDecoder(message_bytes)
-    event_dict = reader.read(decoder)
-    return event_dict
-
-c = Consumer({'bootstrap.servers': '...', 'group.id': '...'})  # fill in your consumer config
-c.subscribe([topic])  # subscribe() expects a list of topics
-running = True
-while running:
-    msg = c.poll()
-    if not msg.error():
-        msg_value = msg.value()
-        event_dict = decode(msg_value)
-        print(event_dict)
-    elif msg.error().code() != KafkaError._PARTITION_EOF:
-        print(msg.error())
-        running = False
-
-
-3. If you have access to a Confluent schema registry server, you can also use Confluent's own AvroDeserializer to avoid messing with their magic 5 bytes:
-from confluent_kafka.schema_registry import SchemaRegistryClient
-from confluent_kafka.schema_registry.avro import AvroDeserializer
-
-def process_record_confluent(record: bytes, src: SchemaRegistryClient, schema: str):
-    deserializer = AvroDeserializer(schema_str=schema, schema_registry_client=src)
-    return deserializer(record, None) # returns dict
-
-",Avro
-"I have implemented a server-streaming server based on grpc. The grpc::ServerWriteReactor class is used as stream writers. When a remote procedure is called by a client, a reactor is created. The data for is retrieved from a thread-safe queue: in the OnWriteDone method, the next Response is retrieved and passed to the StartWrite method.
-The queue is blocking. When there is no data in it, the reading thread (in my case it is the grpc library thread) blocks until data arrives in the queue.
-Are there any restrictions on how long a grpc thread can be blocked? Is there some kind of liveness probe inside grpc that, seeing that the thread is blocked, will take some action?
-","1. There are no restrictions on the thread usage. gRPC has several threads in the pool and will grow the pool if necessary.
-",gRPC
-"We are doing some POC where client is GO GRPC and server is C++ GRPC (Sync based server)
-
-GO GRPC client connected to C++ GRPC server on main thread.
-GO client calls one of the RPC method on C++ GRPC server which on main thread.
-
-Wanted to check if C++ GRPC server supports any in built multithreading so that any RPC call on main thread will be handled on worker threads without loading much on main thread so it will handle more requests.
-We tried one end-to-end call from client to server which is currently handled on main thread of c++ GRPC server, wanted to check if any built-in support in c++ GRPC server that will distribute load to other threads (worker) without much impacting load on main thread.
-","1. gRPC C++ makes an extensive use of multithreading and the requests will be handled by a thread from a thread pool.
-To further leverage the support, we recommend to use the callback API, e.g. see this example which would allow asynchronously finishing the request handling.
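-For illustration, here is a minimal sketch of a unary handler written against the callback API. It assumes stubs generated from the usual helloworld example proto, so the Greeter/HelloRequest/HelloReply names below come from that generated code, not from your project:
-#include <grpcpp/grpcpp.h>
-#include ""helloworld.grpc.pb.h""  // assumed: generated from the example helloworld.proto
-
-// Each RPC is driven by a reactor instead of tying up a dedicated thread:
-// the handler fills the reply, finishes the reactor and returns immediately.
-class GreeterService final : public helloworld::Greeter::CallbackService {
-    grpc::ServerUnaryReactor* SayHello(grpc::CallbackServerContext* context,
-                                       const helloworld::HelloRequest* request,
-                                       helloworld::HelloReply* reply) override {
-        reply->set_message(""Hello "" + request->name());
-        grpc::ServerUnaryReactor* reactor = context->DefaultReactor();
-        reactor->Finish(grpc::Status::OK);
-        return reactor;
-    }
-};
-The service is registered on a grpc::ServerBuilder just like a sync service; the library's worker threads then drive the reactors, so no single main thread has to handle all requests.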
-",gRPC
-"I am rather new to spring security, I have used springframework, boot and others for a while. I am working on a project for myself. I have selected ory kratos as the IAM solution. I realize that ory had hydra for oauth2 but spring auth server might be easier for me to work with and integrate into my spring project.
-I see that you can configure authorization server to use jdbc or an in memory provider for user credentials.
-My question is: Is it possible to integrate spring auth server with kratos?
-","1. Ory Kratos is an OpenID Provider. About any other OpenID Provider can federate identities with it (either way).
-Spring-authorization-server being an OpenID Provider, yes, you can use Ory Kratos as identity provider, but no, it's probably not going to make your life easier. You probably need more OAuth2 background to understand why: an OAuth2 Authorization Server is not part of your app, it is a service (most frequently a ""standalone"" one) that your apps talk with (either as client or resource server). Spring-authorization-server is no exception.
-I suggest that you have a look at these tutorials I wrote to get minimal OAuth2 background and to get started with OAuth2 configuration in Spring apps.
-I suggest also that you compare pricing and features with other OpenID Providers (price per user can grow quickly with cloud providers).
-Last, I advise that you use only libs compatible with any OpenID Provider (spring-boot-starter-oauth2-client and spring-boot-starter-oauth2-resource-server, optionally with this one I maintain to make usage of spring-boot-starter-oauth2-X easier)
-",kratos
-"I am using ORY kratos v1.0.0 self-hosted. I am trying to get oidc connect to work with microsoft azure (Sign in with mircosoft). I completed the app registration on Azure B2C, have the correct redirect URL, a client secret and have a green check for PKCE. I then set this up in my kratos.yml like this at providers.config
-      - id: microsoft
-        microsoft_tenant: common
-        provider: microsoft
-        client_id: xxxxxxxxxx
-        client_secret: xxxxxxxxxxxxxx
-        mapper_url: file:///etc/config/kratos/oidc.jsonnet
-        scope:
-          - email
-
-The login always fails and I get this CORS issue returned by kratos:
-""Unable to complete OpenID Connect flow because the OpenID Provider returned error ""invalid_request"": Proof Key for Code Exchange is required for cross-origin authorization code redemption.""
-(My other OIDC configs with google and github work.)
-My kratos server lives on auth.mydomain.com my login screen lives on accounts.mydomain.com.
-Any hints what could be the issue here?
-","1. The error might occur if you configured the redirect URL under Single-page application platform like this:
-
-
-Note that the redirect_uri of the External SSO / IdP should be registered as ""Web"" instead of SPA.
-
-To resolve the error, remove the SPA redirect URI and add it under the Web platform of your app registration:
-
-Reference:
-How to fix AADSTS9002325: Proof Key for Code Exchange is required for cross-origin authorization code redemption - Microsoft Q&A by Camille
-",kratos
-"I've a Raspberry Pi with a setup including PiHole and caddy.
-PiHole serves as local DNS server for the network, and caddy is used as reverse proxy on a generic corresponding service.
-Example:
-
-(DNS record set) 192.168.2.71 nextcloud.foo.duckdns.org
-(Caddyfile) nextcloud.foo.duckdns.org { reverse_proxy localhost:11000 }
-
-$ curl --head --verbose https://nextcloud.foo.duckdns.org
-*   Trying [2a02:a458:814f:0:a438:1aab:a24:12f7]:443...
-*   Trying 192.168.2.71:443...
-* Connected to nextcloud.foo.duckdns.org (192.168.2.71) port 443 (#0)
-* ALPN: offers h2,http/1.1
-* TLSv1.3 (OUT), TLS handshake, Client hello (1):
-*  CAfile: /etc/ssl/certs/ca-certificates.crt
-*  CApath: /etc/ssl/certs
-* TLSv1.3 (IN), TLS handshake, Server hello (2):
-* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
-* TLSv1.3 (IN), TLS handshake, Certificate (11):
-* TLSv1.3 (IN), TLS handshake, CERT verify (15):
-* TLSv1.3 (IN), TLS handshake, Finished (20):
-* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
-* TLSv1.3 (OUT), TLS handshake, Finished (20):
-* SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256
-* ALPN: server accepted h2
-* Server certificate:
-*  subject: CN=nextcloud.foo.duckdns.org
-*  start date: May  1 05:58:44 2024 GMT
-*  expire date: Jul 30 05:58:43 2024 GMT
-*  subjectAltName: host ""nextcloud.foo.duckdns.org"" matched cert's ""nextcloud.foo.duckdns.org""
-*  issuer: C=US; O=Let's Encrypt; CN=R3
-*  SSL certificate verify ok.
-
-In the Caddyfile I also have a global option/block, { acme_dns duckdns <my-token> }, and I've also generated a valid certificate with Let's Encrypt using this guide: https://github.com/infinityofspace/certbot_dns_duckdns?tab=readme-ov-file#usage
-This certificate includes both ""foo.duckdns.org"" and ""*.foo.duckdns.org"" as domains.
-Now, however, I set up another service, home-assistant, and I added an entry in the Caddyfile, that is homeassistant.foo.duckdns.org { reverse_proxy localhost:8123 }.
-The issue is that the TLS handshake fails, and thus the HTTPS request.
-curl --head --verbose https://homeassistant.foo.duckdns.org
-*   Trying 192.168.2.71:443...
-* Connected to homeassistant.foo.duckdns.org (192.168.2.71) port 443 (#0)
-* ALPN: offers h2,http/1.1
-* TLSv1.3 (OUT), TLS handshake, Client hello (1):
-*  CAfile: /etc/ssl/certs/ca-certificates.crt
-*  CApath: /etc/ssl/certs
-* TLSv1.3 (IN), TLS alert, internal error (592):
-* OpenSSL/3.0.11: error:0A000438:SSL routines::tlsv1 alert internal error
-* Closing connection 0
-curl: (35) OpenSSL/3.0.11: error:0A000438:SSL routines::tlsv1 alert internal error
-
-I'm still able to access the service via the localhost interface, though.
-$ curl --verbose 192.168.2.71:8123
-*   Trying 192.168.2.71:8123...
-* Connected to 192.168.2.71 (192.168.2.71) port 8123 (#0)
-> GET / HTTP/1.1
-> Host: 192.168.2.71:8123
-> User-Agent: curl/7.88.1
-> Accept: */*
->
-< HTTP/1.1 200 OK
-< Content-Type: text/html; charset=utf-8
-< Referrer-Policy: no-referrer
-< X-Content-Type-Options: nosniff
-< Server:
-< X-Frame-Options: SAMEORIGIN
-< Content-Length: 4116
-< Date: Sat, 04 May 2024 15:19:24 GMT
-<
-< HTML response
-
-Why this difference? I also tried running sudo certbot renew but the tool says Certificate not yet due for renewal
-The caddy logs related to this request are:
-May 04 17:07:55 raspberrypi caddy[2499813]: {""level"":""debug"",""ts"":1714835275.669332,""logger"":""http.stdlib"",""msg"":""http: TLS handshake error from 172.24.0.8:52236: EOF""}
-May 04 17:07:56 raspberrypi caddy[2499813]: {""level"":""error"",""ts"":1714835276.0812566,""logger"":""tls.issuance.zerossl.acme_client"",""msg"":""cleaning up solver"",""identifier"":""homeassistant.foo.duckdns.org"",""challenge_type"":""dns-01"",""error"":""no memory of presenting a DNS record for \""_acme-challenge.homeassistant.foo.duckdns.org\"" (usually OK if presenting also failed)""}
-May 04 17:07:56 raspberrypi caddy[2499813]: {""level"":""debug"",""ts"":1714835276.4435985,""logger"":""tls.issuance.zerossl.acme_client"",""msg"":""http request"",""method"":""POST"",""url"":""https://acme.zerossl.com/v2/DV90/authz/PyhpCsoS_EhSSiEGi8UeIQ"",""headers"":{""Content-Type"":[""application/jose+json""],""User-Agent"":[""Caddy/2.7.6 CertMagic acmez (linux; arm64)""]},""response_headers"":{""Access-Control-Allow-Origin"":[""*""],""Cache-Control"":[""max-age=0, no-cache, no-store""],""Content-Length"":[""145""],""Content-Type"":[""application/json""],""Date"":[""Sat, 04 May 2024 15:07:56 GMT""],""Link"":[""<https://acme.zerossl.com/v2/DV90>;rel=\""index\""""],""Replay-Nonce"":[""igkSNJLHTzdfI2aSomwLOusaK7Fz3HQEHXL8NYrFcWA""],""Server"":[""nginx""],""Strict-Transport-Security"":[""max-age=15724800; includeSubDomains""]},""status_code"":200}
-May 04 17:07:56 raspberrypi caddy[2499813]: {""level"":""error"",""ts"":1714835276.4437563,""logger"":""tls.obtain"",""msg"":""could not get certificate from issuer"",""identifier"":""homeassistant.foo.duckdns.org"",""issuer"":""acme.zerossl.com-v2-DV90"",""error"":""[homeassistant.foo.duckdns.org] solving challenges: presenting for challenge: could not determine zone for domain \""_acme-challenge.homeassistant.foo.duckdns.org\"": unexpected response code 'REFUSED' for _acme-challenge.homeassistant.foo.duckdns.org. (order=https://acme.zerossl.com/v2/DV90/order/uQxNVkgga-pcqKOgZWOJiA) (ca=https://acme.zerossl.com/v2/DV90)""}
-May 04 17:07:56 raspberrypi caddy[2499813]: {""level"":""debug"",""ts"":1714835276.4437943,""logger"":""events"",""msg"":""event"",""name"":""cert_failed"",""id"":""9ff6162d-a94e-497f-aff9-5d068ddc2987"",""origin"":""tls"",""data"":{""error"":{},""identifier"":""homeassistant.foo.duckdns.org"",""issuers"":[""acme-v02.api.letsencrypt.org-directory"",""acme.zerossl.com-v2-DV90""],""renewal"":false}}
-May 04 17:07:56 raspberrypi caddy[2499813]: {""level"":""error"",""ts"":1714835276.4438388,""logger"":""tls.obtain"",""msg"":""will retry"",""error"":""[homeassistant.foo.duckdns.org] Obtain: [homeassistant.foo.duckdns.org] solving challenges: presenting for challenge: could not determine zone for domain \""_acme-challenge.homeassistant.foo.duckdns.org\"": unexpected response code 'REFUSED' for _acme-challenge.homeassistant.foo.duckdns.org. (order=https://acme.zerossl.com/v2/DV90/order/uQxNVkgga-pcqKOgZWOJiA) (ca=https://acme.zerossl.com/v2/DV90)"",""attempt"":13,""retrying_in"":1800,""elapsed"":9168.216986793,""max_duration"":2592000}
-
-","1. This is unrelated to certificate validation. It is not about the client complaining about the server certificate, but the server complaining about something coming from the client. Because of this the server sends an TLS alert back to the client:
-* TLSv1.3 (IN), TLS alert, internal error (592):
-* OpenSSL/3.0.11: error:0A000438:SSL routines::tlsv1 alert internal error
-
-
-I'm still able to access the service via the localhost interface, though.
-$ curl --verbose 192.168.2.71:8123
-
-
-This is not doing a HTTPS request, but a plain HTTP request. This means no TLS is done and therefore also no TLS related errors will happen.
-
-The caddy logs related to this request are:
-... ""tls.obtain"",""msg"":""could not get certificate from issuer"",""identifier"":""homeassistant.foo.duckdns.org"", ...
-
-
-This suggests that the internal server was set up to automatically get a certificate for this subdomain (i.e. not set up to use an existing wildcard) but was unable to retrieve the certificate.
-This also explains the handshake error: the server fails with the TLS handshake since it has no usable certificate for the requested domain.
-
-2. It looks like the issue lay in using Pi-hole as the DNS server for the entire network (it was set as the only DNS server in the router settings).
-This thread in the caddy forum helped me with an unexpected option in the tls block: resolvers.
-Setting this to Google DNS, i.e. 8.8.8.8 8.8.4.4, was enough to let Caddy solve the DNS challenge.
-",Caddy
-"
-I have several images with similar situations.I want to find the contours of the white regions within the gray blocks (they might be continuous or split into 2 parts).
-The main difficulty I encounter is that if I directly use cv2.findContours with RETR_TREE to find both inner and outer contours, I often find that the inner shapes may not be detected as contours. This is because the inner region are sometimes blurry. I am currently trying to solve this issue by using de-noise methods.I have tried another method, which involves first finding the outer contours and then the inner ones. However, the problem is how can I avoid the black background from interfering?
-","1. The findContours() function takes as input a binary image, meaning the pixels in the regions of your interest should have a value of 1 and everything else should be zero. Find a clear threshold between the color of your white regions and the color of your grey regions and use thresholding to convert your input image to a binary image where the white regions becomes 1s and everything else becomes 0s.
-",Contour
-"I have GitLab configuration with HAProxy as a reverse proxy server. With current settings haproxy redirects https requests to gitlab backend and it works fine. The problem is how to redirect ssh requests? Users can use https for commits but can`t use ssh in this configuration.
-How to make working https and ssh together?
-","1. As ssh is a complete different Protocol then HTTPS isn't there such a mechanism like ""redirect"" in ssh.
-As you mentioned that you use HTTPS & SSH on the same HAProxy instance could this blog article help you to create a ssh reverse proxy with HAProxy.
-https://www.haproxy.com/blog/route-ssh-connections-with-haproxy
-The steps are:
-
-Create a TCP Listener (frontend/listen)
-Use the ProxyCommand in SSH to connect to the reverse proxy.
-
-
-2. I've solved my problem by installing sslh on the server and adding a frontend for TCP connections in HAProxy.
-",HAProxy
-"Exposing Service from a BareMetal(Kubeadm) Build Kubernetes Cluster to the outside world. I am trying to access my Nginx as a service outside of the cluster to get NGINX output in the web browser.
-For that, I have created a deployment and service for NGINX as shown below, 
-As per my search, found that we have below to expose to outside world
-
-MetalLb
-Ingress NGINX
-Some HELM resources
-
-I would like to know about all three of these (or any other approaches), as it will help me learn new things.
-GOAL
-
-Exposing a service from a bare-metal (kubeadm-built) Kubernetes cluster to the outside world.
-How can I make my service have its own public IP, accessible from outside the cluster?
-
-","1. You need to set up MetalLB to get an external IP address for the LoadBalancer type services. It will give a local network IP address to the service.
-Then you can do port mapping (configuration in the router) of incoming traffic of port 80 and port 443 to your external service IP address.
-I have done a similar setup you can check it here in detail:
-
-https://developerdiary.me/lets-build-low-budget-aws-at-home/
-
-https://developerdiary.me/exposing-web-apps-running-in-our-raspberry-pi-cluster/
-
-
-
-2. You need to deploy an ingress controller in your cluster so that it gives you an entrypoint where your applications can be accessed. Traditionally, in a cloud native environment it would automatically provision a LoadBalancer for you that will read the rules you define inside your Ingress object and route your request to the appropriate service.
-One of the most commonly used ingress controllers is the Nginx Ingress Controller. There are multiple ways to deploy it (manifests, helm, operators). In the case of bare-metal clusters, there are multiple considerations which you can read here.
-MetalLB is still in beta stage so it's your choice whether you want to use it. If you don't have a hard requirement to expose the ingress controller as a LoadBalancer, you can expose it as a NodePort Service that will be accessible across all your nodes in the cluster. You can then map that NodePort Service in your DNS so that the ingress rules are evaluated.
-",MetalLB
-"I have a service that is using a load-balancer in order to expose externally a certain IP. I am using metallb because my cluster is bare metal.
-This is the configuration of the service: 
-Inside the cluster, the running application performs a bind on a ZMQ socket (TCP type) like:
-m_zmqSock->bind(endpoint);
-
-where endpoint = tcp://127.0.0.1:1234 and
-m_zmqSock = std::make_unique<zmq::socket_t>(*m_zmqContext,zmq::socket_type::pair);
-m_zmqSock->setsockopt(ZMQ_RCVTIMEO,1);
-
-Then from an application in my local computer (with access to the cluster) I am trying to connect and send data like:
-zmqSock->connect(zmqServer);
-
-where zmqServer = tcp://192.168.49.241:1234 and
-zmq::context_t ctx;
-auto zmqSock = std::make_unique<zmq::socket_t>(ctx,zmq::socket_type::pair);
-
-Any idea how I could make the ZMQ socket connect from my host, send data to the application, and also receive a response?
-","1. 
-Q: ""Any idea on how could I make the zmq socket connect from my host to send data to the application and receive response also?""
-
-Let's sketch a work-plan:
-
-let's prove the ZeroMQ can be served with an end-to-end visibility working:
-
-
-for doing this, use PUSH-PULL pattern, being fed from the cluster-side by a aPushSIDE->send(...) with regularly spaced timestamped messages, using also a resources saving setup there, using aPushSIDE->setsockopt( ZMQ_COMPLETE,... ) and aPushSIDE->setsockopt( ZMQ_CONFLATE,... )
-
-
-once you can confirm your localhost's PULL-end recv()-s regular updates, feel free to also add an up-stream link from localhost towards the cluster-hosted code, again using a PUSH-PULL pattern in the opposite direction.
-
-Why a pair of PUSH-PULL-s here?
-First, it helps isolate the root-cause of the problem. Next, it allows you to separate concerns and control each of the flows independently of any other ( details on control loops with many interconnects, with different flows, different priority levels and different error handling procedures are so common to all have exclusively only the non-blocking forms of the recv()-methods & doing multi-level poll()-methods' soft-control of the maximum permitted time spent ( wasted ) on testing a new message arrival go beyond of the scope of this Q/A text - feel free to seek further in this formal event-handling framing and about using low-level socket-monitor diagnostics ).
-Last, but not least, the PAIR-PAIR archetype used to be reported in ZeroMQ native API documentation as ""experimental"" for the most of my ZeroMQ-related life ( since v2.1, yeah, so long ). Accepting that fact, I never used a PAIR archetype on any other Transport Class but for a pure-in-RAM, network-protocol stack-less inproc: ""connections"" ( that are not actually any connections, but a Zero-Copy, almost Zero-Latency smart pure pointer-to-memory-block passing trick among some of a same-process co-operating threads ).
-",MetalLB
-"I use MetalLB and Nginx-ingress controller to provide internet access to my apps.
-I see that in most configurations, the service is set to ClusterIP, as the ingress will send traffic there.
-My question is: does this end up with double load balancing, that is, one from MetalLB to my ingress, and another from my ingress to the pods via ClusterIP?
-If so, is this how it is supposed to be, or is there a better way?
-","1. Metallb doesn't receive and forward any traffic, so
-
-from MetalLB to my ingress
-
-doesn't really make sense. Metallb just configures kubernetes services with an external ip and tells your surrounding infrastructure where to find it. Still with your setup there will be double load-balancing:
-Traffic reaches your cluster and is load-balanced between your nginx pods. Nginx handles the request and forwards it to the application, which will result in a second load-balancing.
-But this makes total sense, because if you're using an ingress-controller, you don't want all incoming traffic to go through the same pod.
-Using an ingress-controller with metallb can be done and can improve stability while performing updates on you application, but it's not required.
-Metallb is a solution to implement kubernetes services of type LoadBalancer when there is no cloud provider to do that for you.
-So if you don't need layer 7 load-balancing mechanism you can instead of using a service of type ClusterIP with an ingress-controller just use a service of type LoadBalancer. Metallb will give that service an external ip from your pool and announce it to it's peers.
-In that case, when traffic reaches the cluster it will only be load-balanced once.
-",MetalLB
-"I am trying to host my AngularJs app and my dotnet core api on nginx but I'm unable to access my api, following is my default.conf in /etc/nginx/conf.d:
-server {
-
-listen 80 default_server;
-listen [::]:80 default_server;
-
-root /var/www/dashboard;
-
-index index.html;
-
-server_name example.server.com;
-
-location / {
-     try_files $uri /index.html;
-}
-
-location /api {
- proxy_pass http://localhost:5000;
- proxy_http_version 1.1;
- proxy_set_header Upgrade $http_upgrade;
- proxy_set_header Connection keep-alive;
- proxy_set_header Host $host;
- proxy_cache_bypass $http_upgrade;
-}
-
-#error_page  404              /404.html;
-
-# redirect server error pages to the static page /50x.html
-#
-error_page   500 502 503 504  /50x.html;
-location = /50x.html {
-    root   /usr/share/nginx/html;
-}
-
-}
-
-","1. For the above issue, this could be the best solution.
-Nginx Configuration:
-location /api/ {
-    proxy_pass http://127.0.0.1:5000/;
-    proxy_http_version 1.1;
-    proxy_set_header Upgrade $http_upgrade;
-    proxy_set_header Connection keep-alive;
-    proxy_set_header Host $host;
-    proxy_cache_bypass $http_upgrade;
-    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-    proxy_set_header X-Forwarded-Proto $scheme;
-}
-
-# Install nginx-extras to hide server name
-sudo apt install nginx-extras
-
-# Hide server name
-more_set_headers 'Server: abisec';
-
-Service Configuration (api.service):
-[Unit]
-Description=Service Description
-
-[Service]
-WorkingDirectory=/var/www/yourserviceapi/
-ExecStart=/usr/bin/dotnet /var/www/yourserviceapi/yourserviceapi.dll --urls=http://0.0.0.0:5000
-Restart=always
-RestartSec=10
-KillSignal=SIGINT
-SyslogIdentifier=dotnet-web-app
-Environment=ASPNETCORE_ENVIRONMENT=Production
-
-[Install]
-WantedBy=multi-user.target
-
-Manage Service:
-# Create and manage the service
-sudo nano /etc/systemd/system/api.service
-sudo systemctl enable api.service    # Enable the service
-sudo systemctl start api.service     # Start the service
-sudo systemctl status api.service    # Check the status of the service
-
-Port Specification:
-Specifying the port is considered best practice. If hosting only one API, it's not necessary, but for multiple APIs, define the ports in your service file.
-",NGINX
-"I'm using Docker Hub's official nginx image:
-https://hub.docker.com/_/nginx/
-The user of nginx (as defined in /etc/nginx/nginx.conf) is nginx. Is there a way to make nginx run as www-data without having to extend the docker image? The reason for this is, I have a shared volume, that is used by multiple containers - php-fpm that I'm running as www-data and nginx. The owner of the files/directories in the shared volume is www-data:www-data and nginx has trouble accessing that - errors similar to *1 stat() ""/app/frontend/web/"" failed (13: Permission denied)
-I have a docker-compose.yml and run all my containers, including the nginx one with docker-compose up.
-  ...
-  nginx:
-    image: nginx:latest
-    ports:
-      - ""80:80""
-    volumes:
-      - ./:/app
-      - ./vhost.conf:/etc/nginx/conf.d/vhost.conf
-    links:
-      - fpm
-
-  ...
-
-","1. FYI
-
-It is a problem of the php-fpm image
-It is not about usernames, it is about the www-data user ID
-
-What to do
-Fix your php-fpm container and don't break the good nginx container.
-Solutions
-
-Here is my post with a solution for docker-compose (nginx +
-php-fpm(alpine)): https://stackoverflow.com/a/36130772/1032085
-Here is my post with a solution for the php-fpm(debian) container:
-https://stackoverflow.com/a/36642679/1032085
-Solution for Official php-fpm image. Create Dockerfile:
-FROM php:5.6-fpm
-RUN usermod -u 1000 www-data
-
-
-
-2. I know the OP asked for a solution that doesn't extend the nginx image, but I've landed here without that constraint. So I've made this Dockerfile to run nginx as www-data:www-data (33:33) :
-FROM nginx:1.17
-
-# Customization of the nginx user and group ids in the image. It's 101:101 in
-# the base image. Here we use 33 which is the user id and group id for www-data
-# on Ubuntu, Debian, etc.
-ARG nginx_uid=33
-ARG nginx_gid=33
-
-# The worker processes in the nginx image run as the user nginx with group
-# nginx. This is where we override their respective uid and guid to something
-# else that lines up better with file permissions.
-# The -o switch allows reusing an existing user id
-RUN usermod -u $nginx_uid -o nginx && groupmod -g $nginx_gid -o nginx
-
-It accepts a uid and gid on the command line during image build. To make an nginx image that runs as your current user id and group id for example:
-docker build --build-arg nginx_uid=$(id -u) nginx_uid=$(id -g) .
-
-The nginx user and group ids are currently hardcoded to 101:101 in the image.
-
-3. Another option is to grab the source from https://github.com/nginxinc/docker-nginx and change the Dockerfile to support build args. For example, change the stable buster release Dockerfile (https://github.com/nginxinc/docker-nginx/blob/master/stable/buster/Dockerfile) to have the nginx user/group uid/gid set as build args:
-FROM debian:buster-slim
-
-LABEL maintainer=""NGINX Docker Maintainers <docker-maint@nginx.com>""
-
-ENV NGINX_VERSION   1.18.0
-ENV NJS_VERSION     0.4.3
-ENV PKG_RELEASE     1~buster
-
-#Change NGNIX guid/uid#
-ARG nginx_guid=101
-ARG nginx_uid=101
-
-RUN set -x \
-# create nginx user/group first, to be consistent throughout docker variants
-    && addgroup --system --gid $nginx_guid nginx \
-    && adduser --system --disabled-login --ingroup nginx --no-create-home --home /nonexistent --gecos ""nginx user"" --shell /bin/false --uid $nginx_uid nginx \
-
-This way is safer than just doing usermod, since if something is done in other places, like
-chown nginx:nginx, it will use the GUID/UID that was set.
-",NGINX
-"I am trying to do a permanent redirect from a subpath location on a original server (https://my-server1.com/api/old/subpath) to the root of an external host while keeping the subpath (https://my-server2.com/subpath), the HTTP method (generally the request are POST requests) and the headers (the headers include authentication data).
-Following several similar issues on SO, I have been trying the following:
-1.
-location ^/api/old(.*)$ {
-    rewrite $scheme://my-server2.com$1 permanent;
-}
-
-...but I get a 404 when doing the POST request
-2.
-location /api/old {
-    rewrite ^/api/old(.*)$ $scheme://my-server2.com$1 permanent;
-}
-
-...but I get a 405 when doing the POST request, I believe because the POST request is changed into a GET
-3.
-location ~ ^/api/old(.*)$ {
-    return 308 $scheme://my-server2.com$1;
-}
-
-...but I get a 403 when doing the POST request, I believe the method is ok, but the auth headers is not forwarded
-4.
-location ~ ^/api/old(.*)$ {
-    proxy_pass $scheme://my-server2.com$1;
-}
-
-...but I get a 502 (no idea why exactly).
-Which one is the right method to do so, and what am I missing here?
-","1. I found how to have the method #4 working and why it produced a 502. It needed a DNS resolver to work. One can use OpenDNS (208.67.222.222) as the DNS resolver, for example.
-The directive would become:
-location ~ ^/api/spp(.*)$ {
-    proxy_pass    $scheme://spp.etalab.gouv.fr$1;
-    resolver      208.67.222.222;
-}
-
-This is not ideal though, as this is 100% transparent, and I still wonder if there is a way to inform the client of the permanent redirect at the same time.
-",NGINX
-"I try to install OpenResty 1.13.6.1 under CentOS 7. When I try to run openresty I get this error:
-
-[root@flo ~]# openresty -s reload
-nginx: [error] open() ""/usr/local/openresty/nginx/logs/nginx.pid"" failed (2: No such file or directory)
-
-When I look at my logs, I only have 2 files:
-
-[root@flo ~]# ll /usr/local/openresty/nginx/logs/
-total 8
--rw-r--r--. 1 root root    0  1 mars  12:24 access.log
--rw-r--r--. 1 root root 4875  1 mars  16:03 error.log
-
-I do not see how to find a solution.
-///////////////////UPDATE//////////////////
-I tried to do this to follow the instructions on this page: https://openresty.org/en/getting-started.html
-
-[root@flo ~]# PATH=/usr/local/openresty/nginx/sbin:$PATH
-[root@flo ~]# export PATH
-[root@flo ~]# nginx -p pwd/ -c conf/nginx.conf
-
-And I have this error :
-
-nginx: [alert] could not open error log file: open() ""/root/logs/error.log"" failed (2: No such file or directory)
-2018/03/02 09:02:55 [emerg] 30824#0: open() ""/root/conf/nginx.conf"" failed (2: No such file or directory)
-
-/////////////////UPDATE2//////////////:
-[root@nexus-chat1 ~]# cd /root/
-[root@nexus-chat1 ~]# ll
-total 4
--rw-------. 1 root root 1512  1 mars  11:05 anaconda-ks.cfg
-drwxr-xr-x. 3 root root   65  1 mars  11:36 openresty_compilation
-
-Where do I need to create these folders ?
-mkdir ~/work
-cd ~/work
-mkdir logs/ conf/
-
-In /usr/local/openresty/ ?
-","1. Very likely nginx cannot open a log file because folder doesn't exists or permission issue.
-You can see the reason within error.log file
-
-2. openresty -s reload is used to tell nginx to reload the currently running instance. That's why it's complaining about the missing pid file.
-Anyway, that's not the correct way to start openresty. Have a look at https://openresty.org/en/getting-started.html for instructions on how to get started.
-
-3. I was using an nginx.conf file from a standard nginx install, and it was setting the PID file location incorrectly.
-pid /run/nginx.pid;
-
-I changed it to /usr/local/openresty/nginx/logs/nginx.pid and it worked.
-I installed openresty on CentOS using yum install, and started the service with sudo service openresty start. https://openresty.org/en/linux-packages.html#centos
-",OpenResty
-"I have an nginx/openresty client to a keycloak server for authorization using openid.
-I am using lua-resty-openidc to allow access to services behind the proxy.
-The user can access his profile at
-https://<my-server>/auth/realms/<real-name>/account
-and logout through
-https://<my-server>/auth/realms/<real-name>/protocol/openid-connect/logout
-The problem is that, even after logout, the user can still access the services behind the server; basically it seems the token he gets from Keycloak is still valid or something. This is also a behaviour that has been observed by other users, see for example the comments from ch271828n on this question on how to logout from Keycloak.
-How can I ensure that after logout the user will no longer be able to get access until he logs in anew?
-","1. I had to check the lua source code, but I think I have figured the logout behaviour out: Lua-resty-openidc establishes sessions, and they are terminated when a specific url access is detected (it is controlled by opts.logout_path which we will need to be set to an address in the path of service, e.g. .../service/logout)
-In essence, there are two urls that need to be hit, one for keycloak logout, and one for openresty session logout. Accessing the keycloak logout url https://<keycloak-server>/auth/realms/<my-realm>/protocol/openid-connect/logout is done by lua after we access the opts.logout_path at https://<our-nginx-server>/service/logout
-So after setting up everything correctly, all we have to do to logout is hit https://<our-nginx-server>/service/logout. This will destroy the session and log us out.
-I think we need to set opts.revoke_tokens_on_logout to true. Also note that from my experiments, for some reason, setting up a redirect_after_logout_uri may result in the user not signing out due to redirections.
-In order to redirect to e.g. foo.bar after logout we can do the ?redirect_uri=https://foo.bar/ part. We can also redirect back to our service page, in which case it will ask for authentication anew...
-Here is an example of what we need to have for nginx.conf to make this work....
-location /myservice/ {
-
-    access_by_lua_block {
-        local opts = {
-            redirect_uri_path = ""/myservice/auth"",
-            discovery = ""https://<keycloak-server>/auth/realms/<my-realm>/.well-known/openid-configuration"",
-            client_id = ""<my-client-id>"",
-            client_secret = ""<the-clients-secret>"",
-            logout_path = ""/service/logout"",
-            revoke_tokens_on_logout = true,
-            redirect_after_logout_uri = ""https://<keycloak-server>/auth/realms/<my-realm>/protocol/openid-connect/logout?redirect_uri=https://foo.bar/"",
-            session_contents = {id_token=true} -- this is essential for safari!
-        }
-        -- call introspect for OAuth 2.0 Bearer Access Token validation
-        local res, err = require(""resty.openidc"").authenticate(opts)
-
-        if err then
-            ngx.status = 403
-            ngx.say(err)
-            ngx.exit(ngx.HTTP_FORBIDDEN)
-        end
-    }
-
-    # I disabled caching so the browser won't cache the site.
-    expires           0;
-    add_header        Cache-Control private;
-
-    proxy_pass http://my-service-server.cloud:port/some/path/;
-    proxy_set_header Host $http_host;
-
-    proxy_http_version 1.1;
-    proxy_redirect off;
-    proxy_buffering off;
-    proxy_set_header Upgrade $http_upgrade;
-    proxy_set_header Connection ""upgrade"";
-}
-
-",OpenResty
-"I have this location block:
-location /somewhere/ {
-    access_by_lua_file /path/to/file.lua;
-    add_header X-debug-message ""Using location: /somewhere/"" always;
-    ...
-}
-
-and I would like to do two things:
-
-I would like to move the header setting line into the lua file.
-For that I have to read the location definition (matching string/regex/...) from a variable rather than typing it in as a static string (""/somewhere/"").
-
-So in the end it should just look like this with magic in the lua file (I know how to set a response header in lua).
-location /somewhere/ {
-    access_by_lua_file /path/to/file.lua;
-    ...
-}
-
-My problem: I have no clue...
-
-...if there is a variable storing the location description (""/somewhere/"") - and not the requested URI/URL/PATH and
-if so - where that variable can be found.
-
-So how do I access this information from within lua code?
-Example
-Called URL: https://mydomain.nowhere/somewhere/servicex/1234
-Location that matches: ""location /somewhere/ { ...""
-String I want to get: ""/somewhere/"" (so exactly the definition of the location block).
-","1. Nginx/Openresty is rooted at the nginx folder. So lua file name is with respect to that folder
-",OpenResty
-"I am trying to delete keys from redis sentinel.
-I keep seeing this error :
-Command 'UNLINK' is not a registered Redis command.
-","1. UNLINK was introduced with Redis 4.0. I can think of two ways you would get this error:
-
-You are using a version of Redis from before 4.0. This means you are on a very old version of Redis and you should consider upgrading.
-You are using a non-standard version of Redis such as a version ported to Windows or perhaps ""Redis-compatible"" software that some vendors provide.
-
-If you can't change the Redis version you are using, you can use the DEL command instead which was part of Redis 1.0.
-Hope this helps and best of luck!
-",Sentinel
-"I have C++ code written around March 2023. It compiled fine back then. Now, I have to make a few changes to it, but it doesn't compile anymore. The used language standard was C++20. I don't remember what version of VS was used then. That was definitely updated to the latest one.
-Now I get ""error C3889: call to object of class type 'std::ranges::_Find_if_fn': no matching call operator found"" error at calling
-ranges::find_if(network_list, [](auto const& item) { return item.IsConnected(); })
-
-then it says something about ""'std::ranges::borrowed_iterator_t': the associated constraints are not satisfied"" and then ""the concept 'std::ranges::range<IterableAsyncNetworkMonitor::NetworkInfo&>' evaluated to false"". It's not clear what's going on.
-My best guess is that find_if became stricter but I don't understand how exactly. network_list is an instance of a custom implementation of an iterable with the begin and end methods. end method returns a sentinel class. I guess that the problem is related to this because if I change the call to
-ranges::find_if(network_list.begin(), network_list.end(), [](auto const& item) { return item.IsConnected(); })
-
-I get a different error that somehow makes sense. It says that ""the concept 'std::sentinel_for<IteratorAsyncSentinel,IteratorAsync>' evaluated to false"" then ""the concept 'std::_Weakly_equality_comparable_with<IteratorAsyncSentinel,IteratorAsync>' evaluated to false"" and then ""the concept 'std::_Half_equality_comparable<IteratorAsyncSentinel,IteratorAsync>' evaluated to false"".
-Probably my implementation of that iterable and the usage of a sentinel class do not follow particular rules, but I cannot find an example of how they should be implemented for functions like find_if to work. Can anybody show a correct example?
-UPDATE:
-Answering the question about network_list type. I thought it was enough to state that it is a custom implementation of iterable. Here it is
-template<class T>
-class IterableAsync final {
-   friend class YieldPromise<T>;
-   friend class IteratorAsync<T>;
-
-   using handle_type = std::coroutine_handle<YieldPromise<T>>;
-
-private:
-   handle_type m_objHandle;
-
-public:
-   // promise_type is expected by STL to be publicly declared
-   using promise_type = YieldPromise<T>;
-
-private:
-   explicit IterableAsync(handle_type objHandle) : m_objHandle(std::move(objHandle)) {
-   }
-
-public:
-   IterableAsync(IterableAsync const& other) = delete;
-
-   IterableAsync(IterableAsync&& other) noexcept :
-      m_objHandle(other.m_objHandle) {
-      other.m_objHandle = nullptr;
-   }
-
-   ~IterableAsync() {
-      if(m_objHandle) {
-         m_objHandle.destroy();
-      }
-   }
-
-public:
-   IteratorAsync<T> begin() {
-      return IteratorAsync<T>(*this, false);
-   }
-
-   // ReSharper disable once CppMemberFunctionMayBeStatic
-   IteratorAsyncSentinel end() {
-      return IteratorAsyncSentinel();
-   }
-
-public:
-   IterableAsync& operator =(IterableAsync const& other) = delete;
-
-   IterableAsync& operator =(IterableAsync&& other) noexcept {
-      m_objHandle = other.m_objHandle;
-      other.m_objHandle = nullptr;
-      return *this;
-   }
-};
-
-UPDATE 2:
-I kind of hoped that somebody would provide a link to a standard way of properly writing an iterable/iterator/sentinel, or post such an example here, rather than analyzing the misses in my code, which might be more difficult; but if that is easier, here are the other requested parts:
- class IteratorAsyncSentinel final {};
-
-template<class T>
-class IterableAsync;
-
-template<class T>
-class IteratorAsync final {
-   friend class IterableAsync<T>;
-
-public:
-   // difference_type and value_type are expected by ranges
-   using difference_type = ptrdiff_t;
-   using value_type = T;
-
-private:
-   std::reference_wrapper<IterableAsync<T>> m_objIterable;
-   bool m_bIsDone;
-
-private:
-   IteratorAsync(IterableAsync<T>& objIterable, bool bIsDone) :
-      m_objIterable(objIterable),
-      m_bIsDone(bIsDone) {
-      if(!m_bIsDone) {
-         Advance();
-      }
-   }
-
-public:
-   IteratorAsync() :
-      m_objIterable(*reinterpret_cast<IterableAsync<T>*>(nullptr)),
-      m_bIsDone(false) {
-   }
-   IteratorAsync(IteratorAsync const& other) = default;
-   IteratorAsync(IteratorAsync&& other) noexcept = default;
-   ~IteratorAsync() = default;
-
-public:
-   bool operator !=(IteratorAsyncSentinel const&) const {
-      return !m_bIsDone;
-   }
-
-   bool operator ==(IteratorAsyncSentinel const&) const {
-      return m_bIsDone;
-   }
-
-   IteratorAsync& operator ++() {
-      Advance();
-      return *this;
-   }
-
-   IteratorAsync operator ++(int) {
-      IteratorAsync objPrevIterator = *this;
-      Advance();
-      return objPrevIterator;
-   }
-
-   T const& operator *() const {
-      return m_objIterable.get().m_objHandle.promise().Current();
-   }
-
-private:
-   void Advance() {
-      m_objIterable.get().m_objHandle.resume();
-      m_bIsDone = m_objIterable.get().m_objHandle.done();
-   }
-
-public:
-   IteratorAsync& operator =(IteratorAsync const& other) = default;
-   IteratorAsync& operator =(IteratorAsync&& other) noexcept = default;
-};
-
-","1. Thank you for the hint. It turned out that sentinel type implementation now requires equality operator to be implemented. It is still a mystery why, as I mentioned a year ago it didn't require it. For me, sentinel is just a ""marker"" of the end of the iteration. Equality operator is definitely needed on the iterator but not on the sentinel and the need of it on the sentinel complicates things. As long as my original problem is solved, I'm posting this answer but if somebody can shed a light on these new requirements and especially on if that requirement can be removed, that'll be great.
-",Sentinel
-"After that mouthful of a title here comes my snag:
-I have a Jenkins system based on JaC. Using Gradle-Dropwizard and Skipper to manage job creation, pipelines etc.
-I'm trying to implement the Jenkins Notifications plugin with it but i can't get it to work. Tried the official site, the guides(usual and free style job) and the few related questions here but nothing works.
-I know it needs to be added under publishers {} but node(){} nor steps(){} work.
-it always fails in the DSL creation script under a variation of this:
-No signature of method: javaposse.jobdsl.dsl.jobs.FreeStyleJob.stage() is applicable for argument types: (java.lang.String, script$_run_closure1$_closure2) values: [notify, script$_run_closure1$_closure2@9d55a72]
-Possible solutions: wait(), getName(), label(), any(), using(java.lang.String), label(java.lang.String)
-
-Has anyone got a clue what to do?
-","1. You can access the full DSL documentation on your own Jenkins server at the following link:
-<JENKINS_URL>/plugin/job-dsl/api-viewer/index.html
-In the documentation you can search for slack and see all the available configuration options.
-Assuming you are using the Slack Notification Plugin, your configuration can look something like the following:
-freeStyleJob('Slack Notifer') {
-    // All other configuration
-    publishers{
-         slackNotifier {
-             notifySuccess(true)
-             customMessage(""My Message"")
-         }
-    }
-} 
-
-This is the full documentation for the slackNotifier:
-slackNotifier {
-     commitInfoChoice(String value)
-
-     // Basedir of the fileset is Fileset ‘includes’ the workspace root.
-     artifactIncludes(String value)
-
-     // The slack token to be used to send notifications to Slack.
-     authToken(String value)
-
-     // Your Slack-compatible-chat's (e.g.
-     baseUrl(String value)
-
-     // Bot user option indicates the token belongs to a custom Slack app bot user in Slack.
-
-     botUser(boolean value)
-     // Enter a custom message that will be included with the notifications.
-
-     customMessage(String value)
-     customMessageAborted(String value)
-     customMessageFailure(String value)
-     customMessageNotBuilt(String value)
-     customMessageSuccess(String value)
-     customMessageUnstable(String value)
-
-     // Choose a custom emoji to use as the bot's icon in Slack, requires using a bot user, e.g.
-     iconEmoji(String value)
-
-     includeCustomMessage(boolean value)
-     includeFailedTests(boolean value)
-     includeTestSummary(boolean value)
-     matrixTriggerMode(String value)
-
-     notifyAborted(boolean value)
-     notifyBackToNormal(boolean value)
-     notifyEveryFailure(boolean value)
-     notifyFailure(boolean value)
-     notifyNotBuilt(boolean value)
-     notifyRegression(boolean value)
-     notifyRepeatedFailure(boolean value)
-     notifySuccess(boolean value)
-     notifyUnstable(boolean value)
-
-     // Enter the channel names or user ids to which notifications should be sent.
-     room(String value)
-
-     sendAs(String value)
-
-     // Send message as text as opposed to an attachment.
-     sendAsText(boolean value)
-
-     slackUserIdResolver {}
-     startNotification(boolean value)
-
-     // Your team's workspace name.
-     teamDomain(String value)
-
-     // Token to use to interact with slack.
-     tokenCredentialId(String value)
-
-     uploadFiles(boolean value)
-
-     // Choose a custom username to use as the bot's name in Slack, requires using a bot user
-     username(String value)
-}
-
-",Skipper
-"I have a data file format which includes
-
-/* comments */
-/* nested /* comments */ too */ and
-// c++ style single-line comments..
-
-As usual, these comments can occur everywhere in the input file where normal white space is allowed.
-Hence, rather than pollute the grammar proper with pervasive comment-handling, I have made a skipper parser which will handle white space and the various comments.
-So far so good, and I am able to parse all my test cases.
-In my use case, however, any of the parsed values (double, string, variable, list, ...) must carry the comments preceding it as an attribute, if one or more comments are present. That is, my AST node for double should be
-struct Double {
-   double value;
-   std::string comment;
-};
-
-and so forth for all the values I have in the grammar.
-Hence I wonder if it is possible somehow to ""store"" the collected comments in the skipper parser, and then have them available for building the AST nodes in the normal grammar?
-The skipper which processes comments:
-template<typename Iterator>
-struct SkipperRules : qi::grammar<Iterator> {
-    SkipperRules() : SkipperRules::base_type(skipper) {
-        single_line_comment = lit(""//"") >> *(char_ - eol) >> (eol | eoi);
-        block_comment = ((string(""/*"") >> *(block_comment | char_ - ""*/"")) >> string(""*/""));
-        skipper = space | single_line_comment | block_comment;
-    }
-    qi::rule<Iterator> skipper;
-    qi::rule<Iterator, std::string()> block_comment;
-    qi::rule<Iterator, std::string()> single_line_comment;
-};
-
-I can store the commments using a global variable and semantic actions in the skipper rule, but that seems wrong and probably won't play well in general with parser backtracking. What's a good way to store the comments so they are later retrievable in the main grammar?
-","1. 
-I can store the commments using a global variable and semantic actions in the skipper rule, but that seems wrong and probably won't play well in general with parser backtracking.
-
-Good thinking. See Boost Spirit: ""Semantic actions are evil""?. Also, in your case it would unnecessarily complicate the correlation of source location with the comment.
-
-can I collect attributes from my skipper parser?
-
-You cannot. Skippers are implicitly qi::omit[] (like the separator in the Kleene-% list, by the way).
-
-In my use case, however, any of the parsed values (double, string,
-variable, list, ...) must carry the comments preceding it as an
-attribute, if one or more comments are present. That is, my AST node
-for double should be
-struct Double {
-   double value;
-   std::string comment;
-};
-
-
-There you have it: your comments are not comments. You need them in your AST, so you need them in the grammar.
-Ideas
-I have several ideas here.
-
-You could simply not use the skipper to soup up the comments, which, like you mention, is going to be cumbersome/noisy in the grammar.
-
-You could temporarily override the skipper to just be qi::space at the point where the comments are required. Something like
-value_ = qi::skip(qi::space) [ comment_ >> (string_|qi::double_|qi::int_)  ];
-
-Or given your AST, maybe a bit more verbose
-value_ = qi::skip(qi::space) [ comment_ >> (string_|double_|int_) ];
-string_ = comment_ >> lexeme['""' >> *('\\' >> qi::char_ | ~qi::char_('""')) >> '""'];
-double_ = comment_ >> qi::real_parser<double, qi::strict_real_policies<double> >{};
-int_    = comment_ >> qi::int_;
-
-Notes:
-
-in this case make sure the double_, string_ and int_ are declared with qi::space_type as the skipper (see Boost spirit skipper issues); a minimal compilable sketch of these declarations is shown at the end of this answer
-the comment_ rule is assumed to expose a std::string() attribute. This is fine if used in the skipper context as well, because the actual attribute will be bound to qi::unused_type which compiles down to no-ops for attribute propagation.
-As a subtler side note I made sure to use strict real policies in the second snippet so that the double-branch won't eat integers as well.
-
-
-A fancy solution might be to store the souped-up comment(s) into a ""parser state"" (e.g. a member variable) and then use on_success handlers to transfer that value into the rule attribute on demand (and optionally flush comments on certain rule completions).
-
-I have some examples of what can be achieved using on_success for inspiration: https://stackoverflow.com/search?q=user%3A85371+on_success+qi. (Specifically look at the way position information is being added to AST nodes. There's a subtle play with fusion-adapted struct vs. members that are being set outside the control of autmatic attribute propagation. A particularly nice method is to use a base-class that can be generically ""detected"" so AST nodes deriving from that base magically get the contextual comments added without code duplication)
-
-Effectively this is a hybrid: yes you use semantic actions to ""side-channel"" the comment values. However, it's less unwieldy because now you can deterministically ""harvest"" those values in the on-success handler. If you don't prematurely reset the comments, it should even generically work well under backtracking.
-A gripe with this is that it will be slightly less transparent to reason about the mechanics of ""magic comments"". However, it does sit well for two reasons:
-- ""magic comments"" are a semantic hack whichever way you look at it, so it matches the grammar semantics in the code
-- it does succeed at removing comment noise from productions, which is effectively what the comments were from in the first place: they were embellishing the semantics without complicating the language grammar.
-
-
-
-
-I think option 2. is the ""straight-forward"" approach that you might not have realized. Option 3. is the fancy approach, in case you want to enjoy the greater genericity/flexibility. E.g. what will you do with
-  /*obsolete*/ /*deprecated*/ 5.12e7
-
-Or, what about
-  bla = /*this is*/ 42 /*also relevant*/;
-
-These would be easier to deal with correctly in the 'fancy' case.
-So, if you want to avoid complexity, I suggest option 2. If you need the flexibility, I suggest option 3.
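-As a concrete follow-up to the skipper-declaration note under option 2, below is a minimal compilable sketch of that idea. It is only an illustration under assumptions: Double is a node like the one in the question, the comment is made mandatory to keep the toy grammar short, and the remaining names are invented:
-#include <boost/spirit/include/qi.hpp>
-#include <boost/fusion/include/adapt_struct.hpp>
-#include <iostream>
-#include <string>
-
-namespace qi = boost::spirit::qi;
-
-struct Double {            // same node shape as in the question
-    double      value;
-    std::string comment;
-};
-// Adapt in parse order (comment first), not declaration order:
-BOOST_FUSION_ADAPT_STRUCT(Double, comment, value)
-
-int main() {
-    std::string const input = ""/* the answer */ 42.5"";
-    using It = std::string::const_iterator;
-
-    // comment_ exposes std::string() and declares no skipper of its own
-    qi::rule<It, std::string()> comment_
-        = qi::lit(""/*"") >> *(qi::char_ - ""*/"") >> ""*/"";
-
-    // double_ is phrase-level, so it declares qi::space_type as its skipper
-    qi::rule<It, Double(), qi::space_type> double_ = comment_ >> qi::double_;
-
-    Double d;
-    It first = input.begin(), last = input.end();
-    bool ok = qi::phrase_parse(first, last, double_, qi::space, d);
-    std::cout << ok << "" value="" << d.value
-              << "" comment='"" << d.comment << ""'\n"";
-}
-Extending this to the real grammar mostly means giving string_, double_ and int_ the same qi::space_type declaration and wrapping the top-level alternative in qi::skip(qi::space)[...], exactly as sketched above.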
-",Skipper
-"I am uploading files with skipper, everything it's working perfectly, but I have a problem with the option saveAs I am assigning it's value by means of a function but it doesn't work, how can I assign the value of req.param('titulo') + file extension to the option saveAs?
-var path = require('path');
-
-module.exports = {
-
-'save':function(req,res,next){
-
-    var uploadOptions = {
-        dirname: sails.config.appPath + '/assets/books',
-        saveAs: function(file){
-            return req.param('titulo')+path.extname(file.filename);
-        },
-        maxBytes: 20 * 1000 * 1000
-    }
-
-    req.file('archivoPath').upload(uploadOptions,function(err,files){
-        if(err){
-            return res.serverError(err);
-        }
-        else{
-            console.log(files);
-        }
-    });
-
-    Book.create(req.params.all(),function bookCreated(err,book,next){
-        if(err) {
-            console.log(err);
-        }
-        return res.redirect('/book/books');
-    });
-}
-
-};
-
-I would also really like to know whether the assets folder is a good place to upload a PDF file so I can show it in my front end. Thank you.
-","1. I solved the problem by replacing the saveAs function:
-saveAs: function(file){
-    return req.param('titulo') + path.extname (file.filename);
-},
-
-with the following:
-saveAs: function (__newFileStream, cb) {
-    cb(null, req.param('titulo') + path.extname(__newFileStream.filename));
-},
-
-",Skipper
-"I started to develop a applciation with spring.
-We are checking for an api gateway, we will surely go with 3scale.
-I'm checking to do Request aggreation but I don't find how to do it with 3scale
-","1. If you mean aggregating the responses coming from different backends, that at the moment is not possible with only 3scale gateway.
-3scale adopts an approach which is of separation of concerns, so only Header content is manipulated, to keep the gateway performant.
-My suggestion would be to add a simple camel integration to handle this transformation.
-",3Scale
-"I am trying to get the Vec<u8> or String (or more ideally a Blob ObjectURL) of a file uploaded as triggered by a button click.
-I am guessing this will require an invisible <input> somewhere in the DOM but I can't figure out how to leverage web_sys and/or gloo to either get the contents nor a Blob ObjectURL.
-","1. A js-triggered input probably won't do the trick, as many browsers won't let you trigger a file input from JS, for good reasons. You can use labels to hid the input if you think it is ugly. Other than that, you need to wiggle yourself through the files api of HtmlInputElement. Pretty painful, that:
-use js_sys::{Object, Reflect, Uint8Array};
-use wasm_bindgen::{prelude::*, JsCast};
-use wasm_bindgen_futures::JsFuture;
-use web_sys::*;
-
-#[wasm_bindgen(start)]
-pub fn init() {
-    // Just some setup for the example
-    std::panic::set_hook(Box::new(console_error_panic_hook::hook));
-    let window = window().unwrap();
-    let document = window.document().unwrap();
-    let body = document.body().unwrap();
-    while let Some(child) = body.first_child() {
-        body.remove_child(&child).unwrap();
-    }
-    // Create the actual input element
-    let input = document
-        .create_element(""input"")
-        .expect_throw(""Create input"")
-        .dyn_into::<HtmlInputElement>()
-        .unwrap();
-    input
-        .set_attribute(""type"", ""file"")
-        .expect_throw(""Set input type file"");
-
-    let recv_file = {
-        let input = input.clone();
-        Closure::<dyn FnMut()>::wrap(Box::new(move || {
-            let input = input.clone();
-            wasm_bindgen_futures::spawn_local(async move {
-                file_callback(input.files()).await;
-            })
-        }))
-    };
-    input
-        .add_event_listener_with_callback(""change"", recv_file.as_ref().dyn_ref().unwrap())
-        .expect_throw(""Listen for file upload"");
-    recv_file.forget(); // TODO: this leaks. I forgot how to get around that.
-    body.append_child(&input).unwrap();
-}
-
-async fn file_callback(files: Option<FileList>) {
-    let files = match files {
-        Some(files) => files,
-        None => return,
-    };
-    for i in 0..files.length() {
-        let file = match files.item(i) {
-            Some(file) => file,
-            None => continue,
-        };
-        console::log_2(&""File:"".into(), &file.name().into());
-        let reader = file
-            .stream()
-            .get_reader()
-            .dyn_into::<ReadableStreamDefaultReader>()
-            .expect_throw(""Reader is reader"");
-        let mut data = Vec::new();
-        loop {
-            let chunk = JsFuture::from(reader.read())
-                .await
-                .expect_throw(""Read"")
-                .dyn_into::<Object>()
-                .unwrap();
-            // ReadableStreamReadResult is somehow wrong. So go by hand. Might be a web-sys bug.
-            let done = Reflect::get(&chunk, &""done"".into()).expect_throw(""Get done"");
-            if done.is_truthy() {
-                break;
-            }
-            let chunk = Reflect::get(&chunk, &""value"".into())
-                .expect_throw(""Get chunk"")
-                .dyn_into::<Uint8Array>()
-                .expect_throw(""bytes are bytes"");
-            let data_len = data.len();
-            data.resize(data_len + chunk.length() as usize, 255);
-            chunk.copy_to(&mut data[data_len..]);
-        }
-        console::log_2(
-            &""Got data"".into(),
-            &String::from_utf8_lossy(&data).into_owned().into(),
-        );
-    }
-}
-
-(If you've got questions about the code, ask. But it's too much to explain it in detail.)
-And extra, the features you need on web-sys for this to work:
-[dependencies.web-sys]
-version = ""0.3.60""
-features = [""Window"", ""Navigator"", ""console"", ""Document"", ""HtmlInputElement"", ""Event"", ""EventTarget"", ""FileList"", ""File"", ""Blob"", ""ReadableStream"", ""ReadableStreamDefaultReader"", ""ReadableStreamReadResult""]
-
-
-If you're using gloo with the futures feature enabled, the second function can be implemented much more neatly:
-async fn file_callback(files: Option<FileList>) {
-    let files = gloo::file::FileList::from(files.expect_throw(""empty files""));
-    for file in files.iter() {
-        console_dbg!(""File:"", file.name());
-        let data = gloo::file::futures::read_as_bytes(file)
-            .await
-            .expect_throw(""read file"");
-        console_dbg!(""Got data"", String::from_utf8_lossy(&data));
-    }
-}
-
-
-2. Thanks to Caesar I ended up with this code for use with dominator as the Dom crate.
-pub fn upload_file_input(mimes: &str, mutable: Mutable<Vec<u8>>) -> Dom {
-    input(|i| {
-        i.class(""file-input"")
-            .prop(""type"", ""file"")
-            .prop(""accept"", mimes)
-            .apply(|el| {
-                let element: HtmlInputElement = el.__internal_element();
-
-                let recv_file = {
-                    let input = element.clone();
-                    Closure::<dyn FnMut()>::wrap(Box::new(move || {
-                        let input = input.clone();
-                        let mutable = mutable.clone();
-                        wasm_bindgen_futures::spawn_local(async move {
-                            file_callback(input.files(), mutable.clone()).await;
-                        })
-                    }))
-                };
-
-                element
-                    .add_event_listener_with_callback(
-                        ""change"",
-                        recv_file.as_ref().dyn_ref().unwrap(),
-                    )
-                    .expect(""Listen for file upload"");
-                recv_file.forget();
-                el
-            })
-    })
-}
-
-async fn file_callback(files: Option<FileList>, mutable: Mutable<Vec<u8>>) {
-    let files = match files {
-        Some(files) => files,
-        None => return,
-    };
-    for i in 0..files.length() {
-        let file = match files.item(i) {
-            Some(file) => file,
-            None => continue,
-        };
-        // gloo::console::console_dbg!(""File:"", &file.name());
-        let reader = file
-            .stream()
-            .get_reader()
-            .dyn_into::<ReadableStreamDefaultReader>()
-            .expect(""Reader is reader"");
-        let mut data = Vec::new();
-        loop {
-            let chunk = JsFuture::from(reader.read())
-                .await
-                .expect(""Read"")
-                .dyn_into::<Object>()
-                .unwrap();
-            // ReadableStreamReadResult is somehow wrong. So go by hand. Might be a web-sys bug.
-            let done = Reflect::get(&chunk, &""done"".into()).expect(""Get done"");
-            if done.is_truthy() {
-                break;
-            }
-            let chunk = Reflect::get(&chunk, &""value"".into())
-                .expect(""Get chunk"")
-                .dyn_into::<Uint8Array>()
-                .expect(""bytes are bytes"");
-            let data_len = data.len();
-            data.resize(data_len + chunk.length() as usize, 255);
-            chunk.copy_to(&mut data[data_len..]);
-        }
-        mutable.set(data);
-        // gloo::console::console_dbg!(
-        //     ""Got data"",
-        //     &String::from_utf8_lossy(&data).into_owned(),
-        // );
-    }
-}
-
-",Gloo
-"Anyone worked with solo.io's glooctl command.  I was working on the hello world example, https://docs.solo.io/gloo-edge/latest/guides/traffic_management/hello_world/
-and everything went smoothly until the last step, testing the route rule:
-bash % curl $(glooctl proxy url)/all-pets
-which returns, ""Error: load balancer ingress not found on service gateway-proxy
-curl: (3) URL using bad/illegal format or missing URL""
-I tried putting what I thought was Gloo's ""proxy url"":
-bash% curl $(glooctl gloo-system-gateway-proxy-8080)/all-pets
-
-and bash%   curl $(gloo-system-gateway-proxy-8080)/all-pets
-Error: unknown command ""gloo-system-gateway-proxy-8080"" for ""glooctl""
-So it doesn't like logical commands like ""proxy url"" and it doesn't take the actual proxy url.
-Anyone fought this battle and won?
-TIA
-","1. I use minikube, the problem is that EXTERNAL-IP is in pending state.
-minikube tunnel solve the problem.
-glooctl proxy url
-Error: load balancer ingress not found on service gateway-proxy
-kubectl get svc -n gloo-system
-NAME            TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                               AGE
-gateway         ClusterIP      10.102.152.223   <none>        443/TCP                               49m
-gateway-proxy   LoadBalancer   10.102.171.136   <pending>     80:30439/TCP,443:32178/TCP            49m
-gloo            ClusterIP      10.97.145.90     <none>        9977/TCP,9976/TCP,9988/TCP,9979/TCP   49m
-
-
-https://makeoptim.com/en/service-mesh/kubernetes-external-ip-pending
-Kubernetes service external ip pending
-
-
-2. I believe the command is curl $(glooctl proxy url)/all-pets; what does glooctl proxy url return for you?
-
-3. The solution @northmorn provided works just fine, and the command on minikube is as simple as minikube tunnel. Thanks @Northmorn.
-",Gloo
-"i am following this tutorial https://medium.com/@far3ns/kong-oauth-2-0-plugin-38faf938a468 and when i request the tokens with
-Headers: Content-Type:application/json
-Host:api.ct.id
-Body:
-{
-“client_id”: “CLIENT_ID_11”,
-“client_secret”: “CLIENT_SECRET_11”,
-“grant_type”: “password”,
-“provision_key”: “kl3bUfe32WBcppmYFr1aZtXxzrBTL18l”,
-“authenticated_userid”: “oneone@gmail.com”,
-“scope”: “read”
-} 
-
-it returns 
-{
-  ""error_description"": ""Invalid client authentication"",
-  ""error"": ""invalid_client""
-}
-
-No matter what I tried, I couldn't fix it. Any idea how to make it work properly?
-","1. You need to create kong developer and it will give you client_id and client_secret_Id. Use those values in generating auth token.
-
-2. Here is the working c# code.
-Option 1
-public static string GetOAuthToken(string url, string clientId, string clientSecret, string scope = ""all"", string grantType = ""client_credentials"")
-        {
-            try
-            {
-                string token = """";
-                if (string.IsNullOrWhiteSpace(url)) throw new ArgumentException(""message"", nameof(url));
-                if (string.IsNullOrWhiteSpace(clientId)) throw new ArgumentNullException(""message"", nameof(clientId));
-                if (string.IsNullOrWhiteSpace(clientSecret)) throw new ArgumentNullException(""message"", nameof(clientSecret));
-
-                var oAuthClient = new RestClient(new Uri(url));
-                var request = new RestRequest(""Authenticate"", Method.POST);
-
-                request.AddHeader(""Content-Type"", ""application/json"");
-
-                var credentials = new
-                {
-                    grant_type = grantType,
-                    scope = scope,
-                    client_id = clientId,
-                    client_secret = clientSecret
-                };
-
-                request.AddJsonBody(credentials);
-
-                var response = oAuthClient?.Execute(request);
-                var content = response?.Content;
-
-                if (string.IsNullOrWhiteSpace(content)) throw new ArgumentNullException(""message"", nameof(clientSecret));
-                token = content?.Trim('""');
-
-                return token;
-            }
-            catch (Exception ex)
-            {
-                throw new Exception(ex.Message,ex);
-            }
-        }
-
-Option 2
-var httpClient = new HttpClient();
-var creds = $""client_id={client_id}&client_secret={client_secret}&grant_type=client_credentials"";
-httpClient.DefaultRequestHeaders.Accept.Clear();
-httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue(""application/x-www-form-urlencoded""));
-var content = new StringContent(creds, Encoding.UTF8, ""application/x-www-form-urlencoded"");
-var response = httpClient.PostAsync(""https://myorg/oauth/oauth2/cached/token"", content).Result;
-var OAuthBearerToken = response.Content.ReadAsStringAsync().Result;
-
-
-3. That error occurs because the client_id and/or client_secret values in your body are missing or wrong.
-You can check your OAuth2 consumer in your Konga dashboard or via the Kong Admin API,
-for example:
-curl --location 'http://[your_kong_admin_host]/consumers/[your_consumers_name]/oauth2'
-
-Then you will get the response; for example, in my case:
-{
-""data"": [
-    {
-        ""hash_secret"": false,
-        ""redirect_uris"": [
-            ""http://ganang_ganteng.mpos""
-        ],
-        ""name"": ""brilink-mpos-dev-oauth2"",
-        ""client_type"": ""confidential"",
-        ""consumer"": {
-            ""id"": ""be463ef0-97a8-4381-8842-d609ac7e019a""
-        },
-        ""created_at"": 1710753469,
-        ""tags"": null,
-        ""client_id"": ""aBCWQ5ZYx351rCcNmdc52PGbcpFJ55f5"",
-        ""id"": ""20564328-b6ab-456d-b028-9b9e7bbebe00"",
-        ""client_secret"": ""AgUwqCx7cKJRrV40NcXY2Zb79naYtyUl""
-    }
-],
-""next"": null
-}
-
-Then fill in the correct client_id and client_secret values.
-",Kong
-"I'm upgrading kong gateway and facing an issue while running the image
-
-old configuration docker file: kong/kong-gateway:2.8.1.1
-and upgraded to: kong:3.3.1-alpine
-
-The image, named kong-upg, is built without any issues, and I created another docker-compose file
-with a kong service:
-services:
-  kong-gateway:
-    image: kong-upg
-    container_name: kong-gateway
-    user: kong
-    ports:
-      - ""8000:8000""
-    environment:
-      KONG_DATABASE: ""off""
-
-But I'm getting the following error:
-kong-gateway                        | 2024/04/29 12:32:56 [error] 24#0: init_by_lua error: attempt to compare string with number
-kong-gateway                        | stack traceback:
-kong-gateway                        |   [C]: in function 'sort'
-kong-gateway                        |   /usr/local/share/lua/5.1/kong/db/errors.lua:28: in function 'sorted_keys'
-kong-gateway                        |   /usr/local/share/lua/5.1/kong/db/errors.lua:229: in function 'schema_violation'
-kong-gateway                        |   /usr/local/share/lua/5.1/kong/db/schema/plugin_loader.lua:28: in function 'load_subschema'
-kong-gateway                        |   /usr/local/share/lua/5.1/kong/db/dao/plugins.lua:265: in function 'load_plugin'
-kong-gateway                        |   /usr/local/share/lua/5.1/kong/db/dao/plugins.lua:312: in function 'load_plugin_schemas'
-kong-gateway                        |   /usr/local/share/lua/5.1/kong/init.lua:619: in function 'init'
-kong-gateway                        |   init_by_lua:3: in main chunk
-kong-gateway                        | nginx: [error] init_by_lua error: attempt to compare string with number
-kong-gateway                        | stack traceback:
-kong-gateway                        |   [C]: in function 'sort'
-kong-gateway                        |   /usr/local/share/lua/5.1/kong/db/errors.lua:28: in function 'sorted_keys'
-kong-gateway                        |   /usr/local/share/lua/5.1/kong/db/errors.lua:229: in function 'schema_violation'
-kong-gateway                        |   /usr/local/share/lua/5.1/kong/db/schema/plugin_loader.lua:28: in function 'load_subschema'
-kong-gateway                        |   /usr/local/share/lua/5.1/kong/db/dao/plugins.lua:265: in function 'load_plugin'
-kong-gateway                        |   /usr/local/share/lua/5.1/kong/db/dao/plugins.lua:312: in function 'load_plugin_schemas'
-kong-gateway                        |   /usr/local/share/lua/5.1/kong/init.lua:619: in function 'init'
-kong-gateway                        |   init_by_lua:3: in main chunk
-kong-gateway                        | Error: /usr/local/share/lua/5.1/kong/cmd/start.lua:101: failed to start nginx (exit code: 1)
-kong-gateway                        | 
-kong-gateway                        |   Run with --v (verbose) or --vv (debug) for more details
-kong-gateway exited with code 1
-
-Has anyone faced this issue before? I tried different images and still got the same issue.
-","1. I found that the issue was in the custom plugin formatting, as the error being thrown from kong is misleading as it is not referring to specific code in the custom plugin.
-I needed to adjust the formatting in the plugin handler to be as follows:
-local typedefs = require ""kong.db.schema.typedefs""
-
-return {
-    name = ""api-key-auth"",
-    fields = {
-        { consumer = typedefs.no_consumer },
-        { config = { type = ""record"", fields = { }, }, },
-    },
-}
-
-",Kong
-"I have two VMs. One is running an Apache server with a Django API and the other is running Kong. The two VMs are on the same network and can ping each other, and I can access the Django site on my browser from my Kong VM.
-I have configured the Kong route to point specifically to the local IP address on the network (192.168.x.x) instead of localhost. On the Kong VM I can do curl requests directly to the API just fine, and I can access the site on my browser, and the test route to httpbin works fine as well. But I still continue to run into this error when I try to access the API through my service.
-I'm running Kong as is, no docker stuff
-I tried getting the logs and got no information other than the Bad Gateway messages. I've triple-checked and verified that the routes are pointing to the correct local IP address and route, not localhost, but still nothing.
-I also have a web application firewall setup (Modsecurity) but disabling the firewall doesn't fix the problem and no relevant error logs show up that might suggest the firewall is blocking my requests
-My route is at 192.168.226.128/api/mymodels and my services are configured as below
-{
-  ""data"": [
-    {
-      ""updated_at"": 1715195774,
-      ""path"": ""/api/mymodels/"",
-      ""name"": ""main"",
-      ""retries"": 5,
-      ""ca_certificates"": null,
-      ""port"": 80,
-      ""client_certificate"": null,
-      ""protocol"": ""http"",
-      ""enabled"": true,
-      ""connect_timeout"": 60000,
-      ""created_at"": 1715180691,
-      ""read_timeout"": 60000,
-      ""tls_verify"": null,
-      ""tags"": null,
-      ""tls_verify_depth"": null,
-      ""write_timeout"": 60000,
-      ""host"": ""192.168.228.128"",
-      ""id"": ""0052c1db-e28d-47ac-9ad4-5d41c510f066""
-    },
-  ],
-  ""next"": null
-}   
-
-{
-      ""updated_at"": 1715181353,
-      ""snis"": null,
-      ""name"": null,
-      ""tags"": [],
-      ""preserve_host"": false,
-      ""destinations"": null,
-      ""methods"": null,
-      ""strip_path"": true,
-      ""hosts"": null,
-      ""created_at"": 1715180717,
-      ""request_buffering"": true,
-      ""response_buffering"": true,
-      ""sources"": null,
-      ""https_redirect_status_code"": 426,
-      ""regex_priority"": 0,
-      ""service"": {
-        ""id"": ""0052c1db-e28d-47ac-9ad4-5d41c510f066""
-      },
-      ""protocols"": [
-        ""http"",
-        ""https""
-      ],
-      ""paths"": [
-        ""/main""
-      ],
-      ""headers"": null,
-      ""path_handling"": ""v0"",
-      ""id"": ""dd3d9f8e-4e88-44e5-afb1-3414e7c1b84d""
-    } 
-
-Any advice would be appreciated thank you
-","1. fixed it by setting a domain name in /etc/hosts, for some reason this works. unsure why
-but go to /etc/hosts, add a line that maps your target upstream's local IP address to some random domain name of your choice, then change the Kong service to serve that domain name instead of your local IP.
-192.168.226.128   mydomain.ccc
-
-Then go to your service and change it as follows:
-""host"": ""mydomain.ccc"",
-what a waste of 4 hours
-",Kong
-"I've been exploring the possibility of using Krakend as a proxy for my Nginx frontend, and I'd like to share my experience and gather some insights from the community.
-Here's the setup:
-
-I have a Spring Boot server serving as an API.
-There's a static site hosted on Nginx, both running within a VM.
-Krakend is running on a separate VM, acting as an API manager for the Spring Boot server.
-
-Currently, I've configured Krakend to proxy requests to the static site using the following configuration:
-{
-  ""endpoint"": ""/"",
-  ""output_encoding"": ""no-op"",
-  ""backend"": [
-    {
-      ""encoding"": ""no-op"",
-      ""url_pattern"": """",
-      ""sd"": ""static"",
-      ""host"": [
-        ""http://172.22.11.62""
-      ],
-      ""disable_host_sanitize"": false
-    }
-  ]
-}
-
-
-However, I'm encountering a couple of issues:
-
-When accessing ""/"", it redirects to auth/signin?redirectTo=%2F.
-I need to explicitly declare other routes of the static site along with parameters.
-
-I'm wondering:
-
-Can Krakend act purely as a proxy without any manipulation, allowing seamless routing to the Nginx frontend?
-What are the best practices for configuring Krakend in this scenario?
-Is it recommended to use Krakend as a proxy for an Nginx frontend, or are there better alternatives?
-
-","1. KrakenD CE requires explicit declaration of all routes, which can be managed more efficiently with Flexible Configuration. This feature allows for dynamic updates to your configuration based on variables defining your internal routes. You can find more details here: https://www.krakend.io/docs/configuration/flexible-config/
-For seamless routing and handling of unspecified paths, the wildcard and catch-all endpoints features of the KrakenD Enterprise Edition would be ideal. These allow KrakenD to act more flexibly as a proxy, fitting scenarios like yours better.
-
-Wildcard documentation: https://www.krakend.io/docs/enterprise/endpoints/wildcard/
-Catch-all endpoint documentation: https://www.krakend.io/docs/enterprise/endpoints/catch-all/
-
-In any case, our best practice recommendation is to explicitly declare routes for clearer API contracts and granular management at the gateway level, even when using more dynamic features.
-",KrakenD
-"I have keycloak bitnami chart and krakend deployed in in k8s. Also I have a test api, and I want being authenticated before access it. I'm able to get valid jwt token from keycloak, but when I'm trying to access my api through krakend, it returns 401 error
-Any help is really appreciated.
-Software versions:
-keycloak: 16.1.1
-krakend: 2.0.4
-{
-  ""$schema"": ""https://www.krakend.io/schema/v3.json"",
-  ""version"": 3,
-  ""timeout"": ""3000ms"",
-  ""cache_ttl"": ""300s"",
-  ""output_encoding"": ""json"",
-  ""port"": 8080,
-  ""endpoints"": [
-      {
-          ""endpoint"": ""/mock/parents/{id}"",
-          ""method"": ""GET"",
-          ""input_headers"": [
-             ""Authorization""
-           ],
-          ""extra_config"": {
-              ""auth/validator"": {
-                  ""alg"": ""RS256"",
-                  ""jwk-url"": ""http://keycloak-headless:8080/auth/realms/master/protocol/openid-connect/certs"",
-                  ""disable_jwk_security"": true,
-                  ""roles_key_is_nested"": true,
-                  ""roles_key"": ""realm_access.roles"",
-                  ""roles"": [""test-app-parent""],
-                  ""operation_debug"": true
-              }
-          },
-          ""output_encoding"": ""json"",
-          ""concurrent_calls"": 1,
-          ""backend"": [
-              {
-                  ""url_pattern"": ""/parents/{id}"",
-                  ""encoding"": ""json"",
-                  ""sd"": ""static"",
-                  ""extra_config"": {},
-                  ""host"": [
-                    ""http://testapp-service:8400""
-                  ],
-                  ""disable_host_sanitize"": false,
-                  ""blacklist"": [
-                      ""super_secret_field""
-                  ]
-              },
-              {
-                  ""url_pattern"": ""/siblings/{id}"",
-                  ""encoding"": ""json"",
-                  ""sd"": ""static"",
-                  ""extra_config"": {},
-                  ""host"": [
-                      ""http://testapp-service:8400""
-                  ],
-                  ""blacklist"": [
-                      ""sibling_id""
-                  ],
-                  ""group"": ""extra_info"",
-                  ""disable_host_sanitize"": false
-              },
-              {
-                  ""url_pattern"": ""/parents/{id}/children"",
-                  ""encoding"": ""json"",
-                  ""sd"": ""static"",
-                  ""extra_config"": {},
-                  ""host"": [
-                      ""http://testapp-service:8400""
-                  ],
-                  ""disable_host_sanitize"": false,
-                  ""mapping"": {
-                      ""content"": ""cars""
-                  },
-                  ""whitelist"": [
-                      ""content""
-                  ]
-              }
-          ]
-      },
-      {
-          ""endpoint"": ""/mock/bogus-new-api/{path}"",
-          ""method"": ""GET"",
-          ""extra_config"": {
-              ""auth/validator"": {
-                  ""alg"": ""RS256"",
-                  ""jwk-url"": ""http://keycloak-headless:8080/auth/realms/master/protocol/openid-connect/certs"",
-                  ""disable_jwk_security"": true
-              },
-              ""github.com/devopsfaith/krakend/proxy"": {
-                  ""static"": {
-                      ""data"": {
-                          ""new_field_a"": 123,
-                          ""new_field_b"": [
-                              ""arr1"",
-                              ""arr2""
-                          ],
-                          ""new_field_c"": {
-                              ""obj"": ""obj1""
-                          }
-                      },
-                      ""strategy"": ""always""
-                  }
-              }
-          },
-          ""output_encoding"": ""json"",
-          ""concurrent_calls"": 1,
-          ""backend"": [
-              {
-                  ""url_pattern"": ""/not-finished-yet"",
-                  ""encoding"": ""json"",
-                  ""sd"": ""static"",
-                  ""extra_config"": {},
-                  ""host"": [
-                      ""nothing-here""
-                  ],
-                  ""disable_host_sanitize"": false
-              }
-          ]
-      }
-  ]
- } 
-
-","1. Oh my God this made me go insane.
-In one of the last version updates they changed jwk-url to jwk_url.
-https://github.com/krakendio/krakend-ce/issues/495#issuecomment-1138397005
-After I fixed that it worked for me.
-
-2. It worked for me after I changed
-""jwk_url"": ""http://KEYCLOAK-SERVICE-NAME:8080/auth/realms/master/protocol/openid-connect/certs"" to ""jwk_url"": ""http://host.docker.internal:8080/auth/realms/master/protocol/openid-connect/certs""
-
-3. Create a new realm role ""test-app-parent"" and
-
-
-go to the user section and assign that role to the user.
-You can check on https://jwt.io/ whether your token contains the ""test-app-parent"" role in ""realm_access.roles"", as in the sample below:
-
-
-""realm_access"": {
-""roles"": [
-""default-roles-krakend"",
-""offline_access"",
-""test-app-parent"",
-""uma_authorization""
-]
-}
-",KrakenD
-"error image
-When I tried to update Anypoint Studio from version 7.12 to version 7.17, the installation stopped abruptly and I got the above error prompt. Is there any solution to this problem?
-I have attached the error msg screenshot above
-Thank you
-","1. With so little information it is not possible to suggest a solution for the error. It could be a network error, some permission issue, some configuration issue or any number of things. Also that there are two years of versions in between may not be helpful.
-Instead of trying to upgrade the old version, it is simpler to just download a new installation of the current Anypoint Studio version and install it in a separate directory from the old version. Then you can just open existing workspaces.
-",MuleSoft
-"I am trying to understand the mule error object and I am a bit confused with what I see in the debugger.
-Say there is an HTTP request call failure in my app, and in the debugger I can see the error object with a muleMessage property.
-And if I right-click on the error object > copy value, then I don't see muleMessage in there. There it's called errorMessage. Both hold the same value.
-Similarly, for IBM MQ listener. When I look at attributes in the debugger I can see attributes.headers.messageId. Yet, when I capture it in munits using munit recording, the same value is captured in attributes.headers.JMSMessageID in JSON.
-Which ones should I be using?
-","1. For errors you should only use the expressions that are documented: https://docs.mulesoft.com/mule-runtime/latest/mule-error-concept#selector_expressions
-In that documentation #[error.errorMessage] is documented but the other one not. Just because you see in the debugger some attributes does not mean you should use them. Some are used internally by Mule, but there are no guarantees they are going to be accessible in the future. Applications using those non documented values failed when they stopped being accessible because of internal changes in a Mule release.
-
-2. In addition to what @Aled described.
-These are very important expressions in Mule error handling, and they work differently.
-The important expressions in a Mule error object are mainly these two: error.errorMessage and error.description. These expressions are part of the public API and are guaranteed to be stable across all Mule versions.
-error.description: Provides a clear, detailed description of the error that occurred.
-error.errorMessage: Gives us additional details about the error, if available.
-It's important to note that when you debug your application in debugger mode, the debugger might show other expressions like muleMessage, but those are not officially documented. These internal attributes may change without notice in future Mule releases, which will cause your application to fail if it relies on them.
-For JMS-related attributes, JMSMessageID is the standardized name according to JMS specifications and is likely the more reliable choice in your implementations.
-Related official document- https://docs.mulesoft.com/mule-runtime/latest/mule-error-concept#selector_expressions https://docs.mulesoft.com/mule-runtime/latest/mule-error-concept
-",MuleSoft
-"In my flow i set variable operation as :payload.operation then in Consume node i want to get it's value
-
-
-Variable value is set correctly
-
-I also include xml of my flow:
-<?xml version=""1.0"" encoding=""UTF-8""?>
-
-<mule xmlns:netsuite=""http://www.mulesoft.org/schema/mule/netsuite"" xmlns:wsc=""http://www.mulesoft.org/schema/mule/wsc""
-    xmlns:ee=""http://www.mulesoft.org/schema/mule/ee/core""
-    xmlns:db=""http://www.mulesoft.org/schema/mule/db"" xmlns:http=""http://www.mulesoft.org/schema/mule/http"" xmlns=""http://www.mulesoft.org/schema/mule/core"" xmlns:doc=""http://www.mulesoft.org/schema/mule/documentation"" xmlns:xsi=""http://www.w3.org/2001/XMLSchema-instance"" xsi:schemaLocation=""http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
-http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd
-http://www.mulesoft.org/schema/mule/db http://www.mulesoft.org/schema/mule/db/current/mule-db.xsd
-http://www.mulesoft.org/schema/mule/ee/core http://www.mulesoft.org/schema/mule/ee/core/current/mule-ee.xsd
-http://www.mulesoft.org/schema/mule/wsc http://www.mulesoft.org/schema/mule/wsc/current/mule-wsc.xsd
-http://www.mulesoft.org/schema/mule/netsuite http://www.mulesoft.org/schema/mule/netsuite/current/mule-netsuite.xsd"">
-    <http:listener-config name=""HTTP_Listener_config"" doc:name=""HTTP Listener config"" doc:id=""a532976e-413b-4f47-bc8f-3e79e6de1417"" >
-        <http:listener-connection host=""0.0.0.0"" port=""8081"" />
-    </http:listener-config>
-    <db:config name=""Database_Config"" doc:name=""Database Config"" doc:id=""276e5d43-bd49-4a82-b7c6-934288334aca"" >
-        <db:my-sql-connection host=""mudb.learn.mulesoft.com"" port=""3306"" user=""mule"" password=""mule"" database=""training"" />
-    </db:config>
-    <wsc:config name=""Web_Service_Consumer_Config"" doc:name=""Web Service Consumer Config"" doc:id=""a5692620-4f26-4b11-be70-b822710d6c0d"" >
-        <wsc:connection wsdlLocation=""http://dneonline.com/calculator.asmx?WSDL"" service=""Calculator"" port=""CalculatorSoap12"" soapVersion=""SOAP12"" address=""http://www.dneonline.com/calculator.asmx""/>
-    </wsc:config>
-    <flow name=""api-accountsFlow"" doc:id=""7a942ae2-4422-46ba-b57d-38bd37ec9f82"" >
-        <http:listener doc:name=""Get Accounts Listener"" doc:id=""a1873bcc-b9f1-4cf7-a405-a1c835cdf69c"" config-ref=""HTTP_Listener_config"" path=""/calculator"" />
-        <choice doc:name=""Choice"" doc:id=""0b885797-7da1-4932-9183-869b5a670943"" >
-            <when expression='#[payload.operation==""Add"" or payload.operation==""Subtract"" or payload.operation==""Mul"" or payload.operation==""Divide""]'>
-                <set-variable value=""#[payload.operation]"" doc:name=""Set Variable"" doc:id=""5c4da028-7ae8-43e6-ac35-84de6c3b4666"" variableName=""operation"" />
-                <ee:transform doc:name=""Transform JSON Request to XML"" doc:id=""86f3eebe-0050-4800-9c08-7167ded759d9"" >
-                    <ee:message >
-                        <ee:set-payload ><![CDATA[%dw 2.0
-output application/xml
-ns ns0 http://tempuri.org/
----
-{
- (""ns0#"" ++ vars.operation): { 
- 
-        ns0#intA: payload.val1,
-        ns0#intB: payload.val2
-    } 
-}
-]]></ee:set-payload>
-                    </ee:message>
-                </ee:transform>
-                <wsc:consume doc:name=""Consume"" doc:id=""091e6ca8-a78d-4a6f-b947-679235f7bfa4"" config-ref=""Web_Service_Consumer_Config"" operation=""#[vars.operation]""/>
-                <ee:transform doc:name=""Transform JSON Request to XML1"" doc:id=""6b77fd85-8af7-4f96-a299-2b64445ede0b"" >
-                    <ee:message >
-                        <ee:set-payload ><![CDATA[%dw 2.0
-output application/json
----
-payload
- ]]></ee:set-payload>
-                    </ee:message>
-                </ee:transform>
-            </when>
-            <otherwise >
-                <logger level=""INFO"" doc:name=""Logger"" doc:id=""b4a8341c-a265-4c60-ab58-9b42d899c8fd"" />
-                <set-payload value=""=== Operation Not Found ==="" doc:name=""Set Payload"" doc:id=""f8436921-0612-44b5-b12d-7dac7bfd7c2d"" />
-            </otherwise>
-        </choice>
-    </flow>
-</mule>
-
-
-When I run my flow I get
-org.mule.runtime.module.extension.internal.runtime.ValueResolvingException: org.mule.runtime.module.extension.internal.runtime.ValueResolvingException: Unable to resolve value for the parameter: operation
-    at org.mule.runtime.module.extension.internal.runtime.operation.OperationParameterValueResolver.getParameterValue(OperationParameterValueResolver.java:101)
-    at org.mule.runtime.module.extension.internal.metadata.MetadataMediator.getMetadataKeyObjectValue(MetadataMediator.java:426)
-    at org.mule.runtime.module.extension.internal.metadata.MetadataMediator.getMetadata(MetadataMediator.java:181)
-    at org.mule.runtime.module.extension.internal.runtime.ExtensionComponent.lambda$getMetadata$21(ExtensionComponent.java:656)
-    at org.mule.runtime.core.api.util.ExceptionUtils.tryExpecting(ExceptionUtils.java:224)
-    at org.mule.runtime.core.api.util.ClassUtils.withContextClassLoader(ClassUtils.java:1102)
-    at org.mule.runtime.core.api.util.ClassUtils.withContextClassLoader(ClassUtils.java:1020)
-    at org.mule.runtime.module.extension.internal.runtime.ExtensionComponent.lambda$getMetadata$22(ExtensionComponent.java:655)
-    at org.mule.runtime.module.extension.internal.runtime.ExtensionComponent.runWithMetadataContext(ExtensionComponent.java:793)
-    at org.mule.runtime.module.extension.internal.runtime.ExtensionComponent.getMetadata(ExtensionComponent.java:654)
-    at org.mule.runtime.metadata.internal.MuleMetadataService.lambda$getComponentMetadata$7(MuleMetadataService.java:218)
-    at org.mule.runtime.metadata.internal.MuleMetadataService.exceptionHandledMetadataFetch(MuleMetadataService.java:174)
-    at org.mule.runtime.metadata.internal.MuleMetadataService.getComponentMetadata(MuleMetadataService.java:217)
-    at org.mule.runtime.metadata.internal.MuleMetadataService.getOperationMetadata(MuleMetadataService.java:116)
-    at org.mule.runtime.config.internal.bean.lazy.LazyMetadataService.lambda$getOperationMetadata$4(LazyMetadataService.java:100)
-    at java.util.Optional.orElseGet(Optional.java:267)
-    at org.mule.runtime.config.internal.bean.lazy.LazyMetadataService.getOperationMetadata(LazyMetadataService.java:100)
-    at com.mulesoft.agent.services.metadata.MuleAgentMetadataService.lambda$getOperationMetadata$2(MuleAgentMetadataService.java:75)
-    at com.mulesoft.agent.services.metadata.MuleAgentMetadataService.withMetadataService(MuleAgentMetadataService.java:145)
-    at com.mulesoft.agent.services.metadata.MuleAgentMetadataService.getOperationMetadata(MuleAgentMetadataService.java:75)
-    at com.mulesoft.agent.external.handlers.metadata.MetadataRequestHandler.lambda$getOperationMetadata$3(MetadataRequestHandler.java:206)
-    at com.mulesoft.agent.util.ResponseHelper.response(ResponseHelper.java:88)
-    at com.mulesoft.agent.external.handlers.metadata.MetadataRequestHandler.getOperationMetadata(MetadataRequestHandler.java:204)
-    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
-    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
-    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
-    at java.lang.reflect.Method.invoke(Method.java:498)
-    at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52)
-    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:134)
-    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:177)
-    at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:176)
-    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:81)
-    at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:478)
-    at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:400)
-    at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:81)
-    at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:256)
-    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
-    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
-    at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
-    at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
-    at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
-    at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265)
-    at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:235)
-    at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:684)
-    at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:394)
-    at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346)
-    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:358)
-    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:311)
-    at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205)
-    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:799)
-    at org.eclipse.jetty.servlet.ServletHandler$ChainEnd.doFilter(ServletHandler.java:1656)
-    at com.mulesoft.agent.rest.RequestLoggingFilter.doFilter(RequestLoggingFilter.java:95)
-    at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
-    at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1626)
-    at com.mulesoft.agent.rest.AuthorizationFilter.doFilter(AuthorizationFilter.java:49)
-    at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
-    at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1626)
-    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:552)
-    at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
-    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1440)
-    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
-    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:505)
-    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
-    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1355)
-    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
-    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
-    at org.eclipse.jetty.server.Server.handle(Server.java:516)
-    at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:487)
-    at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:732)
-    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:479)
-    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)
-    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
-    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
-    at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
-    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)
-    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)
-    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)
-    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)
-    at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409)
-    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
-    at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
-    at java.lang.Thread.run(Thread.java:750)
-Caused by: java.lang.IllegalArgumentException: Required parameter 'operation' was assigned with value '#[vars.operation]' which resolved to null. Required parameters need to be assigned with non null values
-    at org.mule.runtime.module.extension.internal.runtime.resolver.RequiredParameterValueResolverWrapper.resolve(RequiredParameterValueResolverWrapper.java:67)
-    at org.mule.runtime.module.extension.internal.runtime.LazyExecutionContext.lambda$lazy$1(LazyExecutionContext.java:61)
-    at org.mule.runtime.core.api.util.func.CheckedSupplier.get(CheckedSupplier.java:25)
-    at org.mule.runtime.api.util.LazyValue.get(LazyValue.java:75)
-    at org.mule.runtime.module.extension.internal.runtime.LazyExecutionContext.getParameter(LazyExecutionContext.java:78)
-    at org.mule.runtime.module.extension.internal.runtime.operation.OperationParameterValueResolver.lambda$getParameterValue$1(OperationParameterValueResolver.java:95)
-    at java.util.Optional.orElseGet(Optional.java:267)
-    at org.mule.runtime.module.extension.internal.runtime.operation.OperationParameterValueResolver.getParameterValue(OperationParameterValueResolver.java:78)
-    ... 81 more
-
-After some tests I think the problem may be in this part: (""ns0#"" ++ vars.operation), because from Postman I get this error:
-Error consuming the operation [Subtract], the request body is not a valid XML
-","1. I suspect that because the variable is set with the value from a JSON payload it maybe confusing Mule. The image shows the variable operation is a JSON. Try assigning it as a Java output to remove the JSON format from the string:
-<set-variable value=""#[output application/java --- payload.operation]"" doc:name=""Set Variable"" doc:id=""5c4da028-7ae8-43e6-ac35-84de6c3b4666"" variableName=""operation"" />
-
-",MuleSoft
-"Exisiting Json
-{
-a:1,
-b:2
-}
-Add the field
-c:3
-Final Output expected
-{
-a:1,
-b:2,
-c:3
-}
-","1. %dw 2.0
-output application/json
-
----
-
-{ a:1, b:2 } ++ {c: 3}
-
-Sample Output:
-{
-  ""a"": 1,
-  ""b"": 2,
-  ""c"": 3
-}
-
-
-2. You can directly keep payload as it is and add the new JSON field as below:
-payload ++ {c:3}
-
-output:
-{ 
- a:1,
- b:2, 
- c:3 
-}
-
-",MuleSoft
-"Creating shopify app just for the first time. I followed the steps from below
-https://shopify.dev/docs/apps/getting-started/create#step-3-install-your-app-on-your-development-store
-It works great up to section 3, but after step 2 under section 3, I just keep receiving the below error
-
-I did connect to ngrok locally
-
-I was able to run below command successfully as well
-PS C:\shopify\testappname> shopify app dev --tunnel-url https://XXXX-XXXX-XXXX-XXXX-XXXX-810e-ea3e-6b08-d392.ngrok-free.app:8080
-
-After pressing p, when I install the app and select the development store, I see the error in the 1st screenshot saying
-XXXX-XXXX-XXXX-XXXX-810e-ea3e-6b08-d392.ngrok-free.app refused to connect
-Any directions to fix the above error would be helpful.
-","1. I think you're using the wrong port in ngrok. this command is using port 8080 shopify app dev --tunnel-url https://XXXX-XXXX-XXXX-XXXX-XXXX-810e-ea3e-6b08-d392.ngrok-free.app:8080 and ngrok is using port 80, try running ngrok with the same port ngrok http 8080
-",ngrok
-"I'm building a Node app that uses http and socket, and i'm exposing my localhost using Ngrok and letting other user connect to my react app using my IP4 Address.
-i always get this error :
-Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at ‘https://a29a-41-109-30-183.ngrok-free.app/socket.io/?EIO=4&transport=polling&t=O-hTmol’. (Reason: Credential is not supported if the CORS header ‘Access-Control-Allow-Origin’ is ‘*’).
-I set up CORS like this:
-app.use(
-  cors({
-    origin: [""http://localhost:5173"", ""http://192.168.1.6:5173""],
-    credentials: true,
-  })
-);
-
-const server = http.createServer(app);
-const io = new SocketIOServer(server, {
-  cors: {
-    origin: [""http://localhost:5173"", ""http://192.168.1.6:5173""],
-    credentials: true,
-  },
-});
-
-and on the client side :
-const socket = io(backend_url_socket, {
-    withCredentials: true,
-    extraHeaders: {
-      Authorization: `Bearer ${secureLocalStorage.getItem(""authToken"")}`,
-      ""Another-Header"": ""HeaderValue"",
-    },
-  });
-  socket.on(""connect"", () => {});
-
-","1. Based on your error response its using https but it seems that your server is listening for http connections. You can try starting the ngrok service to use http instead of https
-In your terminal run
-
-
-ngrok --scheme http http 3000
-
-
-
-In the example I use port 3000 but change it to whatever your localhost is listening on.
-",ngrok
-"I would like to build a docker landscape. I use a container with a traefik (v2. 1) image and a mysql container for multiple databases.
-traefik/docker-compose.yml
-version: ""3.3""
-
-services:
-  traefik:
-    image: ""traefik:v2.1""
-    container_name: ""traefik""
-    restart: always
-    command:
-      - ""--log.level=DEBUG""
-      - ""--api=true""
-      - ""--api.dashboard=true""
-      - ""--providers.docker=true""
-      - ""--providers.docker.exposedbydefault=false""
-      - ""--providers.docker.network=proxy""
-      - ""--entrypoints.web.address=:80""
-      - ""--entrypoints.websecure.address=:443""
-      - ""--entrypoints.traefik-dashboard.address=:8080""
-      - ""--certificatesresolvers.devnik-resolver.acme.httpchallenge=true""
-      - ""--certificatesresolvers.devnik-resolver.acme.httpchallenge.entrypoint=web""
-      #- ""--certificatesresolvers.devnik-resolver.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory""
-      - ""--certificatesresolvers.devnik-resolver.acme.email=####""
-      - ""--certificatesresolvers.devnik-resolver.acme.storage=/letsencrypt/acme.json""
-    ports:
-      - ""80:80""
-      - ""443:443""
-      - ""8080:8080""
-    volumes:
-      - ""./letsencrypt:/letsencrypt""
-      - ""./data:/etc/traefik""
-      - ""/var/run/docker.sock:/var/run/docker.sock:ro""
-    networks:
-      - ""proxy""
-    labels:
-      - ""traefik.enable=true""
-      - ""traefik.http.routers.traefik.rule=Host(`devnik.dev`)""
-      - ""traefik.http.routers.traefik.entrypoints=traefik-dashboard""
-      - ""traefik.http.routers.traefik.tls.certresolver=devnik-resolver""
-      #basic auth
-      - ""traefik.http.routers.traefik.service=api@internal""
-      - ""traefik.http.routers.traefik.middlewares=auth""
-      - ""traefik.http.middlewares.auth.basicauth.usersfile=/etc/traefik/.htpasswd""
-
-#Docker Networks
-networks:
-  proxy:
-
-database/docker-compose.yml
-version: ""3.3""
-
-services:
-  #MySQL Service
-  mysql:
-    image: mysql:5.7
-    container_name: mysql
-    restart: always
-    ports:
-      - ""3306:3306""
-    volumes:
-      #persist data
-      - ./mysqldata/:/var/lib/mysql/
-      - ./init:/docker-entrypoint-initdb.d
-    networks:
-      - ""mysql""
-    environment:
-      MYSQL_ROOT_PASSWORD: ####
-      TZ: Europe/Berlin
-
-#Docker Networks
-networks:
-  mysql:
-    driver: bridge
-
-For the structure I want to control all projects via multiple docker-compose files. These containers should run on the same network as the traefik container and some with the mysql container.
-This also works for the following case (but only sometimes)
-dev-releases/docker-compose.yml
-version: ""3.3""
-
-services:
-  backend:
-    image: ""registry.gitlab.com/devnik/dev-releases-backend/master:latest""
-    container_name: ""dev-releases-backend""
-    restart: always
-    volumes:
-      #laravel logs
-      - ""./logs/backend:/app/storage/logs""
-      #cron logs
-      - ""./logs/backend/cron.log:/var/log/cron.log""
-    labels:
-      - ""traefik.enable=true""
-      - ""traefik.http.routers.dev-releases-backend.rule=Host(`dev-releases.backend.devnik.dev`)""
-      - ""traefik.http.routers.dev-releases-backend.entrypoints=websecure""
-      - ""traefik.http.routers.dev-releases-backend.tls.certresolver=devnik-resolver""
-    networks:
-      - proxy
-      - mysql
-    environment:
-      TZ: Europe/Berlin
-
-#Docker Networks
-networks:
-  proxy:
-    external:
-      name: ""traefik_proxy""
-  mysql:
-    external:
-      name: ""database_mysql""
-
-As soon as I restart the containers in dev-releases/ via docker-compose up -d I get the typical error ""Gateway timeout"" when calling them in the browser.
-As soon as I comment out the mysql network (networks: #- mysql) and restart the docker-compose in dev-releases/ it works again.
-My guess is that I have not configured the external networks correctly. Is it not possible to use 2 external networks?
-I'd like some containers to have access to the 'mysql' network, but it should not be accessible to the whole traefik network.
-Let me know if you need more information
-EDIT (26.03.2020)
-I got it running.
-I put all my containers into one network, ""proxy"". It seems mysql also has to be in the proxy network.
-So I add following to database/docker-compose.yml
-networks:
-  proxy:
-    external:
-      name: ""traefik_proxy""
-
-And removed the database_mysql network out of dev-releases/docker-compose.yml
-","1. based on the names of the files, your mysql network should be mysql_mysql.
-you can verify this by executing 
-$> docker network ls
-
-You are also missing a couple of labels for your services such as 
-traefik command line 
-- '--providers.docker.watch=true'
-- '--providers.docker.swarmMode=true'
-
-labels 
-- traefik.docker.network=proxy
-- traefik.http.services.dev-releases-backend.loadbalancer.server.port=yourport
-- traefik.http.routers.dev-releases-backend.service=mailcatcher
-
-You can check this for more info 
-",Traefik
-"Motivations
-I am running into an issue when trying to proxy PostgreSQL with Traefik over SSL using Let's Encrypt.
-I did some research but it is not well documented, so I would like to confirm my observations and leave a record for everyone who faces this situation.
-Configuration
-I use latest versions of PostgreSQL v12 and Traefik v2. I want to build a pure TCP flow from tcp://example.com:5432 -> tcp://postgresql:5432 over TLS using Let's Encrypt.
-Traefik service is configured as follow:
-  version: ""3.6""
-    
-    services:
-    
-      traefik:
-        image: traefik:latest
-        restart: unless-stopped
-        volumes:
-          - ""/var/run/docker.sock:/var/run/docker.sock:ro""
-          - ""./configuration/traefik.toml:/etc/traefik/traefik.toml:ro""
-          - ""./configuration/dynamic_conf.toml:/etc/traefik/dynamic_conf.toml""
-          - ""./letsencrypt/acme.json:/acme.json""
-    
-        networks:
-          - backend
-        ports:
-          - ""80:80""
-          - ""443:443""
-          - ""5432:5432""
-    
-    networks:
-      backend:
-        external: true
-
-With the static setup:
-
-[entryPoints]
-  [entryPoints.web]
-    address = "":80""
-    [entryPoints.web.http]
-      [entryPoints.web.http.redirections.entryPoint]
-        to = ""websecure""
-        scheme = ""https""
-
-  [entryPoints.websecure]
-    address = "":443""
-    [entryPoints.websecure.http]
-      [entryPoints.websecure.http.tls]
-        certresolver = ""lets""
-
-  [entryPoints.postgres]
-    address = "":5432""
-
-PostgreSQL service is configured as follow:
-version: ""3.6""
-
-services:
-
-  postgresql:
-    image: postgres:latest
-    environment:
-      - POSTGRES_PASSWORD=secret
-    volumes:
-      - ./configuration/trial_config.conf:/etc/postgresql/postgresql.conf:ro
-      - ./configuration/trial_hba.conf:/etc/postgresql/pg_hba.conf:ro
-      - ./configuration/initdb:/docker-entrypoint-initdb.d
-      - postgresql-data:/var/lib/postgresql/data
-    networks:
-      - backend
-    #ports:
-    #  - 5432:5432
-    labels:
-      - ""traefik.enable=true""
-      - ""traefik.docker.network=backend""
-      - ""traefik.tcp.routers.postgres.entrypoints=postgres""
-      - ""traefik.tcp.routers.postgres.rule=HostSNI(`example.com`)""
-      - ""traefic.tcp.routers.postgres.tls=true""
-      - ""traefik.tcp.routers.postgres.tls.certresolver=lets""
-      - ""traefik.tcp.services.postgres.loadBalancer.server.port=5432""
-
-networks:
-  backend:
-    external: true
-
-volumes:
-  postgresql-data:
-
-It seems my Traefik configuration is correct. Everything is OK in the logs and all sections in dashboard are flagged as Success (no Warnings, no Errors). So I am confident with the Traefik configuration above. The complete flow is about:
-EntryPoint(':5432') -> HostSNI(`example.com`) -> TcpRouter(`postgres`) -> Service(`postgres@docker`)
-
-But, it may have a limitation at PostgreSQL side.
-Debug
-The problem is that I cannot connect the PostgreSQL database. I always get a Timeout error.
-I have checked PostgreSQL is listening properly (main cause of Timeout error):
-# - Connection Settings -
-listen_addresses = '*'
-port = 5432
-
-And I checked that I can connect PostgreSQL on the host (outside the container):
-psql --host 172.19.0.4 -U postgres
-Password for user postgres:
-psql (12.2 (Ubuntu 12.2-4), server 12.3 (Debian 12.3-1.pgdg100+1))
-Type ""help"" for help.
-
-postgres=#
-
-Thus I know PostgreSQL is listening outside its container, so Traefik should be able to bind the flow.
-I also have checked external traefik can reach the server:
-sudo tcpdump -i ens3 port 5432
-tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
-listening on ens3, link-type EN10MB (Ethernet), capture size 262144 bytes
-09:02:37.878614 IP x.y-z-w.isp.com.61229 > example.com.postgresql: Flags [S], seq 1027429527, win 64240, options [mss 1452,nop,wscale 8,nop,nop,sackOK], length 0
-09:02:37.879858 IP example.com.postgresql > x.y-z-w.isp.com.61229: Flags [S.], seq 3545496818, ack 1027429528, win 64240, options [mss 1460,nop,nop,sackOK,nop,wscale 7], length 0
-09:02:37.922591 IP x.y-z-w.isp.com.61229 > example.com.postgresql: Flags [.], ack 1, win 516, length 0
-09:02:37.922718 IP x.y-z-w.isp.com.61229 > example.com.postgresql: Flags [P.], seq 1:9, ack 1, win 516, length 8
-09:02:37.922750 IP example.com.postgresql > x.y-z-w.isp.com.61229: Flags [.], ack 9, win 502, length 0
-09:02:47.908808 IP x.y-z-w.isp.com.61229 > example.com.postgresql: Flags [F.], seq 9, ack 1, win 516, length 0
-09:02:47.909578 IP example.com.postgresql > x.y-z-w.isp.com.61229: Flags [P.], seq 1:104, ack 10, win 502, length 103
-09:02:47.909754 IP example.com.postgresql > x.y-z-w.isp.com.61229: Flags [F.], seq 104, ack 10, win 502, length 0
-09:02:47.961826 IP x.y-z-w.isp.com.61229 > example.com.postgresql: Flags [R.], seq 10, ack 104, win 0, length 0
-
-So, I am wondering why the connection cannot succeed. Something must be wrong between Traefik and PostgreSQL.
-SNI incompatibility?
-Even when I remove the TLS configuration, the problem is still there, so I don't expect the TLS to be the origin of this problem.
-Then I searched and I found few posts relating similar issue:
-
-Introducing SNI in TLS handshake for SSL connections
-Traefik 2.0 TCP routing for multiple DBs;
-
-As far as I understand it, the SSL protocol of PostgreSQL is a custom one and does not support SNI for now and might never support it. If it is correct, it will confirm that Traefik cannot proxy PostgreSQL for now and this is a limitation.
-By writing this post I would like to confirm my observations and at the same time leave a visible record on Stack Overflow to anyone who faces the same problem and seek for help. My question is then: Is it possible to use Traefik to proxy PostgreSQL?
-Update
-Intersting observation, if using HostSNI('*') and Let's Encrypt:
-    labels:
-      - ""traefik.enable=true""
-      - ""traefik.docker.network=backend""
-      - ""traefik.tcp.routers.postgres.entrypoints=postgres""
-      - ""traefik.tcp.routers.postgres.rule=HostSNI(`*`)""
-      - ""traefik.tcp.routers.postgres.tls=true""
-      - ""traefik.tcp.routers.postgres.tls.certresolver=lets""
-      - ""traefik.tcp.services.postgres.loadBalancer.server.port=5432""
-
-Everything is flagged as success in Dashboard but of course Let's Encrypt cannot perform the DNS Challenge for wildcard *, it complaints in logs:
-time=""2020-08-12T10:25:22Z"" level=error msg=""Unable to obtain ACME certificate for domains \""*\"": unable to generate a wildcard certificate in ACME provider for domain \""*\"" : ACME needs a DNSChallenge"" providerName=lets.acme routerName=postgres@docker rule=""HostSNI(`*`)""
-
-When I try the following configuration:
-    labels:
-      - ""traefik.enable=true""
-      - ""traefik.docker.network=backend""
-      - ""traefik.tcp.routers.postgres.entrypoints=postgres""
-      - ""traefik.tcp.routers.postgres.rule=HostSNI(`*`)""
-      - ""traefik.tcp.routers.postgres.tls=true""
-      - ""traefik.tcp.routers.postgres.tls.domains[0].main=example.com""
-      - ""traefik.tcp.routers.postgres.tls.certresolver=lets""
-      - ""traefik.tcp.services.postgres.loadBalancer.server.port=5432""
-
-The error vanishes from logs and in both setups the dashboard seems ok but traffic is not routed to PostgreSQL (time out). Anyway, removing SSL from the configuration makes the flow complete (and unsecure):
-    labels:
-      - ""traefik.enable=true""
-      - ""traefik.docker.network=backend""
-      - ""traefik.tcp.routers.postgres.entrypoints=postgres""
-      - ""traefik.tcp.routers.postgres.rule=HostSNI(`*`)""
-      - ""traefik.tcp.services.postgres.loadBalancer.server.port=5432""
-
-Then it is possible to connect PostgreSQL database:
-time=""2020-08-12T10:30:52Z"" level=debug msg=""Handling connection from x.y.z.w:58389""
-
-","1. SNI routing for postgres with STARTTLS has been added to Traefik in this PR. Now Treafik will listen to the initial bytes sent by postgres and if its going to initiate a TLS handshake (Note that postgres TLS requests are created as non-TLS first and then upgraded to TLS requests), Treafik will handle the handshake and then is able to receive the TLS headers from postgres, which contains the SNI information that it needs to route the request properly. This means that you can use HostSNI(""example.com"") along with tls to expose postgres databases under different subdomains.
-As of writing this answer, I was able to get this working with the v3.0.0-beta2 image (Reference)
-
-2. IMPORTANT: This feature only works on Traefik v3.0 onwards.
-I could make it work properly with the following labels in my postgres docker compose:
-labels:
-  - traefik.enable=true
-  - traefik.tcp.routers.postgresql.rule=HostSNI(`sub.domain.com`)
-  - traefik.tcp.routers.postgresql.tls=true
-  - traefik.tcp.services.postgresql.loadbalancer.server.port=5432
-  - traefik.tcp.routers.postgresql.entrypoints=dbsecure
-  - traefik.tcp.routers.postgresql.tls.certresolver=letsencrypt
-
-My Traefik already had a configuration to add SSL certificates with Let's Encrypt for HTTPS, so to make it work with the postgres database I only added the following command as part of the configuration:
-- --entrypoints.dbsecure.address=:5432
-
-
-3. I'm using Traefik to proxy PostgreSQL, so the answer is yes. But I'm not using TLS, because my setup is a bit different. First of all, if PostgreSQL doesn't support SNI, then I would suggest trying to modify the labels, especially the HostSNI rule, to this:
-""traefik.tcp.routers.postgres.rule=HostSNI(`*`)""
-
-That says: ignore SNI and just take any name from specified entrypoint as valid.
-",Traefik
-"I want to add new headers:
-x-application-id
-x-application-name
-x-organisation-id
-x-organisation-name
-If I use ""$tyk_context.jwt_claims_client_metadata.organisation_name"" it does not work.
-In the logs I see: “x-organisation-name-”
-Question: how can I reference a nested claim?
-My jwt is:
-{
-  ""pol"": ""MyAPI"",
-  ""client_metadata"": {
-    ""application_id"": ""1"",
-    ""application_name"": ""SuperApp"",
-    ""organisation_id"": ""1"",
-    ""organisation_name"": ""SuperOrg""
-  },
-  ""iss"": ""http://example.com/"",
-  ""scope"": ""view create"",
-  ""gty"": ""client-credentials""
-}
-
-","1. It appears it is currently not possible to reach nested claims using context variables. More info here
-",Tyk
-"I have a Tyk dashboard and developer portal running in separate Docker containers on the same docker network.
-I use the following docker-compose.yaml file:
-version: '3.6'
-services:
-  tyk-portal:
-    depends_on:
-      - tyk-portal-mysql
-    image: tykio/portal:v1.3.0
-    command: --bootstrap
-    networks:
-      - tyk-portal
-    ports:
-      - 3001:3001
-    environment:
-      - ...
-
-  tyk-portal-mysql:
-    image: mysql:5.7
-    command: --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
-    volumes:
-      - tyk-portal-mysql-data:/var/lib/mysql
-    networks:
-      - tyk-portal
-    ports:
-      - 3306:3306  
-    environment:
-      - ... 
-
-  tyk-dashboard:
-    image: tykio/tyk-dashboard:v5.0
-    container_name: tyk-dashboard
-    environment:
-      - ...
-    depends_on:
-      tyk-postgres:
-        condition: service_healthy
-    ports:
-      - 3000:3000
-    env_file:
-      - ./confs/tyk_analytics.env
-    networks:
-      - tyk-portal
-
-  tyk-postgres:
-    image: postgres:latest
-    container_name: tyk-postgres
-    environment:
-      - ...
-    ports:
-      - 5432:5432
-    volumes:
-      - postgres-data:/var/lib/postgresql/data
-    healthcheck:
-      test: [""CMD-SHELL"", ""pg_isready -U postgres""]
-      interval: 5s
-      timeout: 5s
-      retries: 5
-    networks:
-      - tyk-portal 
-
-  tyk-gateway:
-    image: tykio/tyk-gateway:v5.0
-    container_name: tyk-gateway
-    ports:
-      - 8080:8080
-    env_file:
-      - ./confs/tyk.env
-    networks:
-      - tyk-portal
-
-  tyk-redis:
-    image: redis
-    container_name: tyk-redis
-    ports:
-      - 6379:6379
-    volumes:
-      - redis-data:/data
-    networks:
-      - tyk-portal 
-
-volumes:
-  redis-data:
-  tyk-portal-mysql-data:
-  postgres-data:
-
-networks:
-  tyk-portal:
-
-I imported an API definition via the Tyk dashboard and added a policy for it. The API can be accessed without any authentication. The policy also does not specify any authentication type.
-In the ""Providers"" section of the developer portal, I added a new provider pointing to the locally running Tyk dashboard. I use ""http://tyk-dashboard:3000"" as the URL and the correct secret and organisation ID.
-When I click on ""Synchronize"" it claims to have been successful. I also don't see any error logs in any of the containers.
-However, when I navigate to the catalogue page of the developer portal (I'm logged in as the admin user), it still says ""No APIs found"".
-Has anyone experienced this as well? How can I list the imported API in the catalogue page?
-","1. It turns out that the APIs authentication method can't be set to ""keyless"" (I used ""auth token"") and the policy partitioning has to be set (""Enforce access rights"" in my case).
-With these changes, the API is finally shown in the developer portal catalogue page.
-",Tyk
-"I'm trying to reference an environment variable in Tyk Dashboard.
-I set up a docker-compose.yaml file containing environment variables:
-version: '3.6'
-services:
-  tyk-portal:
-    image: tykio/portal:v1.3.0
-    command: --bootstrap
-    ports:
-      - 3001:3001
-    environment:
-      - ...
-      - MY_VAR=someValue123
-
-When I run docker-compose up, I can navigate to the Tyk dashboard on localhost:3001.
-Inside Tyks top_nav.tmp template file I'm now trying to display the value of my environment variable MY_VAR.
-I want use something like this:
-<p>
-{{ .Env.MY_VAR }}
-</p>
-
-However, nothing is displayed. I cannot find a concrete example in the docs and I'm starting to wonder if referencing an environment variable inside a Tyk template file is at all possible.
-","1. Tyk bundles the Sprig Library (v3) which has the env function. Use it like this:
-<p>
-{{ env ""MY_VAR"" }}
-</p>
-
-",Tyk
-"how to transform query parameters from camelCase to snake_case in TYK Gateway?
-For example,
-https://gateway.com?firstName=John&lastName=Doe
-to
-https://upstream.com?first_name=John&last_name=Doe
-","1. I think you could achieve your goal with a custom plugin. There isn't an inbuilt middleware that allows you to transform the query parameters to your desired result.
-The code for your plugin would be in the Request lifecycle and would take advantage of the request object. There are a few examples that can get you started on using custom plugins
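-A minimal sketch of the conversion logic only, in plain JavaScript -- it is not tied to any particular Tyk plugin API, and wiring it into the request hook of a custom plugin is left to the examples linked above:
-// Convert a single camelCase name to snake_case
-function toSnakeCase(name) {
-  return name.replace(/([a-z0-9])([A-Z])/g, '$1_$2').toLowerCase();
-}
-
-// Rewrite the keys of a query string, e.g. firstName=John&lastName=Doe
-// becomes first_name=John&last_name=Doe
-function convertQueryString(qs) {
-  return qs.split('&').map(function (pair) {
-    var parts = pair.split('=');
-    return toSnakeCase(parts[0]) + (parts.length > 1 ? '=' + parts[1] : '');
-  }).join('&');
-}
-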
-",Tyk
-"I am in the process of implementing an API Gateway as a point of access to several existing APIs that run as microservices.
-Each microservice API is defined in OpenAPI and runs an instance of swagger-ui to document and expose endpoints.  Everything is written in Ruby on Rails as separate API-only projects.
-I'm looking at Kong or Tyk in the role of API Gateway.  Is it possible with either project to run swagger-ui on the gateway to document available routes through the gateway and to allow authenticated users to try the various endpoints exposed by the different services in one place rather than per-service?  If not, does either project provide such an interface in any form?
-","1. Speaking for Kong, it does not provide this. But you can host a Swagger-UI instance behind the gateway just like a regular service. Swagger-UI is capable of serving multiple specs. Have a look at the urls parameter of the config:
-https://swagger.io/docs/open-source-tools/swagger-ui/usage/configuration/
-You will get a dropdown box on the top right to select the desired service.
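-A minimal sketch of what that configuration could look like (the spec URLs and service names here are placeholders, not taken from your setup):
-// swagger-initializer.js
-window.ui = SwaggerUIBundle({
-  dom_id: '#swagger-ui',
-  urls: [
-    { url: '/service-a/openapi.json', name: 'Service A' },
-    { url: '/service-b/openapi.json', name: 'Service B' }
-  ],
-  'urls.primaryName': 'Service A' // spec shown by default
-});
-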
-The swagger docs should have a server url according to your API gateway and also the desired authentication scheme(s). If the request goes through an API gateway, it must have the correct authentication mechanism in place.
-Keep in mind that you may have to have CORS configured, if Swagger-UI and your services are served from different domains.
-
-2. I am only familiar with Kong, so I can only speak for that product. Kong has what is known as the ""developer portal""; it is intended to integrate with the gateway itself and serve API specs for consumers to view.
-You could certainly serve your own HTML-type applications via the Kong Gateway product too, but the developer portal might be the more optimal solution, saving the resources required to do so.
-Personally, I use their Insomnia product (like Postman) to maintain and push the Swagger specs directly to the dev portal.
-
-3. Yes for Tyk. You can design with Swagger-UI and import the generated OpenAPI specification into Tyk gateway for the routes. The gateway immediately populates the routes and starts proxying API traffic. You can then choose to further enforce authentication or add middleware to these routes.
-https://tyk.io/docs/getting-started/using-oas-definitions/create-an-oas-api/
-For consumption, you can also publish APIs on to Tyk's developer portal with OpenAPI documentation.
-https://tyk.io/docs/tyk-stack/tyk-developer-portal/enterprise-developer-portal/getting-started-with-enterprise-portal/publish-api-products-and-plans/
-",Tyk
-"I am trying to run the official docker image by doing the following
-docker pull consul
-docker run -d --name=dev-consul -p 8500:8500 consul
-
-When I try to access the consul server using curl I get an empty reply 
-   vagrant@docker:~$ curl localhost:8500/v1/catalog/nodes --verbose
-* Hostname was NOT found in DNS cache
-*   Trying ::1...
-* Connected to localhost (::1) port 8500 (#0)
-> GET /v1/catalog/nodes HTTP/1.1
-> User-Agent: curl/7.35.0
-> Host: localhost:8500
-> Accept: */*
-> 
-* Empty reply from server
-* Connection #0 to host localhost left intact
-curl: (52) Empty reply from server
-
-What am I missing?
-","1. I start consul with:
-docker run  -p 8500:8500 -p 8600:8600/udp --name=consul consul:v0.6.4 agent -server -bootstrap -ui -client=0.0.0.0
-
-
-2. docker run -d -p 8500:8500 -p 8600:8600/udp --name=my-consul consul agent -server -ui -node=server-1 -bootstrap-expect=1 -client=0.0.0.0
-
-If this command doesn't work, replace the code with this one. -client=""0.0.0.0"" (argument in double quotation marks)
-docker run -d -p 8500:8500 -p 8600:8600/udp --name=my-consul consul agent -server -ui -node=server-1 -bootstrap-expect=1 -client=""0.0.0.0""
-
-
-3. download latest consul docker image ... just issue
-docker pull hashicorp/consul:latest    # pull the latest image
-
-as per doc at  https://github.com/hashicorp/consul/issues/17973
-# docker pull consul:latest   # <-- BAD its obsolete ... replace with above
-
-Ignore the bad HashiCorp Consul doc at https://developer.hashicorp.com/consul/tutorials/archive/docker-container-agents as it incorrectly gives bad advice ... for a good doc see the above GitHub ticket.
-",Consul
-"Trying to setup Stolon on docker swarm, right now to simplify I have all services running on the same host, on the manager node.
-For the life of me I can't seem to get past the error log message from keeper
-Keeper logs
-app_stack_stolon-keeper.1.dsyf1a7juv4u@manager1    | Starting Stolon as a keeper...
-app_stack_stolon-keeper.1.dsyf1a7juv4u@manager1    | Waiting for Consul to be ready at consul:8500...
-app_stack_stolon-keeper.1.dsyf1a7juv4u@manager1    | Waiting for Consul to start...
-app_stack_stolon-keeper.1.dsyf1a7juv4u@manager1    | Waiting for Consul to start...
-app_stack_stolon-keeper.1.dsyf1a7juv4u@manager1    | Consul is ready.
-app_stack_stolon-keeper.1.dsyf1a7juv4u@manager1    | 2024-04-22T13:18:57.328Z   INFO    cmd/keeper.go:2091   exclusive lock on data dir taken
-app_stack_stolon-keeper.1.dsyf1a7juv4u@manager1    | 2024-04-22T13:18:57.332Z   INFO    cmd/keeper.go:569    keeper uid       {""uid"": ""postgres_dsyf1a7juv4u1iwyjj6434ldx""}
-app_stack_stolon-keeper.1.dsyf1a7juv4u@manager1    | 2024-04-22T13:18:57.337Z   INFO    cmd/keeper.go:1048   no cluster data available, waiting for it to appear
-app_stack_stolon-keeper.1.dsyf1a7juv4u@manager1    | 2024-04-22T13:19:02.345Z   INFO    cmd/keeper.go:1080   our keeper data is not available, waiting for it to appear
-app_stack_stolon-keeper.1.dsyf1a7juv4u@manager1    | 2024-04-22T13:19:07.347Z   INFO    cmd/keeper.go:1080   our keeper data is not available, waiting for it to appear
-app_stack_stolon-keeper.1.dsyf1a7juv4u@manager1    | 2024-04-22T13:19:12.349Z   INFO    cmd/keeper.go:1080   our keeper data is not available, waiting for it to appear
-app_stack_stolon-keeper.1.dsyf1a7juv4u@manager1    | 2024-04-22T13:19:17.352Z   INFO    cmd/keeper.go:1141   current db UID different than cluster data db UID        {""db"": """", ""cdDB"": ""8198992d""}
-app_stack_stolon-keeper.1.dsyf1a7juv4u@manager1    | 2024-04-22T13:19:17.352Z   INFO    cmd/keeper.go:1148   initializing the database cluster
-app_stack_stolon-keeper.1.dsyf1a7juv4u@manager1    | 2024-04-22T13:19:17.384Z   ERROR   cmd/keeper.go:1174   failed to stop pg instance       {""error"": ""cannot get instance state: exit status 1""}
-app_stack_stolon-keeper.1.dsyf1a7juv4u@manager1    | 2024-04-22T13:19:22.387Z   ERROR   cmd/keeper.go:1110   db failed to initialize or resync
-
-Docker Compose
-version: '3.8'
-
-services:
-  consul:
-    image: dockerhub-user/app-consul:latest
-    volumes:
-      - console_data:/consul/data
-    ports:
-      - '8500:8500'  # Expose the Consul UI and API port
-      - ""8400:8400""
-      - ""8301-8302:8301-8302""
-      - ""8301-8302:8301-8302/udp""
-      - ""8600:8600""
-      - ""8600:8600/udp""
-    networks:
-      - shared_swarm_network
-    deploy:
-      placement:
-        constraints: [node.role == manager] # change to worker later if needed
-      restart_policy:
-        condition: on-failure
-    environment:
-      CONSUL_BIND_INTERFACE: 'eth0'
-      CONSUL_CLIENT_INTERFACE: 'eth0'
-    command: ""agent -server -ui -bootstrap -client=0.0.0.0 -bind={{ GetInterfaceIP 'eth0' }} -data-dir=/consul/data""
-
-  # Managing Stolon clusters, providing operational control.
-  stolon-ctl:
-    image: dockerhub-user/app-stolon-ctl:latest
-    depends_on:
-      - consul
-    networks:
-      - shared_swarm_network
-    deploy:
-      placement:
-        constraints: [node.role == manager]
-
-  # Runs Stolon Keeper managing PostgreSQL data persistence and replication.
-  stolon-keeper:
-    image: dockerhub-user/app-stolon:latest
-    depends_on:
-      - stolon-ctl
-      - consul
-    environment:
-      - ROLE=keeper
-      - STKEEPER_UID=postgres_{{.Task.ID}}
-      - PG_REPL_USERNAME=repluser
-      - PG_REPL_PASSWORD=replpass
-      - PG_SU_USERNAME=postgres
-      - PG_SU_PASSWORD=postgres
-      - PG_APP_USER=app_user
-      - PG_APP_PASSWORD=mysecurepassword
-      - PG_APP_DB=app_db
-    volumes:
-      - stolon_data:/stolon/data
-      - pg_data:/var/lib/postgresql/data
-      - pg_log:/var/log/postgresql
-    networks:
-      - shared_swarm_network
-    deploy:
-      placement:
-        constraints: [node.role == manager]
-
-  # Deploys Stolon Sentinel for monitoring and orchestrating cluster failovers.
-  stolon-sentinel:
-    image: dockerhub-user/app-stolon:latest
-    environment:
-      - ROLE=sentinel
-    networks:
-      - shared_swarm_network
-    deploy:
-      placement:
-        constraints: [node.role == manager]
-    depends_on:
-      - stolon-keeper
-      - consul
-
-volumes:
-  stolon_data:
-  console_data:
-  pg_data:
-  pg_log:
-
-networks:
-  shared_swarm_network:
-    external: true
-
-Dockerfile
-# Use the official PostgreSQL image as a base
-FROM postgres:16.2
-
-# Define the version of Stolon being used
-ENV STOLON_VERSION v0.17.0
-
-# Install necessary packages
-RUN apt-get update && \
-    apt-get install -y curl unzip && \
-    rm -rf /var/lib/apt/lists/* 
-
-# Download and extract Stolon
-RUN curl -L https://github.com/sorintlab/stolon/releases/download/${STOLON_VERSION}/stolon-${STOLON_VERSION}-linux-amd64.tar.gz -o stolon.tar.gz && \
-    mkdir -p /stolon-installation && \
-    tar -xzf stolon.tar.gz -C /stolon-installation && \
-    ls /stolon-installation && \
-    mv /stolon-installation/*/bin/* /usr/local/bin/
-
-# Clean up installation files
-RUN rm -rf /stolon-installation stolon.tar.gz && \
-    apt-get purge -y --auto-remove unzip
-
-# Verify binaries are in the expected location
-RUN ls /usr/local/bin/stolon-*
-
-# Set up environment variables
-ENV STOLONCTL_CLUSTER_NAME=stolon-cluster \
-    STOLONCTL_STORE_BACKEND=consul \
-    STOLONCTL_STORE_URL=http://consul:8500 \
-    CONSUL_PORT=8500 \
-    STKEEPER_DATA_DIR=/stolon/data \
-    PG_DATA_DIR=/var/lib/postgresql/data \
-    PG_BIN_PATH=/usr/lib/postgresql/16/bin \
-    PG_PORT=5432
-
-# Expose PostgreSQL and Stolon proxy ports
-EXPOSE 5432 5433
-
-# Copy the entrypoint script into the container
-COPY script/entrypoint.sh /entrypoint.sh
-
-# Make the entrypoint script executable
-RUN chmod +x /entrypoint.sh
-
-# Set the entrypoint script as the entrypoint for the container
-ENTRYPOINT [""/entrypoint.sh""]
-
-
-Entrypoint.sh
-#!/bin/bash
-
-# Fetch the IP address of the container
-IP_ADDRESS=$(hostname -I | awk '{print $1}')
-
-if [ ""$ROLE"" = ""sentinel"" ]; then
-    # Verify registration with Consul
-    while ! curl -s ""http://$STOLONCTL_STORE_BACKEND:$CONSUL_PORT/v1/kv/stolon/cluster/$STOLONCTL_CLUSTER_NAME/keepers/info?keys"" | grep -q ""$KEEPER_ID""; do
-        echo ""Keeper not registered in Consul, waiting...""
-        sleep 1
-    done
-    echo ""Keeper is registered in Consul.""
-fi
-
-
-case ""$ROLE"" in
-  ""keeper"")
-    exec stolon-keeper \
-      --data-dir $STKEEPER_DATA_DIR \
-      --cluster-name $STOLONCTL_CLUSTER_NAME \
-      --store-backend $STOLONCTL_STORE_BACKEND \
-      --store-endpoints $STOLONCTL_STORE_URL \
-      --pg-listen-address $IP_ADDRESS \
-      --pg-repl-username $PG_REPL_USERNAME \
-      --pg-repl-password $PG_REPL_PASSWORD \
-      --pg-su-username $PG_SU_USERNAME \
-      --pg-su-password $PG_SU_PASSWORD \
-      --uid $STKEEPER_UID \
-      --pg-bin-path $PG_BIN_PATH \
-      --pg-port $PG_PORT
-    ;;
-  ""sentinel"")
-    exec stolon-sentinel \
-      --cluster-name $STOLONCTL_CLUSTER_NAME \
-      --store-backend $STOLONCTL_STORE_BACKEND \
-      --store-endpoints $STOLONCTL_STORE_URL
-    ;;
-  ""proxy"")
-    exec stolon-proxy \
-      --cluster-name $STOLONCTL_CLUSTER_NAME \
-      --store-backend $STOLONCTL_STORE_BACKEND \
-      --store-endpoints $STOLONCTL_STORE_URL \
-      --listen-address 0.0.0.0
-    ;;
-  *)
-    echo ""Unknown role: $ROLE""
-    exit 1
-    ;;
-esac
-
-
-I checked network connectivity; Consul is up and running fine, and sentinel and proxy are also working as expected, albeit waiting for the database to be ready.
-","1. Can you please confirm if you have initiated cluster with authenticated user ?
-",Consul
-"I am trying to setup a consul backed vault cluster.
-My consul cluster is working fine however when I am setting up my vault consul agent, I need to give an agent token with policy to have write access on node.
-Basically, I want that my vault consul agents should be able to register nodes with name starting only with ""vault-"".
-For this I tried policy below
-agent_prefix """" {
-  policy = ""write""
-}
-node ""vault-*"" {
-  policy = ""write""
-}
-node_prefix """" {
-  policy = ""read""
-}
-service_prefix """" {
-  policy = ""read""
-}
-session_prefix """" {
-  policy = ""read""
-}
-
-And in my consul config I gave node_name=vault-0/1/2
-I tried using a wildcard in my policy for write access for a specific node name and read access for all, but I am getting the error below:
-agent: Coordinate update blocked by ACLs: accessorID=3db5e2e7-3264-50a9-c8f1-a5c955c5bec0
-
-Actually, I want my agents to be able to register their nodes with specific names only, so I can identify them. And each service will have a separate agent token with a specific policy.
-","1. Consul's ACL system supports defining two types of rules; prefix-based rules, and exact matching rules. Per https://www.consul.io/docs/security/acl/acl-rules#rule-specification,
-
-When using prefix-based rules, the most specific prefix match determines the action. This allows for flexible rules like an empty prefix to allow read-only access to all resources, along with some specific prefixes that allow write access or that are denied all access. Exact matching rules will only apply to the exact resource specified.
-
-When creating a token for the Consul agents which are co-located with the Vault servers, you can use the following policy.
-## consul-agent-policy.hcl
-
-# Allow the agent write access to agent APIs on nodes starting with the name 'vault-'.
-agent_prefix ""vault-"" {
-  policy = ""write""
-}
-
-# Allow registering a node into the catalog if the name starts with 'vault-'
-node_prefix ""vault-"" {
-  policy = ""write""
-}
-
-# Allow the node to resolve any service in the datacenter
-service_prefix """" {
-  policy = ""read""
-}
-
-You should not need node:read or session:read privileges for the Consul agents, so I have removed those from the example policy.
-In Consul 1.8.1+ you can simplify this further by using node identities which eliminates the need to create node-specific ACL policies if you want to lock down the token's policy so that it can only register a specific name (e.g., vault-01).
-$ consul acl token create -node-identity=vault-01:dc1
-
-",Consul
-"I understand this may be an old item to discuss. Yet, I have not come across any proper document to answer my query
-Context
-We have 18 VMs running monolith applications and databases. 12 apps are candidate to move to a new 3 node docker swarm setup (docker ce). The swarm is deployed on air-gapped environment, hence private registry and images are used. The network is heavily dependent on the on-prem AD for existing service naming. There's a mix of other VMs in other 4 VM infra (16 physical servers running 48 VMs, mostly Windows, BSD and Solaris). The network is 10g end-to-end (if this helps) and current utilization is around 3% during peak hours. Not bringing up the DMZ here, since the extension of this docker swarm will be done for DMZ too.
-Kubernetes is not an option here due to complexity (also I am more fluent in vanilla docker), since one guy (that's me) is managing the infra (network, FW, servers, storage, Linux/UNIX), deployments. Also gitlab for upcoming pipeline for the new env.
-We want to gradually move the 12 apps to the swarm in 3 batches, dependent on their relation with each other. Timeline for 1st phase to complete UAT is 20th May 2024.
-
-There's no way to re-write apps,
-Binaries, configs and volumes will be moved to images, and
-There's no direct dependency on IPs, as either it's /etc/hosts or the DNS helping with the names
-Consul is running on all 3 docker hosts directly on the host, rather than in docker. Cluster setup is on all 3 hosts. No envoy setup yet. Consul's DNS is also enabled on port 53, while UI and other grpc ports kept as-is.
-
-Problem
-We want to use Consul's DNS for all services running in Docker Swarm and pass the DNS config to Docker. However, since there's no way to re-write the code, we want to ensure the services are registered dynamically in Consul so that they are discoverable from other containers. We could use a sidecar proxy for this, but there's just no proper documentation for our use case. Also, we're not sure if we need a sidecar at all, since we're going to rely on Consul's DNS and forwarding to AD for other legacy/non-Docker services.
-Also, we're running on debian 12, hence no OEMs are involved in this.
-Please help!
-","1. 
-we want to ensure the services are registered dynamically on consul, so that its discoverable from other dockers running.
-
-Registrator can be used to automatically register Docker containers into Consul. The project hasn't seen a lot of updates as of late, but as far as I know it still works so it should be a viable option.
-",Consul
-"I am trying to follow the istio docs on deploying the bookinfo app and setting up ingress controllers at - https://istio.io/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-ip-and-ports 
-After creating an istio gateway as -
-**kubectl apply -n my-bookinfo -f gateway.yaml**
-
-apiVersion: networking.istio.io/v1alpha3
-kind: Gateway
-metadata:
-  name: httpbin-gateway
-spec:
-  selector:
-    istio: ingressgateway # use Istio default gateway implementation
-  servers:
-  - port:
-      number: 80
-      name: http
-      protocol: HTTP
-    hosts:
-    - ""httpbin.example.com""
-
-Response: gateway.networking.istio.io/httpbin-gateway configured
-when I try to view it as - 
-kubectl get gateway -n my-bookinfo 
-
-I don't get any resources back; instead I get ""No resources found"".
-What am I missing here? Should I not be able to see the gateway resources? I am not even sure that they got created. How do I validate it? 
-","1. in this case the solution is to use the full resource api name:
-kubectl get gateway.networking.istio.io -n my-bookinfo 
-
-
-2. Gateways and ingress work very closely together; to open a port you have to specify the port that you are opening for your ingress. Also,
-
-if you are using a TLS certificate as a secret, then the secret should be in
-the same namespace as the Gateway and ingress.
-
-This is a Sample Gateway file
-
-apiVersion: networking.istio.io/v1alpha3
-kind: Gateway
-metadata:
-  name: istio-gateway
-  namespace: mobius-master
-spec:
-  selector:
-    istio: ingress
-  servers:
-  - port:
-      number: 80
-      name: http
-      protocol: HTTP
-    hosts:
-    - ""*""
-    tls:
-      httpsRedirect: false  # Enable HTTPS redirection
-  - hosts:
-    - ""ig.xxx.ai""
-    port:
-      name: https
-      number: 443
-      protocol: HTTPS
-    tls:
-      mode: SIMPLE
-      credentialName: xxx-ai-ssl-cert
-
-One more important point is the ingress selector in Gateway files:
- selector:
-    istio: ingress
-
-Some Istio versions have the selector as:
- selector:
-    istio: ingressgateway
-
-The best way to debug this is to open Kiali and check whether the selectors exist in the namespaces.
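-Alternatively, a quick way to check which labels your ingress gateway pods actually carry (assuming the default istio-system namespace):
-kubectl -n istio-system get pods --show-labels | grep -i ingress
-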
-",Istio
-"Im trying to install istio with istioctl, but pulling the images from my own registry. While the installation im getting following error:
-This installation will make default injection and validation pointing to the default revision, and originally it was pointing to the revisioned one.
-This will install the Istio 1.22.0 ""default"" profile (with components: Istio core, Istiod, and Ingress gateways) into the cluster. Proceed? (y/N) y
-2024-05-22T14:12:34.935176Z debug   sync complete   name=istiod analyzer attempt=1 time=897ns
-✔ Istio core installed                                                                                                                                                                                             
-✘ Istiod encountered an error: failed to wait for resource: resources not ready after 5m0s: context deadline exceeded                                                                                              
-  Deployment/istio-system/istiod (container failed to start: ImagePullBackOff: Back-off pulling image ""192.168.1.1:8082/pilot:1.22.0"")
-✘ Ingress gateways encountered an error: failed to wait for resource: resources not ready after 5m0s: context deadline exceeded                                                                                    
-  Deployment/istio-system/istio-ingressgateway (container failed to start: ContainerCreating: )
-- Pruning removed resources                                                                                                                                                                                        Error: failed to install manifests: errors occurred during operation
-
-I have a Kubernetes Cluster:
-kubectl get nodes
-NAME        STATUS   ROLES                       AGE     VERSION
-control-1   Ready    control-plane,etcd,master   4d22h   v1.28.7+rke2r1
-control-2   Ready    <none>                      4d22h   v1.28.7+rke2r1
-control-3   Ready    <none>                      4d22h   v1.28.7+rke2r1
-
-These servers do not have an internet connection.
-I got a private registry running on 192.168.1.1:8082 with the required docker images.
-On control-1 I got docker installed.
-My /etc/docker/daemon.json:
-{""insecure-registries"":[""192.168.1.1:8082""]}
-
-Im logged in the registry with:
-sudo docker login 192.168.1.1:8082
-
-My created credentials are saved under dockerconfig.json.
-On control-1 I can successfully pull the required image with:
-sudo docker pull 192.168.1.1:8082/pilot:1.22.0
-sudo docker images
-REPOSITORY                                              TAG       IMAGE ID       CREATED         SIZE
-192.168.1.1:8082/pilot                                  1.22.0    99ceea62d078   13 days ago     200MB
-
-I downloaded the file istio-1.22.0-linux-amd64.tar.gz on a machine with an internet connection and copied it to control-1.
-Then I extracted it with:
-tar -xvf istio-1.22.0-linux-amd64.tar.gz
-
-I added istioctl to the path:
-cd istio-1.22.0
-export PATH=$PWD/bin:$PATH
-
-I created a secret with:
-kubectl create namespace istio-system
-kubectl create secret generic regcred \
-    --from-file=.dockerconfigjson=/home/ubuntu/istio-1.22.0/dockerconfig.json \
-    --type=kubernetes.io/dockerconfigjson --namespace=istio-system
-
-I edit samples/operator/default-install.yaml:
-apiVersion: install.istio.io/v1alpha1
-kind: IstioOperator
-metadata:
-  namespace: istio-system
-  name: istio-operator
-spec:
-  profile: default
-  hub: 192.168.1.1:8082
-  values:
-    global:
-      imagePullSecrets:
-        - regcred
-
-Then I try to install Istio with:
-istioctl install -f samples/operator/default-install.yaml
-
-But I run into the error mentioned above.
-","1. If the istio installation has to refer private repo have configuration of istio operator to pull the images.  Below is the sample
-apiVersion: install.istio.io/v1alpha1
-kind: IstioOperator
-metadata:
-  namespace: istio-system
-  name: example-istiocontrolplane
-spec:
-  profile: default
-  values:
-    global:
-      hub: <repo-name>
-
-More details on customizing the installation are in their documentation. The details of hub are also in the istioctl documentation.
-One more option is to manually copy the images below onto the worker nodes. This needs to be done on a machine where internet is available: manually save the images into a tar archive using ""docker save"", move the tars to the worker nodes, and load them into the docker daemon using ""docker load"" (see the sketch after the image list).
-docker.io/istio/pilot
-docker.io/istio/proxyv2
-
-",Istio
-"Hi I'm trying to debug a service w/an envoy problem deployed in kuma service mesh.  The service in question uses the default kuma.io/sidecar-injection: enabled annotation to inject the kuma sidecar and enable the envoy proxy.
-One blocker to debugging is the service is being hit every few seconds with readiness checks(this complicates things because the additional requests trigger breakpoints out of band of my current request I'm trying to debug).
-I've attempted to disable them at the kuma level with:
- KUMA_RUNTIME_KUBERNETES_VIRTUAL_PROBES_ENABLED:            false
-
-env var set on the kuma-control-plane
-No luck.  Additionally I've also tried defining a health check for the app that just pings every 5 minutes or so, but that also didn't seem to change the behavior of the existing readiness check.
-EDIT:
-Looks like this readiness healthcheck is defined on the injected kuma-sidecar
-    Readiness:  http-get http://:9901/ready delay=1s timeout=3s period=5s #success=1 #failure=12
-
-But I'm still unsure of how to go about overriding a sidecar readiness check.
-Much appreciation for any suggestions here.
-","1. It seems you need to remove the probes from your k8s deployment manifest. Kuma only rewrites existing probes.
-
-2. You can create a ContainerPatch resource and apply it to the pod or to kuma-cp.
-Reference address:
-https://docs.konghq.com/mesh/latest/production/dp-config/dpp-on-kubernetes/#custom-container-configuration
-",Kuma
-"we have applications that work with Kafka (MSK), we noticed that once pod is starting to shutdown (during autoscaling or deployment) the app container loses all active connections and the SIGTERM signal causes Kuma to close all connections immediately which cause data loss due to unfinished sessions (which doesn’t get closed gracefully) on the app side and after that we receive connection errors to the kafka brokers,
-is anyone have an idea how to make Kuma wait some time once it gets the SIGTERM signal to let the sessions close gracefully?
-or maybe a way to let the app know before the kuma about the shutsown?
-or any other idea ?
-","1. This is known issue getting fixed in the coming 1.7 release: https://github.com/kumahq/kuma/pull/4229
-",Kuma
-"I'm trying to create a demo of a service mesh using Kuma, and I'm confused about how to configure a traffic traffic split when viewing the example in the docs.  I have two versions of a microservice which return different results depending on an environment variable which is defined in the Kubernetes config.  And the service associated with the pods is configured by its config which pod to use (not sure if this the right way to do this):
-apiVersion: v1
-kind: Pod
-metadata:
-  name: dntapi-mil
-  namespace: meshdemo
-  labels:
-    uservice: dntapi
-    format: military
-spec:
-  containers:
-  - name: dntapi
-    image: meshdemo:dntapi
-    ports:
-    - name: http
-      containerPort: 4000
-    env:
-      - name: MILITARY
-        value: ""true""
-
----
-
-apiVersion: v1
-kind: Pod
-metadata:
-  name: dntapi-std
-  namespace: meshdemo
-  labels:
-    uservice: dntapi
-    format: standard
-spec:
-  containers:
-  - name: dntapi
-    image: meshdemo:dntapi
-    ports:
-    - name: http
-      containerPort: 4000
-    env:
-      - name: MILITARY
-        value: ""false""
-
----
-
-apiVersion: v1
-kind: Service
-metadata:
-  name: dntapi
-  namespace: meshdemo
-spec:
-  selector:
-    uservice: dntapi
-    format: military
-  ports:
-  - protocol: TCP
-    port: 4000
-    targetPort: 4000
-
-This works from a purely K8s perspective if I change the selector on the service, but looking at the Kuma example of split traffic:
-  conf:
-    split:
-      - weight: 90
-        destination:
-          kuma.io/service: redis_default_svc_6379
-          version: '1.0'
-      - weight: 10
-        destination:
-          kuma.io/service: redis_default_svc_6379
-          version: '2.0'
-
-To what is ""version"" referring when associated with the service (and I have to admit that I don't understand how there could be two services with the same identifier).  Are these K8s selectors?
-I should add that when I inspect services with kumactl, I see two for this microservice, one without a port name:
-dntapi-std_meshdemo_svc          Online   1/1
-dntapi_meshdemo_svc_4000         Online   1/1
-
-Thanks in advance.
-","1. Change your Service definition to use only a label common to both variations of the workload (looks like this would be uservice: dntapi). Then use the format label as ""tags"" in the Kuma TrafficRoute destination, just as the example uses the version tag (which is/can be derived directly from the Kubernetes labels). This would allow you to control what percentage of traffic is sent to Pods labeled format: standard and what percentage is sent to Pods labeled format: military.
-See https://github.com/kumahq/kuma-demo/tree/master/kubernetes for another example. Scroll down to the ""Traffic Routing"" section; that example does exactly what I describe above.
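-A sketch of what that could look like with the names from your question (the weights are arbitrary, and the exact service/tag values are assumptions derived from your kumactl output and pod labels -- verify them with kumactl inspect dataplanes):
-apiVersion: v1
-kind: Service
-metadata:
-  name: dntapi
-  namespace: meshdemo
-spec:
-  selector:
-    uservice: dntapi
-  ports:
-  - protocol: TCP
-    port: 4000
-    targetPort: 4000
----
-apiVersion: kuma.io/v1alpha1
-kind: TrafficRoute
-metadata:
-  name: dntapi-split
-mesh: default
-spec:
-  sources:
-  - match:
-      kuma.io/service: '*'
-  destinations:
-  - match:
-      kuma.io/service: dntapi_meshdemo_svc_4000
-  conf:
-    split:
-    - weight: 90
-      destination:
-        kuma.io/service: dntapi_meshdemo_svc_4000
-        format: standard
-    - weight: 10
-      destination:
-        kuma.io/service: dntapi_meshdemo_svc_4000
-        format: military
-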
-",Kuma
-"I am new to LinkerD and I can't get retries to work. I have configured a ServiceProfile:
-apiVersion: linkerd.io/v1alpha2
-kind: ServiceProfile
-metadata:
-  creationTimestamp: null
-  name: report.test-linkerd.svc.cluster.local
-  namespace: test-linkerd
-spec:
-  routes:
-    - name: All GET Requests
-      condition:
-        method: GET
-        pathRegex: "".*""
-      isRetryable: true
-    - name: All POST Requests
-      condition:
-        method: POST
-        pathRegex: "".*""
-      isRetryable: true
-  retryBudget:
-    retryRatio: 0.2
-    minRetriesPerSecond: 10
-    ttl: 10s
-
-I have a service profile that has two routes, one for GET and one for POST. I have a retry budget set for the service profile.
-When I try to make a condition for a retry (killing a downstream pod), I see that the retries are not happening.
-I started to investigate,
-➜  ~ linkerd viz routes --to deploy/report -n test-linkerd -o wide deploy/api-gateway
-ROUTE                        SERVICE   EFFECTIVE_SUCCESS   EFFECTIVE_RPS   ACTUAL_SUCCESS   ACTUAL_RPS   LATENCY_P50   LATENCY_P95   LATENCY_P99
-All GET Requests    portfolio-report                   -               -                -            -             -             -             -
-All POST Requests   portfolio-report                   -               -                -            -             -             -             -
-[DEFAULT]           portfolio-report                   -               -                -            -             -             -             -
-
-And I see that the routes are not being matched. I have checked the logs of the api-gateway and I see that the requests are being made to the report service.
-Then I ran the following command to check the metrics of the report service:
-➜  ~ linkerd diagnostics proxy-metrics -n test-linkerd deploy/report | grep route_response_total
-# HELP route_response_total Total count of HTTP responses.
-# TYPE route_response_total counter
-route_response_total{direction=""inbound"",dst=""report.test-linkerd.svc.cluster.local:80"",rt_route=""All POST Requests"",status_code=""200"",classification=""success"",grpc_status="""",error=""""} 5
-route_response_total{direction=""inbound"",dst=""report.test-linkerd.svc.cluster.local:8090"",rt_route=""All GET Requests"",status_code=""200"",classification=""success"",grpc_status="""",error=""""} 9
-route_response_total{direction=""inbound"",dst=""report.test-linkerd.svc.cluster.local:8090"",rt_route=""All GET Requests"",status_code=""304"",classification=""success"",grpc_status="""",error=""""} 44
-
-And I see that the report service is receiving requests with the routes All GET Requests and All POST Requests.
-Additionally, in the dashboard, I do not see any requests from the api-gateway to the report service. They do show as ""meshed"", but there is no green bar or metrics.
-The only other clue I have found is the logs of the linkerd proxy container:
-{""timestamp"":""[  1355.773887s]"",""level"":""INFO"",""fields"":{""message"":""Connection closed"",""error"":""connection closed before message completed"",""client.addr"":""10.62.71.212:35084"",""server.addr"":""10.62.68.10:8080""},""target"":""linkerd_app_core::serve"",""spans"":[{""name"":""inbound""}],""threadId"":""ThreadId(1)""}
-
-Am I missing something in the configuration of the service profile?
-Am I misunderstanding how retries work in LinkerD?
-Any help would be appreciated. Thanks!
-Additional Info:
-Client version: stable-2.14.10
-Server version: stable-2.14.10
-
-linkerd check and linkerd viz check come back all good.
-","1. Your ServiceProfile works but I don't think it can pass your test scenario because the retry budget is depleted by the time the pod has restarted.
-To verify that it works in general, I have taken a copy of your ""All GET Requests"" route and created a full example.
-kind: Namespace
-apiVersion: v1
-metadata:
-  name: retries-test
-  annotations:
-    linkerd.io/inject: enabled
----
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: random-failures-api
-  namespace: retries-test
-  labels:
-    app: random-failures-api
-spec:
-  selector:
-    matchLabels:
-      app: random-failures-api
-  template:
-    metadata:
-      labels:
-        app: random-failures-api
-    spec:
-      containers:
-        - name: random-failures-api
-          image: ""ghcr.io/stzov/random-failing-api:rel-v0.1.6""
-          imagePullPolicy: Always
-          ports:
-            - name: http
-              containerPort: 5678
-              protocol: TCP
----
-apiVersion: v1
-kind: Service
-metadata:
-  name: random-failures-api
-  namespace: retries-test
-  labels:
-    app: random-failures-api    
-spec:
-  selector:
-      app: random-failures-api
-  ports:
-  - protocol: TCP
-    port: 5678
-    targetPort: 5678
-    appProtocol: http
----
-apiVersion: linkerd.io/v1alpha2
-kind: ServiceProfile
-metadata:
-  name: random-failures-api.retries-test.svc.cluster.local
-  namespace: retries-test
-spec:
-  routes:
-  - name: All GET Requests
-    condition:
-      method: GET
-      pathRegex: "".*""
-    isRetryable: true
-    responseClasses:
-    - condition:
-        status:
-          min: 400
-          max: 599
-      isFailure: true
-    timeout: 60s
-  retryBudget:
-    retryRatio: 0.2
-    minRetriesPerSecond: 10
-    ttl: 10s
-
-The random-failing-api image I used randomly generates occasional 4xx and 5xx errors when deployed in a pod.
-Once you deploy the above you'll need a curl enabled pod and try running:
-while true; do curl -v random-failures-api.retries-test.svc.cluster.local:5678/get ; sleep 1 ; done
-
-You will notice that you only get HTTP/1.1 200 OK responses, even though the relevant container returns random errors.
-If you remove the responseClasses configuration I added to your route then by default linkerd will consider only 5xx as an error so if you try to curl again you will occasionally get a 4xx response.
-So this shows that linkerd is doing what it is supposed to do but it's failing to cover the scenario you described. To be honest it would be nice if it was able to somehow queue the requests until the pod is up but I don't think this is in scope.
-An easy solution for your scenario would be to have a multiple replicas for your pod. In that case, when at least one pod is alive you will not get an error.
-The source code for the image I used can be found here
-",Linkerd
-"I've installed linkerd correctly (linkerd check --proxy -n linkerd checkings are all ok).
-After that, I've annotated my covid namespace with ""auto-injection"":
-$ kubectl annotate namespace covid linkerd.io/inject=enabled
-
-After having deployed my deployment:
-$ linkerd stat deployments -n covid
-NAME                MESHED   SUCCESS   RPS   LATENCY_P50   LATENCY_P95   LATENCY_P99   TCP_CONN
-dev-covid-backend      0/1         -     -             -             -             -          -
-
-$ linkerd stat pods -n covid
-NAME                                 STATUS   MESHED   SUCCESS   RPS   LATENCY_P50   LATENCY_P95   LATENCY_P99   TCP_CONN
-dev-covid-backend-7ccc987d4-494lv   Running      0/1         -     -             -             -             -          -
-
-As you can see, deployment is not meshed.
-I've triggered the heartbeat manually. I'm getting:
-time=""2020-05-05T12:29:39Z"" level=info msg=""running version stable-2.7.1""
-time=""2020-05-05T12:29:39Z"" level=error msg=""Prometheus query failed: unexpected result Prometheus result vector length: 0""
-time=""2020-05-05T12:29:39Z"" level=error msg=""Prometheus query failed: unexpected result Prometheus result vector length: 0""
-time=""2020-05-05T12:29:39Z"" level=error msg=""Prometheus query failed: unexpected result Prometheus result vector length: 0""
-time=""2020-05-05T12:29:39Z"" level=error msg=""Prometheus query failed: unexpected result Prometheus result vector length: 0""
-time=""2020-05-05T12:29:39Z"" level=error msg=""Prometheus query failed: unexpected result Prometheus result vector length: 0""
-time=""2020-05-05T12:29:39Z"" level=error msg=""Prometheus query failed: unexpected result Prometheus result vector length: 0""
-time=""2020-05-05T12:29:39Z"" level=error msg=""Prometheus query failed: unexpected result Prometheus result vector length: 0""
-time=""2020-05-05T12:29:39Z"" level=info msg=""Sending heartbeat: https://versioncheck.linkerd.io/version.json?install-time=1588663782&k8s-version=v1.17.3%2Bk3s1&meshed-pods=9&p99-handle-us=50000&source=heartbeat&total-rps=3&uuid=991db911-da8b-45c7-98b5-eb63e6162e8d&version=stable-2.7.1""
-time=""2020-05-05T12:29:43Z"" level=fatal msg=""Failed to send heartbeat: Check URL [https://versioncheck.linkerd.io/version.json?install-time=1588663782&k8s-version=v1.17.3%2Bk3s1&meshed-pods=9&p99-handle-us=50000&source=heartbeat&total-rps=3&uuid=991db911-da8b-45c7-98b5-eb63e6162e8d&version=stable-2.7.1] request failed with: Get https://versioncheck.linkerd.io/version.json?install-time=1588663782&k8s-version=v1.17.3%2Bk3s1&meshed-pods=9&p99-handle-us=50000&source=heartbeat&total-rps=3&uuid=991db911-da8b-45c7-98b5-eb63e6162e8d&version=stable-2.7.1: dial tcp: lookup versioncheck.linkerd.io on 10.43.0.10:53: server misbehaving""
-
-Any ideas?
-","1. The namespace annotation must be pre-existing to the pod deployment. Adding the namespace annotation while the pod is running will not inject the proxy sidecar on the fly.
-Did you restart the dev-covid-backend deployment after annotating the covid namespace?
-kubectl rollout restart deploy/dev-covid-backend -n covid
-
-The heartbeat check is unrelated to the auto-injection feature. You can check the proxy-injector logs:
-kubectl logs -f deploy/linkerd-proxy-injector -n linkerd
-
-as well as the events:
-kubectl get events -n covid
-
-If you see errors or messages there, they should help you find a resolution.
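-As a quick sanity check after the restart (a sketch; adjust the namespace if needed), you can list the container names in the pods and confirm linkerd-proxy is among them:
-kubectl -n covid get pods -o jsonpath='{.items[*].spec.containers[*].name}'
-
-linkerd check --proxy -n covid should also pass for the data plane in that namespace once the proxy is injected.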
-",Linkerd
-"We have a bunch of cronjobs in our env.  And they run linkerd-proxy as a sidecar.
-Well, somewhat often (but not always) the proxy container will fail after the main container is done. We ""think"" it might be due to open connections, but only because we read that open connections could cause it. We don't have any real evidence.
-But in the end we just don't care why.  We don't want the failed linkerd-proxy to cause the job to fail (and fire an alarm).  I found docs on podFailurePolicy.  But there are only two examples, and no links to more details on the format of the policy.
-One of the examples explains that I can ignore failures with certain exit codes from a container. But how would I say all exit codes? Bonus points if you know where the docs are for the policy in general, because I just can't seem to find anything on it.
-Edit: looking closer at the podFailurePolicy docs, I think it doesn't even do what I want; it just causes the failure not to count against the backoff limit and reruns the job. But I would still love to know the answer to the question anyway. :)
-","1. I think combining the Ignore action with the NotIn operator could achieve what you want?
-podFailurePolicy:
-  rules:
-    - action: Ignore
-      onExitCodes:
-        containerName: linkerd-proxy
-        operator: NotIn
-        values: [0]
-
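-For context on where that field lives, here is a rough sketch of a CronJob carrying it (the name and image are placeholders, and it needs a fairly recent Kubernetes version; note that podFailurePolicy requires the pod template's restartPolicy to be Never):
-apiVersion: batch/v1
-kind: CronJob
-metadata:
-  name: example-cronjob          # placeholder name
-spec:
-  schedule: '*/5 * * * *'
-  jobTemplate:
-    spec:
-      backoffLimit: 3
-      podFailurePolicy:
-        rules:
-          - action: Ignore
-            onExitCodes:
-              containerName: linkerd-proxy
-              operator: NotIn
-              values: [0]
-      template:
-        spec:
-          restartPolicy: Never
-          containers:
-            - name: main              # placeholder container
-              image: busybox:1.36     # placeholder image
-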
-Otherwise, I would probably advise simply creating a custom liveness probe for the linkerd-proxy container which always succeeds, so that Kubernetes sees it as healthy even when it ""fails""?
-",Linkerd
-"I installed Emacs 29.2 and sbcl 2.4.0.
-Then I installed slime using the command M-x package-install<Ret>slime<Ret>.  I can see that slime is installed (M-x list-packages):
-
-Now, I'm trying to get slime to work following these instructions:
-
-It is simple as:
-
-Open emacs
-Open your lisp file with fibonacci function
-Issue M-x slime
-Place your cursor over the fibonacci function and press C-c C-c to evaluate/compile it in Slime.
-switch to slime window and call (fibonacci 10)
-
-Screenshot example with hello-world function:
-
-In emacs, I opened a hello world .lisp file with C-x C-f. However, when I type M-x slime<Ret>, I get [No match].  Same for M-x slime-mode<Ret>.
-Here is my ~/.emacs file:
-(custom-set-variables
- ;; custom-set-variables was added by Custom.
- ;; If you edit it by hand, you could mess it up, so be careful.
- ;; Your init file should contain only one such instance.
- ;; If there is more than one, they won't work right.
- '(package-selected-packages '(slime paredit)))
-(custom-set-faces
- ;; custom-set-faces was added by Custom.
- ;; If you edit it by hand, you could mess it up, so be careful.
- ;; Your init file should contain only one such instance.
- ;; If there is more than one, they won't work right.
- )
-
-;;;;;;;;;;;;;;;;;;;;;;;
-
-;; I added the following according to slime install directions:
-
-(setq inferior-lisp-program ""sbcl"")
-
-sbcl is in my path:
-% which sbcl
-/opt/local/bin/sbcl
-
-% echo $PATH
-...:/Users/7stud/bin/:/opt/local/bin:....
-
-I tried altering the .emacs file to use the full path to sbcl:
-(setq inferior-lisp-program ""/opt/local/bin/sbcl"")
-
-but I still get M-x slime [No match].  I've been quitting emacs and relaunching it after I make changes to the .emacs file.
-Here is my ~/.emacs.d/elpa directory (which I haven't touched):
-% ls
-archives               macrostep-0.9.2.signed slime-2.29.1
-gnupg                  paredit-26             slime-2.29.1.signed
-macrostep-0.9.2        paredit-26.signed
-
-There were a bunch of warnings when I installed slime, but I can't find where those warnings are logged, so I can't post them.
-I was able to successfully install and use the package paredit. When I open a .lisp file, it opens in Lisp major-mode, and if I do M-x paredit-mode, that adds Paredit as a minor-mode:
-
-After adding Paredit as a minor-mode, parentheses get matched, so it works.
-I'm on macOS 12.5.1, and I installed emacs with:
-% sudo port install emacs-app 
-
-Edit: =======
-I uninstalled slime by displaying the package list, M-x list-packages, then searching for the slime listing, C-s, then typing d on the slime listing, then typing x.  I reinstalled slime by finding the slime listing again, then typing i on the slime line, then x.  Here are the warnings:
-⛔ Warning (comp): slime-autodoc.el:51:17: Warning: ‘eldoc-message’ is an obsolete function (as of eldoc-1.1.0); use ‘eldoc-documentation-functions’ instead.
-⛔ Warning (comp): slime-autodoc.el:52:15: Warning: ‘eldoc-message’ is an obsolete function (as of eldoc-1.1.0); use ‘eldoc-documentation-functions’ instead.
-⛔ Warning (comp): slime-autodoc.el:64:8: Warning: ‘eldoc-message’ is an obsolete function (as of eldoc-1.1.0); use ‘eldoc-documentation-functions’ instead.
-⛔ Warning (comp): slime-autodoc.el:106:8: Warning: ‘font-lock-fontify-buffer’ is for interactive use only; use ‘font-lock-ensure’ or ‘font-lock-flush’ instead.
-⛔ Warning (comp): slime-autodoc.el:165:14: Warning: ‘eldoc-display-message-p’ is an obsolete function (as of eldoc-1.6.0); Use ‘eldoc-documentation-functions’ instead.
-⛔ Warning (comp): slime-autodoc.el:166:10: Warning: ‘eldoc-message’ is an obsolete function (as of eldoc-1.1.0); use ‘eldoc-documentation-functions’ instead.
-⛔ Warning (comp): bridge.el:115:2: Warning: defvar `bridge-leftovers' docstring wider than 80 characters
-⛔ Warning (comp): slime-cl-indent.el:115:2: Warning: custom-declare-variable `lisp-align-keywords-in-calls' docstring has wrong usage of unescaped single quotes (use \= or different quoting)
-⛔ Warning (comp): slime-cl-indent.el:1448:2: Warning: defvar `common-lisp-indent-clause-joining-loop-macro-keyword' docstring has wrong usage of unescaped single quotes (use \= or different quoting)
-⛔ Warning (comp): hyperspec.el:1320:4: Warning: Alias for ‘common-lisp-hyperspec-glossary-function’ should be declared before its referent
-⛔ Warning (comp): slime.el:112:2: Warning: Alias for ‘slime-contribs’ should be declared before its referent
-⛔ Warning (comp): slime.el:277:12: Warning: defcustom for ‘slime-completion-at-point-functions’ fails to specify type
-⛔ Warning (comp): slime.el:689:4: Warning: Doc string after `declare'
-⛔ Warning (comp): slime.el:1160:6: Warning: ‘byte-compile-file’ called with 2 arguments, but accepts only 1
-⛔ Warning (comp): slime.el:2408:10: Warning: ‘hide-entry’ is an obsolete function (as of 25.1); use ‘outline-hide-entry’ instead.
-⛔ Warning (comp): slime.el:3081:12: Warning: ‘beginning-of-sexp’ is an obsolete function (as of 25.1); use ‘thing-at-point--beginning-of-sexp’ instead.
-⛔ Warning (comp): slime.el:3392:16: Warning: ‘beginning-of-sexp’ is an obsolete function (as of 25.1); use ‘thing-at-point--beginning-of-sexp’ instead.
-⛔ Warning (comp): slime.el:3681:20: Warning: ‘find-tag-marker-ring’ is an obsolete variable (as of 25.1); use ‘xref-push-marker-stack’ or ‘xref-go-back’ instead.
-⛔ Warning (comp): slime.el:4113:2: Warning: docstring has wrong usage of unescaped single quotes (use \= or different quoting)
-⛔ Warning (comp): slime.el:4960:6: Warning: ‘font-lock-fontify-buffer’ is for interactive use only; use ‘font-lock-ensure’ or ‘font-lock-flush’ instead.
-⛔ Warning (comp): slime.el:5536:10: Warning: ‘inhibit-point-motion-hooks’ is an obsolete variable (as of 25.1); use ‘cursor-intangible-mode’ or ‘cursor-sensor-mode’ instead
-⛔ Warning (comp): slime.el:5692:10: Warning: ‘inhibit-point-motion-hooks’ is an obsolete variable (as of 25.1); use ‘cursor-intangible-mode’ or ‘cursor-sensor-mode’ instead
-⛔ Warning (comp): slime.el:5795:10: Warning: ‘inhibit-point-motion-hooks’ is an obsolete variable (as of 25.1); use ‘cursor-intangible-mode’ or ‘cursor-sensor-mode’ instead
-⛔ Warning (comp): slime.el:6329:8: Warning: Obsolete calling convention for 'sit-for'
-⛔ Warning (comp): slime.el:6611:2: Warning: docstring has wrong usage of unescaped single quotes (use \= or different quoting)
-⛔ Warning (comp): slime.el:7187:4: Warning: ‘easy-menu-add’ is an obsolete function (as of 28.1); this was always a no-op in Emacs and can be safely removed.
-⛔ Warning (comp): slime.el:7194:4: Warning: ‘easy-menu-add’ is an obsolete function (as of 28.1); this was always a no-op in Emacs and can be safely removed.
-⛔ Warning (comp): slime.el:7312:2: Warning: docstring has wrong usage of unescaped single quotes (use \= or different quoting)
-⛔ Warning (comp): slime-parse.el:319:20: Warning: Stray ‘declare’ form: (declare (ignore args))
-⛔ Warning (comp): slime-repl.el:130:2: Warning: defvar `slime-repl-history-use-mark' docstring has wrong usage of unescaped single quotes (use \= or different quoting)
-⛔ Warning (comp): slime-repl.el:138:2: Warning: docstring has wrong usage of unescaped single quotes (use \= or different quoting)
-⛔ Warning (comp): slime-repl.el:978:2: Warning: docstring has wrong usage of unescaped single quotes (use \= or different quoting)
-⛔ Warning (comp): slime-repl.el:1580:22: Warning: Stray ‘declare’ form: (declare (ignore args))
-⛔ Warning (comp): slime-repl.el:1689:4: Warning: ‘easy-menu-add’ is an obsolete function (as of 28.1); this was always a no-op in Emacs and can be safely removed.
-⛔ Warning (comp): slime-presentations.el:601:2: Warning: docstring wider than 80 characters
-⛔ Warning (comp): slime-presentations.el:759:4: Warning: ‘easy-menu-add’ is an obsolete function (as of 28.1); this was always a no-op in Emacs and can be safely removed.
-⛔ Warning (comp): slime-presentations.el:760:4: Warning: ‘easy-menu-add’ is an obsolete function (as of 28.1); this was always a no-op in Emacs and can be safely removed.
-⛔ Warning (comp): slime-presentations.el:761:4: Warning: ‘easy-menu-add’ is an obsolete function (as of 28.1); this was always a no-op in Emacs and can be safely removed.
-⛔ Warning (comp): slime-presentations.el:762:4: Warning: ‘easy-menu-add’ is an obsolete function (as of 28.1); this was always a no-op in Emacs and can be safely removed.
-⛔ Warning (comp): slime-references.el:102:67: Warning: reference to free variable ‘name’
-⛔ Warning (comp): slime-references.el:107:15: Warning: ‘:info’ called as a function
-⛔ Warning (comp): slime-references.el:109:15: Warning: ‘t’ called as a function
-⛔ Warning (comp): slime-references.el:109:15: Warning: the function ‘t’ is not known to be defined.
-⛔ Warning (comp): slime-references.el:107:15: Warning: the function ‘:info’ is not known to be defined.
-⛔ Warning (comp): slime-references.el:106:13: Warning: the function ‘case’ is not known to be defined.
-⛔ Warning (comp): slime-package-fu.el:226:21: Warning: ‘looking-back’ called with 1 argument, but requires 2 or 3
-⛔ Warning (comp): slime-package-fu.el:263:14: Warning: ‘looking-back’ called with 1 argument, but requires 2 or 3
-⛔ Warning (comp): slime-trace-dialog.el:162:12: Warning: ‘inhibit-point-motion-hooks’ is an obsolete variable (as of 25.1); use ‘cursor-intangible-mode’ or ‘cursor-sensor-mode’ instead
-⛔ Warning (comp): slime-trace-dialog.el:248:16: Warning: ‘inhibit-point-motion-hooks’ is an obsolete variable (as of 25.1); use ‘cursor-intangible-mode’ or ‘cursor-sensor-mode’ instead
-⛔ Warning (comp): slime-trace-dialog.el:261:28: Warning: ‘inhibit-point-motion-hooks’ is an obsolete variable (as of 25.1); use ‘cursor-intangible-mode’ or ‘cursor-sensor-mode’ instead
-⛔ Warning (comp): slime-trace-dialog.el:352:13: Warning: ‘inhibit-point-motion-hooks’ is an obsolete variable (as of 25.1); use ‘cursor-intangible-mode’ or ‘cursor-sensor-mode’ instead
-⛔ Warning (comp): slime-trace-dialog.el:478:12: Warning: ‘inhibit-point-motion-hooks’ is an obsolete variable (as of 25.1); use ‘cursor-intangible-mode’ or ‘cursor-sensor-mode’ instead
-
-I checked the output in *Messages*, but it's too voluminous to post here.  Here is the end of the output:
-...
-...
-Warning: Optimization failure for make-ert-test: Handler: make-ert-test--cmacro
-(error ""Keyword argument :file-name not one of (:name :documentation :body :most-recent-result :expected-result-type :tags)"")
-Warning: Optimization failure for make-ert-test: Handler: make-ert-test--cmacro
-(error ""Keyword argument :file-name not one of (:name :documentation :body :most-recent-result :expected-result-type :tags)"")
-Warning: Optimization failure for make-ert-test: Handler: make-ert-test--cmacro
-(error ""Keyword argument :file-name not one of (:name :documentation :body :most-recent-result :expected-result-type :tags)"")
-Wrote /Users/7stud/.emacs.d/elpa/slime-2.29.1/contrib/test/slime-repl-tests.elc
-Checking /Users/7stud/.emacs.d/elpa/slime-2.29.1/contrib/test...
-Done (Total of 57 files compiled, 2 skipped in 4 directories)
-ad-handle-definition: ‘slime-note.message’ got redefined
-Package ‘slime’ installed.
-Operation [ Install 1 ] finished
-
-","1. You were nearly there:
-
-However, when I type M-x slime, I get [No match]
-
-you are missing
-(require 'slime)
-
-You must add this to your ~/.emacs (or ""~/.emacs.d/init.el""), and you can evaluate it in your current session with M-: or M-x eval-expression<Ret> (require 'slime)<Ret>.
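-A minimal ~/.emacs sketch along those lines (assuming slime is already installed via package-install as you did; the slime-setup call and the slime-fancy contrib are optional extras):
-(require 'package)
-(package-initialize)
-
-(require 'slime)
-(setq inferior-lisp-program ""sbcl"")
-(slime-setup '(slime-fancy))   ; optional: load the commonly used contribs
-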
-
-2. I was never able to get slime to work when I followed the slime install instructions.  I decided to look around for another emacs distribution and try something else.  I visited the emacs4cl repo, and they have a .emacs file you can download.  I decided to try that .emacs file with my current emacs install.
-First, I moved the current ~/.emacs file and the ~/.emacs.d/ directory to ~/.xemacs and ~/.xemacs.d/, then I put the .emacs file from emacs4cl in my home directory: ~/.emacs.  After that, I started up emacs, and once again there were lots of warnings from the various packages that were installed, but when I typed M-x slime, another window opened up:
-
-Here's the .emacs file from emacs4cl:
-;; Customize user interface.
-(when (display-graphic-p)
-  (tool-bar-mode 0)
-  (scroll-bar-mode 0))
-(setq inhibit-startup-screen t)
-
-;; Dark theme.
-(load-theme 'wombat)
-(set-face-background 'default ""#111"")
-
-;; Use spaces, not tabs, for indentation.
-(setq-default indent-tabs-mode nil)
-
-;; Highlight matching pairs of parentheses.
-(setq show-paren-delay 0)
-(show-paren-mode)
-
-;; Workaround for https://debbugs.gnu.org/34341 in GNU Emacs <= 26.3.
-(when (and (version< emacs-version ""26.3"") (>= libgnutls-version 30603))
-  (setq gnutls-algorithm-priority ""NORMAL:-VERS-TLS1.3""))
-
-;; Write customizations to a separate file instead of this file.
-(setq custom-file (expand-file-name ""custom.el"" user-emacs-directory))
-(load custom-file t)
-
-;; Enable installation of packages from MELPA.
-(require 'package)
-(add-to-list 'package-archives '(""melpa"" . ""https://melpa.org/packages/"") t)
-(package-initialize)
-(unless package-archive-contents
-  (package-refresh-contents))
-
-;; Install packages.
-(dolist (package '(slime paredit rainbow-delimiters))
-  (unless (package-installed-p package)
-    (package-install package)))
-
-;; Configure SBCL as the Lisp program for SLIME.
-(add-to-list 'exec-path ""/usr/local/bin"")
-(setq inferior-lisp-program ""sbcl"")
-
-;; Enable Paredit.
-(add-hook 'emacs-lisp-mode-hook 'enable-paredit-mode)
-(add-hook 'eval-expression-minibuffer-setup-hook 'enable-paredit-mode)
-(add-hook 'ielm-mode-hook 'enable-paredit-mode)
-(add-hook 'lisp-interaction-mode-hook 'enable-paredit-mode)
-(add-hook 'lisp-mode-hook 'enable-paredit-mode)
-(add-hook 'slime-repl-mode-hook 'enable-paredit-mode)
-(require 'paredit)
-(defun override-slime-del-key ()
-  (define-key slime-repl-mode-map
-    (read-kbd-macro paredit-backward-delete-key) nil))
-(add-hook 'slime-repl-mode-hook 'override-slime-del-key)
-
-;; Enable Rainbow Delimiters.
-(add-hook 'emacs-lisp-mode-hook 'rainbow-delimiters-mode)
-(add-hook 'ielm-mode-hook 'rainbow-delimiters-mode)
-(add-hook 'lisp-interaction-mode-hook 'rainbow-delimiters-mode)
-(add-hook 'lisp-mode-hook 'rainbow-delimiters-mode)
-(add-hook 'slime-repl-mode-hook 'rainbow-delimiters-mode)
-
-;; Customize Rainbow Delimiters.
-(require 'rainbow-delimiters)
-(set-face-foreground 'rainbow-delimiters-depth-1-face ""#c66"")  ; red
-(set-face-foreground 'rainbow-delimiters-depth-2-face ""#6c6"")  ; green
-(set-face-foreground 'rainbow-delimiters-depth-3-face ""#69f"")  ; blue
-(set-face-foreground 'rainbow-delimiters-depth-4-face ""#cc6"")  ; yellow
-(set-face-foreground 'rainbow-delimiters-depth-5-face ""#6cc"")  ; cyan
-(set-face-foreground 'rainbow-delimiters-depth-6-face ""#c6c"")  ; magenta
-(set-face-foreground 'rainbow-delimiters-depth-7-face ""#ccc"")  ; light gray
-(set-face-foreground 'rainbow-delimiters-depth-8-face ""#999"")  ; medium gray
-(set-face-foreground 'rainbow-delimiters-depth-9-face ""#666"")  ; dark gray 
-
-I opened a hello world .lisp file in the top window, C-x C-f.  Then I put the cursor to the right of the last parenthesis of a function definition, then I hit C-x C-e, and the name of the function appeared in the echo line below the slime window. But I couldn't execute the function in the slime repl: I got an error saying the function name wasn't recognized. I also noticed the rainbow parentheses weren't working.  Hold on...I just quit emacs and relaunched it, and now C-x C-e works as expected: I see the function name returned on the echo line below the slime window, then I can execute the function in the slime repl...and now I can see the rainbow parentheses.
-Also, on my first try with the new .emacs file, when I put the cursor in the middle of a function and typed C-c C-c, the echo window said something like ""unrecognized command"", and now that works just like C-x C-e.  Hallelujah!
-Here are the differences in the directories:
-~/.emacs.d% ls
-auto-save-list custom.el      eln-cache      elpa
-~/.emacs.d% cd elpa
-~/.emacs.d/elpa  ls
-archives                         paredit-20221127.1452
-gnupg                            rainbow-delimiters-20210515.1254
-macrostep-20230813.2123          slime-20240125.1336
-
-~/.xemacs.d% ls
-auto-save-list eln-cache      elpa
-~/.xemacs.d% cd elpa
-~/.xemacs.d/elpa%  ls
-archives               macrostep-0.9.2.signed slime-2.29.1
-gnupg                  paredit-26             slime-2.29.1.signed
-macrostep-0.9.2        paredit-26.signed
-
-The directories look different, but I don't know what the significance of that is.
-Some other things I did:
-
-To increase font size: C-x C-+  (command + stopped working).
-Or, permanently set the font size in ~/.emacs:
-(set-face-attribute 'default nil :font ""Menlo"" :height 180)
-;; The ""height"" is 100 times the font size.
-
-
-To change meta key from command to option:  In .emacs I added:
- (setq mac-command-key-is-meta nil) ; set cmd-key to nil   
- (setq mac-option-modifier 'meta) ; set option-key to meta
-
-
-
-",Slime
-"I'm trying to make a list of 3d coordinates of a sphere vertices, starting with ((0 0 1) ...) like this:
-(defvar spherelatamount 7)
-(defvar spherelonamount 8)
-(defparameter sphereverticeslist
-  (make-list (+ 2 (* (- spherelatamount 2) spherelonamount))))
-(setf (elt sphereverticeslist 0) '(0 0 1))
-
-Trying to add the next point:
-(setf (elt sphereverticeslist 1)
-      '(0 (sin (/ pi 6)) (cos (/ pi 6))))
-
-this gives the result:
-((0 0 1) (sin (/ pi 6)) (cos (/ pi 6)) ...)
-
-while I need:
-((0 0 1) (0 0.5 0.866) ...)
-
-i.e. evaluated sin and cos. How do I achieve that? Thanks.
-","1. Quoting the list prevents evaluation of everything in the list, so you just insert the literal values.
-Call list to create a list with evaluated values.
-(setf (elt sphereverticeslist 1) (list 0 (sin (/ pi 6)) (cos (/ pi 6))))
-
-If you have a mixture of literal and evaluated values, you can use backquote/comma.
-(setf (elt sphereverticeslist 1) `(0 ,(sin (/ pi 6)) ,(cos (/ pi 6))))
-
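-Going a step further, the whole vertex list can be generated in one go instead of setf-ing individual elements. This is only a sketch: it assumes a latitude/longitude parameterization chosen so that the first ring reproduces your second point (0 0.5 0.866); adjust the angle stepping if your layout differs.
-(defparameter sphereverticeslist
-  (append
-   (list (list 0 0 1))                              ; north pole
-   (loop for i from 1 to (- spherelatamount 2)
-         for theta = (/ (* pi i) (- spherelatamount 1))
-         append (loop for j from 0 below spherelonamount
-                      for phi = (/ (* 2 pi j) spherelonamount)
-                      collect (list (* (sin theta) (sin phi))
-                                    (* (sin theta) (cos phi))
-                                    (cos theta))))
-   (list (list 0 0 -1))))                           ; south pole
-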
-
-2. Instead of using an inefficient 3-element list, consider using a structure to represent the coordinates. This stores the values more compactly, gives you O(1) accessors and setters instead of a list's O(N), and lets you use symbolic names for the components of the coordinates, leading to more readable code. Something like:
-(defstruct sphere (distance 0.0 :type float) ; r
-                  (polar-angle 0.0 :type float) ; θ
-                  (azimuth 0.0 :type float)) ; φ
-
-;; Make a list of sphere coordinates
-(defparameter sphere-vertices-list
-  (list (make-sphere :distance 0.0 :polar-angle 0.0 :azimuth 1.0)
-        (make-sphere :distance 0.0 :polar-angle (sin (/ pi 6)) :azimuth (cos (/ pi 6)))))
-
-;; Set and get a value
-(setf (sphere-distance (first sphere-vertices-list)) 0.5)
-(print (sphere-distance (first sphere-vertices-list)))
-;; etc.
-
-
-3. You need to evaluate the calls:
-(defvar spherelatamount 7)
-(defvar spherelonamount 8)
-(defparameter sphereverticeslist
-   (make-list (+ 2 (* (- spherelatamount 2) spherelonamount))))
-(setf (elt sphereverticeslist 0) '(0 0 1))
-(setf (elt sphereverticeslist 1)
-      `(0 ,(sin (/ pi 6)) ,(cos (/ pi 6))))
-
-",Slime
-"When I debug in Slime and inspect the value of a floating point variable, I see something like
-6.8998337e-4
-
-However, I find that very hard to read and would prefer
-0.00068998337
-
-How can I achieve that?
-","1. First:
-CL-USER> (format nil ""~F"" 6.8998337e-4)
-""0.00068998337""
-CL-USER> (format nil ""~E"" 6.8998337e-4)
-""6.8998337e-4""
-
-In slime/sly, when you Inspect (C-c I) the value 6.8998337e-4, you get:
-#<SINGLE-FLOAT {3A34E00000000019}>
---------------------
-Scientific: 6.8998337e-4
-Decoded: 1.0 * 0.70654297 * 2^-10
-Digits: 24
-Precision: 24
-
-The ""Scientific"" value is formatted in swank/slynk in contrib/swank-fancy-inspector.lisp or contrib/slynk-fancy-inspector.lisp :
-(defmethod emacs-inspect ((f float))
-  (cond
-    ((float-nan-p f)
-     ;; try NaN first because the next tests may perform operations
-     ;; that are undefined for NaNs.
-     (list ""Not a Number.""))
-    ((not (float-infinity-p f))
-     (multiple-value-bind (significand exponent sign) (decode-float f)
-       (append
-    `(""Scientific: "" ,(format nil ""~E"" f) (:newline)
-             ""Decoded: ""
-             (:value ,sign) "" * ""
-             (:value ,significand) "" * ""
-             (:value ,(float-radix f)) ""^""
-             (:value ,exponent) (:newline))
-    (label-value-line ""Digits"" (float-digits f))
-    (label-value-line ""Precision"" (float-precision f)))))
-    ((> f 0)
-     (list ""Positive infinity.""))
-    ((< f 0)
-     (list ""Negative infinity.""))))
-
-You could modify the emacs-inspect method and change (format nil ""~E"" f) to (format nil ""~F"" f), or you could shadow emacs-inspect with an emacs-inspect :around method that modifies the behavior to use ""~F"".
-",Slime
-"We are running into OOM when we run large number of SQL queries. We are using Apache Ignite 2.15. Pretty standard query code like below,
-SqlFieldsQuery sqlQuery = new SqlFieldsQuery(query);
-if (args != null) {
-    sqlQuery.setArgs(args);
-}
-FieldsQueryCursor<List<?>> cursor = cache.query(sqlQuery);
-
-Heap analysis indicated that ""org.apache.ignite.internal.processors.query.RunningQueryManager"" has a map which maintains references to all the running queries, but it does not seem to clean up the map after query execution.
-While trying to look this up further, we found references to the same issue but no activity:
-https://issues.apache.org/jira/browse/IGNITE-13130
-Apache Ignite : Ignite on-heap cache occupying too much memory in heap. Causing the application to throw OutOfMemory exception
-Any help appreciated.
-","1. On further debugging this as i was creating a reproducer. Found a deep hidden reference to cursor that wasn't closed. On closing that reference this issue was solved. Thanks.
-",Apache Ignite
-"I've been working on a fairly simple proof-of-concept (POC) project using Apache Ignite (started under v2.8, now v2.16.0). Using Java 11, I have 3 simple ""server"" ignite processes running inside Java processes. I bring up all these processes on the same machine:
-
-Process A: Takes external input, uses a service from Process B, passes data to Process B.
-Process B: Publishes a service (used by Process A), some data processing, passes data on to Process C.
-Process C: takes the data from Process B, does some processing, can call a service published by Process A.
-
-My POC works under Java 11, but as the future is always moving forward, I wanted to upgrade to Java 17. So I upgraded from:
-zulu11.37.17-ca-jdk11.0.6-win_x64
-to
-zulu17.48.15-ca-jdk17.0.10-win_x64
-I've consulted the following already regarding using Ignite with Java 11+
-https://ignite.apache.org/docs/latest/quick-start/java#running-ignite-with-java-11-or-later
-I added the JVM arguments to each of the processes (the numerous --add-opens).  After specifying the new JDK and adding the documented JVM parameters, each process starts up as expected.
-HOWEVER, the issue is that each process no longer ""clusters"" under Java 17. Process A cannot find the service published by Process B. Process A says it is in a cluster by itself, despite Process B and C also running (they are all hard-coded to join the same cluster). I can also change the JDK back to the 11 version, and the clustering functions as expected (even if I leave in the --add-opens JVM options on JDK 11).
-This seems like a rather fundamental issue... Did I miss something obvious here switching between 11/17 that might explain this behavior? Thanks.
-IgniteConfiguration.toString() as requested:
-IgniteConfiguration [igniteInstanceName=Ignite POC, pubPoolSize=8, svcPoolSize=null, callbackPoolSize=8, stripedPoolSize=8, sysPoolSize=8, mgmtPoolSize=4, dataStreamerPoolSize=8, utilityCachePoolSize=8, utilityCacheKeepAliveTime=60000, p2pPoolSize=2, qryPoolSize=8, buildIdxPoolSize=2, igniteHome=null, igniteWorkDir=null, mbeanSrv=null, nodeId=null, marsh=null, marshLocJobs=false, p2pEnabled=true, netTimeout=5000, netCompressionLevel=1, sndRetryDelay=1000, sndRetryCnt=3, metricsHistSize=10000, metricsUpdateFreq=2000, metricsExpTime=9223372036854775807, discoSpi=null, segPlc=USE_FAILURE_HANDLER, segResolveAttempts=2, waitForSegOnStart=true, allResolversPassReq=true, segChkFreq=10000, commSpi=null, evtSpi=null, colSpi=null, deploySpi=null, indexingSpi=null, addrRslvr=null, encryptionSpi=null, tracingSpi=null, clientMode=false, rebalanceThreadPoolSize=2, rebalanceTimeout=10000, rebalanceBatchesPrefetchCnt=3, rebalanceThrottle=0, rebalanceBatchSize=524288, txCfg=TransactionConfiguration [txSerEnabled=false, dfltIsolation=REPEATABLE_READ, dfltConcurrency=PESSIMISTIC, dfltTxTimeout=0, txTimeoutOnPartitionMapExchange=0, pessimisticTxLogSize=0, pessimisticTxLogLinger=10000, tmLookupClsName=null, txManagerFactory=null, useJtaSync=false], cacheSanityCheckEnabled=true, discoStartupDelay=60000, deployMode=SHARED, p2pMissedCacheSize=100, locHost=null, timeSrvPortBase=31100, timeSrvPortRange=100, failureDetectionTimeout=10000, sysWorkerBlockedTimeout=null, clientFailureDetectionTimeout=30000, metricsLogFreq=0, connectorCfg=ConnectorConfiguration [jettyPath=null, host=null, port=11211, noDelay=true, directBuf=false, sndBufSize=32768, rcvBufSize=32768, idleQryCurTimeout=600000, idleQryCurCheckFreq=60000, sndQueueLimit=0, selectorCnt=4, idleTimeout=7000, sslEnabled=false, sslClientAuth=false, sslFactory=null, portRange=100, threadPoolSize=8, msgInterceptor=null], odbcCfg=null, warmupClos=null, atomicCfg=AtomicConfiguration [seqReserveSize=1000, cacheMode=PARTITIONED, backups=1, aff=null, grpName=null], classLdr=null, sslCtxFactory=null, platformCfg=null, binaryCfg=null, memCfg=null, pstCfg=null, dsCfg=null, snapshotPath=snapshots, snapshotThreadPoolSize=4, activeOnStart=true, activeOnStartPropSetFlag=false, autoActivation=true, autoActivationPropSetFlag=false, clusterStateOnStart=null, sqlConnCfg=null, cliConnCfg=ClientConnectorConfiguration [host=null, port=10800, portRange=100, sockSndBufSize=0, sockRcvBufSize=0, tcpNoDelay=true, maxOpenCursorsPerConn=128, threadPoolSize=8, selectorCnt=4, idleTimeout=0, handshakeTimeout=10000, jdbcEnabled=true, odbcEnabled=true, thinCliEnabled=true, sslEnabled=false, useIgniteSslCtxFactory=true, sslClientAuth=false, sslCtxFactory=null, thinCliCfg=ThinClientConfiguration [maxActiveTxPerConn=100, maxActiveComputeTasksPerConn=0, sendServerExcStackTraceToClient=false], sesOutboundMsgQueueLimit=0], mvccVacuumThreadCnt=2, mvccVacuumFreq=5000, authEnabled=false, failureHnd=null, commFailureRslvr=null, sqlCfg=SqlConfiguration [longQryWarnTimeout=3000, dfltQryTimeout=0, sqlQryHistSize=1000, validationEnabled=false], asyncContinuationExecutor=null]
-
-","1. Looks like the culprit was needing to set the -Djava.net.preferIPv4Stack=true JVM option. This option didn't seem to be needed under Java 11, seems to be needed under Java 17. Go figure...
-",Apache Ignite
-"I want to implement Apache ignite as in memory database so that the data retrieval will be faster. I've tried implementing the Key-Value API of Apache Ignite. But as my requirement got changed, I'm trying to implement using SQL API.
-This is my server code:
-public class ImdbApplication {
-
-    public static void main(String[] args) {
-
-        IgniteConfiguration cfg = new IgniteConfiguration();
-        cfg.setIgniteInstanceName(""Instance"");
-        cfg.setConsistentId(""NodePoswavier"");
-
-        TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
-
-        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
-        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
-        ipFinder.setAddresses(Arrays.asList(""127.0.0.1:47500..47509""));
-        discoSpi.setIpFinder(ipFinder);
-        cfg.setDiscoverySpi(discoSpi);
-
-        Ignition.start();
-
-        CacheConfiguration<Long, Person> cacheCfg = new CacheConfiguration<>(""personCache"");
-
-        QueryEntity queryEntity = new QueryEntity();
-        queryEntity.setKeyFieldName(""id"").setKeyType(Long.class.getName()).setValueType(Person.class.getName());
-
-        LinkedHashMap<String, String> fields = new LinkedHashMap<>();
-        fields.put(""id"", ""java.lang.Long"");
-        fields.put(""name"", ""java.lang.String"");
-        fields.put(""salary"", ""java.lang.Float"");
-
-        queryEntity.setFields(fields);
-
-        queryEntity.setIndexes(Arrays.asList(
-                new QueryIndex(""name""),
-                new QueryIndex(Arrays.asList(""id"", ""salary""), QueryIndexType.SORTED)
-        ));
-
-        cacheCfg.setQueryEntities(Arrays.asList(queryEntity));
-
-        // Get or create cache
-        Ignite ignite = Ignition.ignite();
-        IgniteCache<Long, Person> cache = ignite.getOrCreateCache(cacheCfg);
-
-
-    }
-}
-
-My client side code is:
-public class ImdbClientApplication {
-
-    public static void main(String[] args) {
-        // Register the Ignite JDBC driver.
-        try {
-            Class.forName(""org.apache.ignite.IgniteJdbcDriver"");
-        } catch (ClassNotFoundException e) {
-            e.printStackTrace();
-            return;
-        }
-
-        Ignite ignite = Ignition.start();
-
-        IgniteCache<Long, Person> cache = ignite.cache(""personCache"");
-
-        // Open a JDBC connection to the source database.
-        try (Connection sourceConn = DriverManager.getConnection(""jdbc:mysql://127.0.0.1:3306/person"", ""root"", ""root"")) {
-            // Execute a SELECT query to fetch data from the source database.
-            PreparedStatement sourceStmt = sourceConn.prepareStatement(""SELECT id, name, salary FROM person"");
-            ResultSet rs = sourceStmt.executeQuery();
-
-            while (rs.next()) {
-                long id = rs.getLong(""id"");
-                String name = rs.getString(""name"");
-                int salary = rs.getInt(""salary"");
-
-                cache.query(new SqlFieldsQuery(""INSERT INTO personCache(id, firstName, lastName) VALUES(?, ?, ?)"")
-                                .setArgs(id, name, salary))
-                        .getAll();
-            }
-        } catch (Exception e) {
-            e.printStackTrace();
-        }
-    }
-}
-
-When I run this client, the SELECT statement works fine, but the INSERT statement throws:
-javax.cache.CacheException: Failed to parse query. Table ""PERSONCACHE"" not found; SQL statement:
-
-What am I doing wrong?
-","1. You almost answer it yourself. Compare your queries:
-        PreparedStatement sourceStmt = sourceConn.prepareStatement(""SELECT id, name, salary FROM person"");
-        
-        cache.query(new SqlFieldsQuery(""INSERT INTO personCache(id, firstName, lastName) VALUES(?, ?, ?)"")
-                            .setArgs(id, name, salary))
-
-Note that you use a different table name. In the SELECT, you use person (which is correct) and in the INSERT you use personCache (which isn't). Update your second query to use the person table and it should work.
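-Putting it together, the insert in the client would look roughly like this (a sketch; note that the column list also has to match the fields declared in your QueryEntity, i.e. id, name, salary rather than firstName/lastName, and the SQL table is named after the value type Person):
-cache.query(new SqlFieldsQuery(
-        ""INSERT INTO Person (id, name, salary) VALUES (?, ?, ?)"")
-        .setArgs(id, name, salary))
-     .getAll();
-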
-",Apache Ignite
-"I am attempting to install BigchainDB Version 2.2.2 on my Ubuntu 22.04 machine.
-I entered the code
-sudo pip3 install bigchaindb==2.2.2
-
-The installation went part of the way before exiting and reporting the following (sorry, there is a lot of it):
-Collecting gevent==20.6.2 (from bigchaindb-abci==1.0.5->bigchaindb==2.2.2)
-  Using cached gevent-20.6.2.tar.gz (5.8 MB)
-  Installing build dependencies ... done
-  Getting requirements to build wheel ... error
-  error: subprocess-exited-with-error
-  
-  × Getting requirements to build wheel did not run successfully.
-  │ exit code: 1
-  ╰─> [60 lines of output]
-
-      warning: src/gevent/_gevent_cgreenlet.pxd:112:33: Declarations should not be declared inline.
-      
-      Error compiling Cython file:
-      ------------------------------------------------------------
-      ...
-      cdef load_traceback
-      cdef Waiter
-      cdef wait
-      cdef iwait
-      cdef reraise
-      cpdef GEVENT_CONFIG
-            ^
-      ------------------------------------------------------------
-      
-  
-  note: This error originates from a subprocess, and is likely not a problem with pip.
-error: subprocess-exited-with-error
-
-× Getting requirements to build wheel did not run successfully.
-│ exit code: 1
-╰─> See above for output.
-
-note: This error originates from a subprocess, and is likely not a problem with pip.
-
-
-I can see that the error is related to gevent version 20.6.2.  The gevent website shows that the latest version is 22.10.2, so I installed that version with:
-pip install gevent==22.10.2
-
-This completed successfully.  I then rebooted, thinking that it might help!  haha
-I restarted the bigchaindb installation and it failed at the same point.  I am guessing that the python script running the install is not checking for dependencies before it just blurts everything onto the machine!!!  Probably foolish of me to think that I could avoid the error like that anyway.
-Many of the 60 lines of output reported this:
-The 'DEF' statement is deprecated and will be removed in a future Cython version. Consider using global variables, constants, and in-place literals instead. See https://github.com/cython/cython/issues/4310
-
-So, my questions are these:
-
-Has anyone else experienced the same error?
-If so, how was it resolved, please?
-If not, is somebody knowledgeable willing to work with me to resolve the issue, please?
-
-Thank you
-Graham
-It would be great if BigchainDB did not use old versions of libraries; I expect they would update their script when new libraries are available and tested.
-","1. This is a known compactibility issue, to solve this, here are the dependencies you need.
-markupsafe - v2.0.1 
-itsdangerous -  v2.0.1 
-werkzeug - v2.0.1 
-Jinja2 - v3.0.3 
-gevent - v20.6.2 
-greenlet - v0.4.16
-If you encounter any issues installing PyNaCl, use the following (in fact this method works perfectly for me, so I'd suggest you try installing through libsodium with SODIUM_INSTALL):
-wget https://download.libsodium.org/libsodium/releases/libsodium-1.0.18-stable.tar.gz
-tar -xvf libsodium-1.0.18-stable.tar.gz libsodium-stable
-cd libsodium-stable/
-./configure
-make -j
-make install
-SODIUM_INSTALL=system pip3 install PyNaCl==1.1.2 bigchaindb werkzeug==2.0.1 markupsafe==2.0.1 itsdangerous==2.0.1 Jinja2==3.0.3 gevent==20.6.2 greenlet==0.4.16
-
-Finally, Run:
-bigchaindb configure
-bigchaindb -y drop
-bigchaindb init
-bigchaindb start
-
-",BigchainDB
-"I am new to bigchaindb and i have a question. In case a single company wants to store data as asset on bigchaindb and share it with other companies, what advantages would they get from bigchaindb over mongodb?
-Decentralization — Since the company in question would be owning all the bigchaindb nodes, the system would not be decentralized.
-Immutability — They can implement that using code.
-Transferring assets — This can also be done using MongoDB and code.
-","1. BigchainDB's advantage is decentralization. If a single company owns all nodes, you might as well use a single server, there's not much of a difference (unless you want independence of multiple locations within the organization). You should only use BigchainDB - or blockchain in general for that matter - if you're dealing with multiple semi-trusted participants who are trying to write to a shared database while ensuring there transparency, auditability and integrity.
-So in your use case (a single organization storing all data to share with others): no, there is no clear advantage to using BigchainDB over a custom MongoDB implementation.
-Edit: I just saw that this question was answered by Troy McConaghy (creator of BigchainDB) on Medium. Since his answer differs slightly, I'll include it here:
-
-
-Decentralization isn’t an all-or-nothing property, it’s a continuum. Even if a single company runs all the nodes, they can have each one operated by a different employee, in a different business unit, in a different country, for example.
-There are two general approaches to adding immutability to MongoDB using code. One is to try and do it in the application layer. The problem with that is that the MongoDB database is one logical database, so anyone who manages to gain admin-like privileges on one node can change or delete records across the entire database: there is a single point of failure, making its “decentralization” dubious. (In BigchainDB, each node has an independent MongoDB database, so corrupting one doesn’t affect the others.) The other way would be to fork MongoDB to make it so it can’t change or delete existing records. Go ahead, it will take hundreds of coder hours to do that and in the end all you have is something similar to Datomic or HBase. Why not just use one of those instead? Of course, those still have the central admin problem, so you’d probably want to fork…
-Yes, almost any database can be used to track and manage asset transfers. You would have to add that thing where only the true owner of an asset can make the transfer happen (by providing a valid cryptographic signature), but that’s totally doable in the application-level code (or maybe in the database’s internal scripting language). It all takes time and BigchainDB has that out of the box.
-
-
-2. This is not decentralized:
-""RethinkDB has an “admin” user which can’t be deleted and which can make big changes to the database, such as dropping a table. Right now, that’s a big security vulnerability""
-Source:
-https://bigchaindb.readthedocs.io/projects/server/en/v0.5.1/topic-guides/decentralized.html
-If your cryptocoin project runs on BigChainDB and someday the government doesn't like cryptocoins, it can force the companies supporting BigChainDB to erase all your data from BigChainDB.
-If some project is backed by a company and this company is controlled by government rules, this project is not decentralized. Period!
-",BigchainDB
-"OS: Ubuntu 18.04.4 LTS
-Bigchaindb ver: 2.0.0
-Tendermint ver: 0.31.5-d2eab536
-Setup: 1 node bigchaindb+tendermint - running as a docker container
-Problem: Bigchaindb starts fine and tendermint connects successfully to it.  However, when transactions are committed, the commit fails with errors logged in bigchaindb.log saying it is unable to connect to localhost:26657.  The netstat command doesn't show any process listening on 26657.  Moreover, tendermint.out.log shows:
-E[2020-05-03|19:19:48.586] abci.socketClient failed to connect to tcp://127.0.0.1:26658.  Retrying... module=abci-client connection=query err=""dial tcp 127.0.0.1:26658: conn
-ect: connection refused""
-However, as shown in the netstat output below, the port is in LISTEN mode and bigchaindb.log shows tendermint as connected:
-[2020-05-03 19:19:51] [INFO] (abci.app)  ABCIServer started on port: 26658 (MainProcess - pid: 35)
-[2020-05-03 19:19:51] [INFO] (abci.app)  ... connection from Tendermint: 127.0.0.1:59392 ... (MainProcess - pid: 35)
-[2020-05-03 19:19:51] [INFO] (abci.app)  ... connection from Tendermint: 127.0.0.1:59394 ... (MainProcess - pid: 35)
-[2020-05-03 19:19:51] [INFO] (abci.app)  ... connection from Tendermint: 127.0.0.1:59396 ... (MainProcess - pid: 35)
-[2020-05-03 19:19:51] [INFO] (bigchaindb.core) Tendermint version: 0.31.5-d2eab536 (MainProcess - pid: 35)
-
-Output of netstat: 
-bash-5.0# netstat -anp
-Active Internet connections (servers and established)
-Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
-tcp        0      0 0.0.0.0:2812            0.0.0.0:*               LISTEN      32/monit
-tcp        0      0 0.0.0.0:9984            0.0.0.0:*               LISTEN      48/gunicorn: master
-tcp        0      0 0.0.0.0:9985            0.0.0.0:*               LISTEN      54/bigchaindb_ws
-tcp        0      0 0.0.0.0:26658           0.0.0.0:*               LISTEN      35/bigchaindb
-tcp        0      0 127.0.0.1:26658         127.0.0.1:59394         ESTABLISHED 35/bigchaindb
-tcp        0      0 127.0.0.1:59394         127.0.0.1:26658         ESTABLISHED 37/tendermint
-tcp        0      0 127.0.0.1:26658         127.0.0.1:59392         ESTABLISHED 35/bigchaindb
-tcp        0      0 172.17.0.2:33424        172.31.28.97:27017      ESTABLISHED 35/bigchaindb
-tcp        0      0 172.17.0.2:33426        172.31.28.97:27017      ESTABLISHED 35/bigchaindb
-tcp        0      0 127.0.0.1:59392         127.0.0.1:26658         ESTABLISHED 37/tendermint
-tcp        0      6 127.0.0.1:26658         127.0.0.1:59396         ESTABLISHED 35/bigchaindb
-tcp        0      0 172.17.0.2:34490        172.31.28.97:27017      ESTABLISHED 53/gunicorn: worker
-tcp        0      0 127.0.0.1:59396         127.0.0.1:26658         ESTABLISHED 37/tendermint
-tcp        0      0 172.17.0.2:34488        172.31.28.97:27017      ESTABLISHED 53/gunicorn: worker
-tcp        0      0 :::2812                 :::*                    LISTEN      32/monit
-Active UNIX domain sockets (servers and established)
-Proto RefCnt Flags       Type       State         I-Node PID/Program name    Path
-unix  3      [ ]         STREAM     CONNECTED     3426959421 54/bigchaindb_ws    
-unix  3      [ ]         STREAM     CONNECTED     3426959420 54/bigchaindb_ws    
-
-The problem is random.  Sometimes it gets connected magically and the tendermint RPC process listens on port 26657.
-Stacktrace from bigchaindb.log: 
-[2020-05-03 19:29:12] [ERROR] (bigchaindb.web.server) Exception on /api/v1/transactions/ [POST] (bigchaindb_webapi - pid: 53)                                                
-Traceback (most recent call last):                                                                                                                                           
-  File ""/usr/lib/python3.7/site-packages/urllib3/connection.py"", line 157, in _new_conn                                                                                      
-    (self._dns_host, self.port), self.timeout, **extra_kw                                                                                                                    
-  File ""/usr/lib/python3.7/site-packages/urllib3/util/connection.py"", line 84, in create_connection                                                                          
-    raise err                                                                                                                                                                
-  File ""/usr/lib/python3.7/site-packages/urllib3/util/connection.py"", line 74, in create_connection                                                                          
-    sock.connect(sa)                                                                                                                                                         
-ConnectionRefusedError: [Errno 111] Connection refused                                                                                                                       
-
-During handling of the above exception, another exception occurred:                                                                                                          
-
-Traceback (most recent call last):                                                                                                                                           
-  File ""/usr/lib/python3.7/site-packages/urllib3/connectionpool.py"", line 672, in urlopen                                                                                    
-    chunked=chunked,                                                                                                                                                         
-  File ""/usr/lib/python3.7/site-packages/urllib3/connectionpool.py"", line 387, in _make_request                                                                              
-    conn.request(method, url, **httplib_request_kw)                                                                                                                          
-  File ""/usr/lib/python3.7/http/client.py"", line 1252, in request                                                                                                            
-    self._send_request(method, url, body, headers, encode_chunked)                                                                                                           
-  File ""/usr/lib/python3.7/http/client.py"", line 1298, in _send_request                                                                                                      
-    self.endheaders(body, encode_chunked=encode_chunked)                                                                                                                     
-  File ""/usr/lib/python3.7/http/client.py"", line 1247, in endheaders                                                                                                         
-    self._send_output(message_body, encode_chunked=encode_chunked)                                                                                                           
-  File ""/usr/lib/python3.7/http/client.py"", line 1026, in _send_output                                                                                                       
-    self.send(msg)                                                                                                                                                           
-  File ""/usr/lib/python3.7/http/client.py"", line 966, in send                                                                                                                
-    self.connect()                                                                                                                                                           
-  File ""/usr/lib/python3.7/site-packages/urllib3/connection.py"", line 184, in connect                                                                                        
-    conn = self._new_conn()                                                                                                                                                  
-  File ""/usr/lib/python3.7/site-packages/urllib3/connection.py"", line 169, in _new_conn                                                                                      
-    self, ""Failed to establish a new connection: %s"" % e                                                                                                                     
-urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f3fa4a31a90>: Failed to establish a new connection: [Errno 111] Connection refused    
-
-During handling of the above exception, another exception occurred:                                                                                                          
-
-Traceback (most recent call last):                                                                                                                                           
-  File ""/usr/lib/python3.7/site-packages/requests/adapters.py"", line 449, in send                                                                                            
-    timeout=timeout                                                                                                                                                          
-  File ""/usr/lib/python3.7/site-packages/urllib3/connectionpool.py"", line 720, in urlopen                                                                                    
-    method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]                                                                                                          
-  File ""/usr/lib/python3.7/site-packages/urllib3/util/retry.py"", line 436, in increment                                                                                      
-    raise MaxRetryError(_pool, url, error or ResponseError(cause))                                                                                                           
-urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=26657): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPC
-
-During handling of the above exception, another exception occurred:                                                                                                          
-
-Traceback (most recent call last):                                                                                                                                           
-  File ""/usr/lib/python3.7/site-packages/flask/app.py"", line 1949, in full_dispatch_request                                                                                  
-    rv = self.dispatch_request()                                                                                                                                             
-  File ""/usr/lib/python3.7/site-packages/flask/app.py"", line 1935, in dispatch_request                                                                                       
-    return self.view_functions[rule.endpoint](**req.view_args)                                                                                                               
-  File ""/usr/lib/python3.7/site-packages/flask_restful/__init__.py"", line 458, in wrapper                                                                                    
-    resp = resource(*args, **kwargs)                                                                                                                                         
-  File ""/usr/lib/python3.7/site-packages/flask/views.py"", line 89, in view                                                                                                   
-    return self.dispatch_request(*args, **kwargs)                                                                                                                            
-  File ""/usr/lib/python3.7/site-packages/flask_restful/__init__.py"", line 573, in dispatch_request                                                                           
-    resp = meth(*args, **kwargs)                                                                                                                                             
-  File ""/usr/src/app/bigchaindb/web/views/transactions.py"", line 99, in post                                                                                                 
-    status_code, message = bigchain.write_transaction(tx_obj, mode)                                                                                                          
-  File ""/usr/src/app/bigchaindb/lib.py"", line 100, in write_transaction                                                                                                      
-    response = self.post_transaction(transaction, mode)                                                                                                                      
-  File ""/usr/src/app/bigchaindb/lib.py"", line 95, in post_transaction                                                                                                        
-    return requests.post(self.endpoint, json=payload)                                                                                                                        
-  File ""/usr/lib/python3.7/site-packages/requests/api.py"", line 116, in post                                                                                                 
-    return request('post', url, data=data, json=json, **kwargs)                                                                                                              
-  File ""/usr/lib/python3.7/site-packages/requests/api.py"", line 60, in request                                                                                               
-    return session.request(method=method, url=url, **kwargs)                                                                                                                 
-  File ""/usr/lib/python3.7/site-packages/requests/sessions.py"", line 533, in request                                                                                         
-    resp = self.send(prep, **send_kwargs)                                                                                                                                    
-  File ""/usr/lib/python3.7/site-packages/requests/sessions.py"", line 646, in send                                                                                            
-    r = adapter.send(request, **kwargs)                                                                                                                                      
-  File ""/usr/lib/python3.7/site-packages/requests/adapters.py"", line 516, in send                                                                                            
-    raise ConnectionError(e, request=request)                                                                                                                                
-requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=26657): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HT
-
-How do I troubleshoot this?
-Internet references: 
-1) Issue: https://github.com/bigchaindb/bigchaindb-driver/issues/499 
-2) Tried steps mentioned in: https://github.com/bigchaindb/bigchaindb/issues/2581#issuecomment-455952861
-Thanks in advance.
-EDIT: Sample code used for testing (copied from https://github.com/bigchaindb/bigchaindb/issues/2581#issuecomment-455958416):
-Note: localhost:10001 is mapped to 0.0.0.0:9984 of the respective container.
-from bigchaindb_driver import BigchainDB
-from bigchaindb_driver.crypto import generate_keypair
-
-bdb_root_url = 'http://localhost:10001'
-bdb = BigchainDB(bdb_root_url)
-
-msg = 'Varadhan test message for bigchaindb'
-
-alice = generate_keypair()
-tx = bdb.transactions.prepare(
-    operation='CREATE',
-    signers=alice.public_key,
-    asset={'data': {'message': msg}})
-signed_tx = bdb.transactions.fulfill(
-    tx,
-    private_keys=alice.private_key)
-bdb.transactions.send_commit(signed_tx) # write
-block_height = bdb.blocks.get(txid=signed_tx['id'])
-block = bdb.blocks.retrieve(str(block_height)) # read
-print(block)
-
-EDIT-2: I tried running tendermint with log level set to debug and got: 
-bash-5.0# tendermint node --rpc.laddr ""tcp://0.0.0.0:26657"" --log_level=""*:debug""
-I[2020-05-06|18:40:05.136] Starting multiAppConn                        module=proxy impl=multiAppConn
-I[2020-05-06|18:40:05.137] Starting socketClient                        module=abci-client connection=query impl=socketClient
-I[2020-05-06|18:40:05.138] Starting socketClient                        module=abci-client connection=mempool impl=socketClient
-I[2020-05-06|18:40:05.139] Starting socketClient                        module=abci-client connection=consensus impl=socketClient
-I[2020-05-06|18:40:05.139] Starting EventBus                            module=events impl=EventBus
-I[2020-05-06|18:40:05.140] Starting PubSub                              module=pubsub impl=PubSub
-I[2020-05-06|18:40:05.151] Starting IndexerService                      module=txindex impl=IndexerService
-I[2020-05-06|18:40:05.231] ABCI Handshake App Info                      module=consensus height=355 hash= software-version= protocol-version=0
-I[2020-05-06|18:40:05.233] ABCI Replay Blocks                           module=consensus appHeight=355 storeHeight=2760414 stateHeight=2760413
-I[2020-05-06|18:40:05.233] Applying block                               module=consensus height=356
-I[2020-05-06|18:40:05.315] Executed block                               module=consensus height=356 validTxs=0 invalidTxs=0
-I[2020-05-06|18:40:05.395] Applying block                               module=consensus height=357
-I[2020-05-06|18:40:05.519] Executed block                               module=consensus height=357 validTxs=0 invalidTxs=0
-I[2020-05-06|18:40:05.599] Applying block                               module=consensus height=358
-I[2020-05-06|18:40:05.723] Executed block                               module=consensus height=358 validTxs=0 invalidTxs=0
-I[2020-05-06|18:40:05.803] Applying block                               module=consensus height=359
-I[2020-05-06|18:40:05.927] Executed block                               module=consensus height=359 validTxs=0 invalidTxs=0
-I[2020-05-06|18:40:06.007] Applying block                               module=consensus height=360
-I[2020-05-06|18:40:06.131] Executed block                               module=consensus height=360 validTxs=0 invalidTxs=0
-
-As can be seen, RPC server module is not loaded at all.  Is there more debugging options available to see why RPC server module is not loaded?
-","1. Try connecting 2 instances using tendermint genesis file and then commit transaction. Tendermint is not able to find peer for transaction. Also the tendermint port should be open to allow the transaction.
-
-2. Ensure the BigchainDB server is running before starting Tendermint.
-You can do the following if you installed BigchainDB using Docker:
-
-make stop
-make start
-make run
-
-or, if you don't mind losing your data:
-
-make stop
-make clean
-make start
-make run
-
-Also ensure you allow the required ports through the firewall.
-Open ports 26656, 26657, 26658, 9984 and 9985 in your machine's firewall, for example by running
-ufw allow 26656, ufw allow 26657, ufw allow 26658, ufw allow 9984 and ufw allow 9985 over SSH.
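-To narrow down which hop is failing, a quick connectivity check can also help. A minimal sketch in Python with requests; the hostnames and ports are assumptions based on the setup described above (9984 for the BigchainDB HTTP API, 26657 for the Tendermint RPC), so adjust them to your mapping:
-import requests
-
-# Hostnames/ports are assumptions - adjust to your container/port mapping.
-endpoints = {
-    'bigchaindb_http_api': 'http://localhost:9984/',
-    'tendermint_rpc': 'http://localhost:26657/status',
-}
-
-for name, url in endpoints.items():
-    try:
-        resp = requests.get(url, timeout=5)
-        print(f'{name}: reachable, HTTP {resp.status_code}')
-    except requests.exceptions.ConnectionError as exc:
-        # A failure here points at container networking / port mapping,
-        # not at the transaction payload itself.
-        print(f'{name}: NOT reachable ({exc})')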
-",BigchainDB
-"SELECT username
-FROM (VALUES(""user1""),(""user2""),(""user3"")) V(username)
-EXCEPT
-SELECT username
-FROM userdetails;
-
-In the query above, in Cassandra it doesn't work, and I get the following error:
-
-SyntaxException: line 2:5 no viable alternative at input '(' (SELECT usernameFROM [(]...)
-
-I tried researching, but nothing was found for a solution in Cassandra.
-","1. CQL which is Cassandra's query language (1) only has a subset of the SQL grammar and (2) does not include the EXCEPT clause. This is intentional because Cassandra is designed for extremely fast data retrieval at internet scale so reads are optimised for single-partition requests.
-You choose Cassandra because you have a scale problem and need to retrieve data in single-millisecond latency.
-In your case, you are performing a full table scan with SELECT username FROM userdetails -- an unbounded, unfiltered query. If you think about it, this query does not scale.
-Consider use cases with billions and billions of records (partitions) in a table that is distributed across hundreds of nodes in the cluster. If we allow an unbounded query to run, there is a good chance that it will never complete within an ""acceptable"" timeframe.
-Cassandra is optimised for OLTP workloads. When you need to run analytics (or analytics-like) workloads then consider using Apache Spark to execute the analytics queries. These are executed using the Spark Cassandra connector which optimises the query by breaking it up into small token ranges then distributes them across workers/executors which in turn request the sub-range reads from replica nodes in the Cassandra cluster. Cheers!
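-For illustration, the EXCEPT-style comparison from the question could be expressed in Spark rather than CQL. The sketch below assumes PySpark with the Spark Cassandra connector on the classpath and a keyspace called my_keyspace, so adjust the names to your environment:
-from pyspark.sql import SparkSession
-
-spark = (
-    SparkSession.builder
-    .appName('except-example')
-    .config('spark.cassandra.connection.host', '127.0.0.1')
-    .getOrCreate()
-)
-
-# Read the existing usernames through the Spark Cassandra connector.
-userdetails = (
-    spark.read.format('org.apache.spark.sql.cassandra')
-    .options(table='userdetails', keyspace='my_keyspace')
-    .load()
-    .select('username')
-)
-
-candidates = spark.createDataFrame(
-    [('user1',), ('user2',), ('user3',)], ['username']
-)
-
-# Usernames from the candidate list that do not exist in userdetails.
-missing = candidates.exceptAll(userdetails)
-missing.show()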
-",Cassandra
-"I'm trying to RELIABLY implement that pattern.
-For practical purposes, assume we have something similar to a twitter clone (in cassandra and nodejs).
-So, user A has 500k followers. When user A posts a tweet, we need to write to 500k feeds/timelines.
-Conceptually this is easy, fetch followers for user A, for each one: write tweet to his/her timeline. But this is not ""atomic"" (by atomic I mean that, at some point, all of the writes will succeed or none will).
-async function updateFeeds(userId, tweet) {
-
-  let followers = await fetchFollowersFor(userId)
-  for(let f of followers) {
-    await insertIntoFeed(f, tweet)
-  }
-
-}
-
-
-
-This seems like a DoS attack:
-
-async function updateFeeds(userId, tweet) {
-
-  let followers = await fetchFollowersFor(userId)
-  await Promise.all(followers.map(f => insertIntoFeed(f, tweet)))
-
-}
-
-
-
-How do I keep track of the process? How do I resume in case of failure? I'm not asking for a tutorial or anything like that, just point me in the right direction (keywords to search for) if you can please.
-","1. I would start by setting up a message broker (like Kafka), and write all the tweets into a topic.
-Then develop an agent that consumes the messages. For each message, the agent fetches a batch of users that are followers but do not yet have the tweet in their feed, and inserts the tweet into the feed of each user. When there are no more followers missing the tweet, the agent commits the message and processes the following messages. The reason for proceeding this way is resilience: if for any reason the agent is restarted, it will resume from where it left off.
-Configure the topic with a lot of partitions, in order to be able to scale up the processing of the messages. If you have ONE partition, you can have ONE agent to process the messages. If you have N partitions, you can have up to N agents to process the messages in parallel.
-To keep track of the overall processing, you can watch the ""lag"" into the message broker, which is the number of messages yet to be processed into the topic. If it is too high for too long, then you have to scale up the number of agents.
-If you want to keep track of the processing of a given message, the agent can query how many users are still to be processed before processing a batch of users. Then the agent can log this number, or expose it through its API, or expose it as a Prometheus metric...
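-As a rough illustration of such an agent, here is a minimal sketch using the kafka-python client with manual commits; the topic name, the helper functions and the batch size are assumptions rather than a reference implementation:
-import json
-from kafka import KafkaConsumer
-
-BATCH_SIZE = 500
-
-consumer = KafkaConsumer(
-    'tweets',
-    bootstrap_servers='localhost:9092',
-    group_id='feed-fanout',
-    enable_auto_commit=False,  # commit manually, only after the fan-out finished
-    value_deserializer=lambda raw: json.loads(raw.decode('utf-8')),
-)
-
-def followers_without_tweet(author_id, tweet_id, limit):
-    # Hypothetical helper: fetch a batch of followers whose feed lacks this tweet.
-    return []
-
-def insert_into_feeds(follower_ids, tweet):
-    # Hypothetical helper: write the tweet into the given followers' feeds.
-    pass
-
-for message in consumer:
-    tweet = message.value
-    while True:
-        batch = followers_without_tweet(tweet['author_id'], tweet['id'], BATCH_SIZE)
-        if not batch:
-            break
-        insert_into_feeds(batch, tweet)
-    # Committing only now means a restarted agent resumes from the unfinished tweet.
-    consumer.commit()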
-
-2. In addition to @christophe-quintard's answer, there is another trick to consider, which is to... not use the fan-out-on-write pattern here.
-Basically, instead of writing a huge number of tweets into 500k timelines, you create a separate abstraction for ""popular""/""hot"" accounts (which can be determined by the number of followers, for example; the number of tweets per day may be a consideration too) and build the timeline for their subscribers on the fly. You fetch the ""ordinary"" timeline and join it with all the ""popular"" ones for the user when it is requested; this way you can reduce the amount of data stored and processed. A small sketch of that read path follows below.
-For ""non-hot"" accounts you just do some batching plus eventually consistent processing, i.e. you post a message to some background processor that will do some kind of batch processing (there are several options/things to consider here).
-",Cassandra
-"I'm trying to create a partitioned table in Clickhouse with:
-CREATE TABLE IF NOT EXISTS new_table (
-  logging_day Date,
-  sensor_name String,
-  ts       DateTime,
-  ts_hour  DateTime MATERIALIZED toStartOfHour(ts)
-)
-ENGINE=MergeTree()
-PARTITION BY logging_day
-ORDER BY (sensor_name, ts_hour, ts)
-PRIMARY KEY (sensor_name, ts_hour);
-
-The table is partitioned by logging_day. From query performance perspective, does it still need be added to the ORDER BY or PRIMARY KEY lists like:
-ORDER BY (logging_day, sensor_name, ts_hour, ts)
-PRIMARY KEY (logging_day, sensor_name, ts_hour);
-
-Which way will make the filtered queries faster, for example SELECT * FROM new_table WHERE logging_day = '2024-04-20' AND sensor_name = 'foo' AND ts_hour = '2024-04-20 13:00:00', with or without the partition key in the lists? Thanks.
-","1. The ORDER BY key controls the order of data on disk. The PRIMARY KEY controls the sparse index.
-By default the PRIMARY KEY will be set to the ORDER BY key - it builds the sparse index and loads it into memory for the columns in the key.
-However, you can set the PRIMARY KEY to a prefix of the ORDER BY.
-When would you do this?
-When you want the compression benefit of specifying a full ORDER BY but don't expect to query on the latter entries of the key - you save the memory of loading the index for those columns. It's rare, but there are cases, such as ReplacingMergeTree and some access patterns.
-In general, you don't need it.
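-One way to answer the ""with or without the partition key"" question empirically is to look at the EXPLAIN output for your filtered query against both table definitions. A small sketch, assuming the clickhouse-connect Python client and the table from the question:
-import clickhouse_connect
-
-client = clickhouse_connect.get_client(host='localhost')
-
-sql = '''
-EXPLAIN indexes = 1
-SELECT *
-FROM new_table
-WHERE logging_day = '2024-04-20'
-  AND sensor_name = 'foo'
-  AND ts_hour = '2024-04-20 13:00:00'
-'''
-
-# The plan reports partition pruning (the MinMax/partition key) separately from
-# the primary-key index, so both table definitions can be compared directly.
-for (line,) in client.query(sql).result_rows:
-    print(line)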
-
-2. In most cases, partitioning by month is sufficient. Overly granular partitioning can lead to inefficiencies.
-CREATE TABLE IF NOT EXISTS new_table (
-logging_day Date,
-sensor_name String,
-ts DateTime,
-ts_hour DateTime MATERIALIZED toStartOfHour(ts)
-) ENGINE = MergeTree()
-PARTITION BY toYYYYMM(logging_day)  -- Coarser partitioning by month
-ORDER BY (logging_day, sensor_name, ts_hour)
-PRIMARY KEY (logging_day, sensor_name, ts_hour);
-
-This approach reduces the number of partitions and aligns with ClickHouse's best practices.
-This ordering still respects the finer-grained needs of your queries without overly complicating the partitioning.
-",ClickHouse
-"I work on ClickHouse proxy which would validate and modify query before sending it to ClickHouse. To implement validation logic and I need to know what columns user request. The problem comes in when user uses * in select's.
-For simple queries like select * from table I can expand * by myself and transform original query to e.g. select a,b,c from table.
-Knowing table and column names and can check against user's permissions if he can access those columns. But how to deal with complicated queries with many join, subqueries etc. I was hoping that maybe there is functionality in ClickHouse which would allow to dry-run query before executing it, then ClickHouse would parse, analyze, optimize original SQL and produce expand one without *.
-I couldn't find anything like that in ClickhHouse documentation. I use sqlglot library to transform AST. How can I resolve my problem?
-","1. I asked someone working in ClickHouse. You can run EXPLAIN <query> in the ClickHouse, but not aware of such parser independently outside ClickHouse. As this results require the table schema definition.
-",ClickHouse
-"I got Clickhouse v. 5.7.30 (24.1.5.).
-Is this version got equivalent of windows function:
- sum(...) over(partition by ... order by ...)
-
-I also tried this code:
-select  salon_name
-        , date
-        , runningAccumulate(sumState(revenue_fact_sum), salon_name) as revenue_fact_cumsum
-from    revenue_plan_fact_without_cumsum
-group by salon_name,
-        date
-order by salon_name,
-        date
-
-But result is strange:
-
-Original data:
-
-","1. Questions is not clear
-Yes ClickHouse 24.1 support window function which you describe above
-Look https://clickhouse.com/docs/en/sql-reference/window-functions
-
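-For reference, the window-function form of the query would look roughly like the sketch below (wrapped in the clickhouse-connect Python client; the table and column names are taken from the first query in the question, so treat it as illustrative):
-import clickhouse_connect
-
-client = clickhouse_connect.get_client(host='localhost')
-
-sql = '''
-SELECT
-    salon_name,
-    date,
-    sum(revenue_fact_sum) OVER (PARTITION BY salon_name ORDER BY date) AS revenue_fact_cumsum
-FROM revenue_plan_fact_without_cumsum
-ORDER BY salon_name, date
-'''
-
-for row in client.query(sql).result_rows:
-    print(row)
-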
-2. Then, to calculate the cumulative sum with:
-runningAccumulate(sumState(revenue_fact), salon_name)
-
-I must put all columns in the select, including the column 'revenue_fact'.
-In the group by I also need to add all columns to keep all values of revenue from the original table.
-select  salon_name
-        , date
-        , revenue_fact
-        , runningAccumulate(sumState(revenue_fact), salon_name) AS revenue_cumulative
-from    metrics_for_specializations_with_hour_plan
-group by salon_name
-        , date
-        , revenue_fact
-order by salon_name
-        , date
-
-If I need to add more columns to emulate ""partition by"", I need to add these columns to the select, the group by and inside runningAccumulate, like this:
-select  salon_name
-        , specialization_unification
-        , date
-        , revenue_fact
-        , runningAccumulate(sumState(revenue_fact), [salon_name, specialization_unification]) AS revenue_cumulative
-from    metrics_for_specializations_with_hour_plan
-group by salon_name
-        , specialization_unification
-        , date
-        , revenue_fact
-order by salon_name
-        , specialization_unification
-        , date
-
-This was not obvious to me after reading the documentation.
-",ClickHouse
-"If I have CockroachDb replicas in three Kubernetes Clusters in different regions and one of the clusters loses connection to the others, will a app in that cluster still be able to read from the CockroachDb in the same cluster?
-For example will the App in Region 1 still be able to read data from CockroachDb, even if it has lost connection to Region 2 and Region 3?
-Example Setup
-I have tried to find the answer online.
-","1. Even if a Kubernetes cluster in CockroachDB loses connection to others, it can still read locally stored data. CockroachDB ensures availability and consistency through its distributed architecture.
-",CockroachDB
-"I have a written a simple golang CRUD example connecting to cockroachdb using pgxpool/pgx.
-All the CRUD operations are exposed as REST api using Gin framework.
-By using curl command or Postman, the operations (GET/POST/DELETE) are working good and the data reflect in the database.
-Next I dockerized this simple app and trying to run. The application seems to get struck in the below code
-func Connection(conn_string string) gin.HandlerFunc {
-  log.Println(""Connection: 0"", conn_string)
-  config, err := pgxpool.ParseConfig(conn_string)
-  log.Println(""Connection: 1"", config.ConnString())
-  if err != nil {
-      log.Fatal(err)
-  }
-  log.Println(""Connection: 2"")
-  pool, err := pgxpool.ConnectConfig(context.Background(), config) // gets struck here
-  if err != nil {
-      log.Fatal(err)
-  }
-  log.Println(""Connection: 3"")
-  return func(c *gin.Context) {
-      c.Set(""pool"", pool)
-      c.Next()
-  }
-}
-
-The code seems to get frozen after printing Connection: 2 at the line
-pool, err := pgxpool.ConnectConfig(context.Background(), config)
-After few minutes, I am getting a error
-FATA[0120] failed to connect to host=192.165.xx.xxx user=user_name database=dbname`: dial error (timeout: dial tcp 192.165.xx.xxx:5432: i/o timeout).
-Below is my docker file
-FROM golang as builder
-WORKDIR /catalog
-COPY main.go ./
-COPY go.mod ./
-COPY go.sum ./
-RUN go get .
-RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o catalog .
-
-# deployment image
-FROM scratch
-#FROM alpine:3.17.1
-# copy ca-certificates from builder
-COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
-WORKDIR /bin/
-COPY --from=builder /catalog .
-CMD [ ""./catalog"" ]
-#CMD go run /catalog/main.go
-EXPOSE 8080
-
-Note, I tried getting into the container bash shell and could ping the target ip 192.165.xx.xxx.
-Please let me know why pgxpool fails to connect to the DB in the Docker container but works on the host (Ubuntu) without any issue.
-","1. Update-2 : The real issue is passing the arguments while starting the application. When the arguments are passed correctly, this is started working.
-Update-1: I still see issues while running the query and could produce it outside the docker as well.
-I could fix it with upgraded pgxpool v5 instead of v4.
-All I did is
-go get -u github.com/jackc/pgx/v5/pgxpool, used it in the code as well
-and it worked as expected.
-This could be a known bug but could not find any related issue to include it in this post.
-Below is the final code that is working
-func Connection(conn_string string) gin.HandlerFunc {
-    log.Println(""Connection: 0"", conn_string)
-    config, err := pgxpool.ParseConfig(conn_string)
-    log.Println(""Connection: 1"", config.ConnString())
-    if err != nil {
-        log.Fatal(err)
-    }
-    log.Println(""Connection: 2"")
-    //pool, err := pgxpool.ConnectConfig(context.Background(), config)
-    pool, err := pgxpool.NewWithConfig(context.Background(), config)
-    if err != nil {
-        log.Fatal(err)
-    }
-    log.Println(""Connection: 3"")
-    return func(c *gin.Context) {
-        c.Set(""pool"", pool)
-        c.Next()
-    }
-}
-
-",CockroachDB
-"The Rows.Scan method takes as many parameters as there are columns in the SQL query.
-As the query being executed is SHOW COLUMNS FROM my_table I cannot omit any column which I don't require (or can I?).
-Is there any way to ignore some fields from the query result set which are not required?
-Below is my code:
-rows, err := db.Query(""SHOW COLUMNS FROM "" + r.Name)
-DieIf(err)
-//var field, dataType, ignoreMe1, ignoreMe2, ignoreMe3 string
-var field, dataType string
-for rows.Next() {
-                    //This Place
-                    //   |
-                    //   V
-    if err := rows.Scan(&field, &dataType); err != nil {
-        DieIf(err)
-    }
-    r.Attributes[field] = Attribute{
-        Name:       field,
-        DataType:   dataType,
-        Constraint: false,
-    }
-}
-
-error:
-sql: expected 5 destination arguments in Scan, not 2
-","1. So here I'm with one solution for you, try this one to get field and type from query.
-package main
-
-import (
-    ""fmt""
-    _ ""github.com/lib/pq""
-    ""database/sql""
-)
-
-func main() {
-
-    db, _ := sql.Open(
-        ""postgres"",
-        ""user=postgres dbname=demo password=123456"")
-
-    rows, _ := db.Query(""SELECT * FROM tableName;"")
-
-    columns, _ := rows.Columns()
-    count := len(columns)
-    values := make([]interface{}, count)
-    valuePtr := make([]interface{}, count)
-
-    for rows.Next() {
-        for i, _ := range columns {
-            valuePtr[i] = &values[i]
-        }
-
-        rows.Scan(valuePtr...)
-
-        for i, col := range columns {
-            var v interface{}
-
-            val := values[i]
-
-            b, ok := val.([]byte)
-            if (ok) {
-                v = string(b)
-            } else {
-                v = val
-            }
-
-            fmt.Println(col, v)
-        }
-    }
-}
-
-
-2. Like this answer suggested, sqlx could be a good choice.
-I personally use db.Unsafe() to ignore unwanted fields.
-type MyTable struct {
-    // list the fields you want here
-    Name   string `db:""name""`
-    Field1 string `db:""field1""`
-}
-
-db := try(sqlx.ConnectContext(context.Background(), ""mysql"",
-    fmt.Sprintf(""root:@tcp(%s:%d)/?charset=utf8mb4&timeout=3s"", host, port)))
-db = db.Unsafe()
-
-rows := try(db.Queryx(""select * from mytable""))
-for rows.Next() {
-    myTable := MyTable{}
-    rows.StructScan(&myTable)
-}
-
-",CockroachDB
-"Is there any stable nosql database for iOS except for Couchbase?
-Couchbase is now a beta version which i don't want to use on a app with many users.(Although i like Couchbase very much)
-Any suggestions? Special Thx!
-","1. There are several projects to get a CouchDB-compatible API available on mobile devices.
-
-TouchDB, a native iOS build
-PouchDB, an HTML5 implementation, for web and PhoneGap apps
-
-
-2. Edit (April, 2024):
-
-Realm.io is the way to go nowadays.
-
-
-Also take a look to this Key/value databases that have been ported (wrapped) to iOS:
-
-LevelDB (Port: LevelDB): Made by Google, and it seems to be one of the fastest out there. No relational data model, no SQL queries, no support for indexes
-
-
-3. I am also looking at NoSQL for iOS and found NanoStore:
-https://github.com/tciuro/NanoStore
-Although, if you have time to explore, it would be a great experience to learn SQLite properly with custom functions. It is very easy to create your own NoSQL database: just one table for all objects, storing dictionaries/JSON, along with views/indexes built with custom functions.
-Making your own solution is not the hard part. The hard work is mapping your objects to the database. This task can grow the complexity of your codebase in the most hideous ways, and you need to be a very good coder to avoid that. Although maybe you must suffer through such an experience if you want to become very good.
-One of the nastiest problems will also be the relationships between objects. Solving that is the main goal of Core Data, which is the reason you will read that Core Data is not a database.
-Learning SQLite properly, specially where you create custom plugins for it, can open many doors. However be aware that most developers do not care at all about learning those details and will get lost with your code base.
-",Couchbase
-"I have a question and want to ask you:
-What's the difference between Couchbase Capella and Couchbase Server?
-And what would you recommend to use?
-Do I have the same benefits on Capella as on the Server?
-thank you very much!
-","1. The simplest way to think about it: Couchbase Capella is a managed database-as-a-service, hosted version of Couchbase Server. Couchbase manages the infrastructure, the software, upgrades, patches, and so on. You get access to it via a control plane.
-Couchbase Server is the host-it-yourself, downloadable version. It can be deployed pretty much anywhere, but you will have to manage the upgrades, patches, infrastructure, and so on.
-That being said, there are differences between them, and that's likely to continue. For instance, Capella also has ""App Services"" available as part of the platform. This is roughly equivalent to Couchbase Sync Gateway, which would be yet another downloadable install for you to manage.
-But I think the more important question, and the one probably not great for Stack Overflow, is ""And what would you recommend to use?""
-Short answer: if you have the time, expertise, resources, and experience in hosting your own Couchbase cluster, and want maximum control, you might be good with Server. Otherwise, I'd say Capella.
-
-2. Couchbase Server is an on-premises or self-managed database where you are responsible for deploying, configuring, managing, and maintaining it. Hence, you have more control over it.
-On the other hand, Couchbase Capella is a Database-as-a-Service (DBaaS) where you deploy and run the Couchbase Server in the cloud. It offers features like auto-scaling, monitoring, and maintaining database clusters. More or less, it will take care of the server for you, and you could have less control over it, but it is more convenient as you are not worried about the infrastructure and other things.
-",Couchbase
-"I'm experimenting with dgraph and so far my biggest struggle is to create an edge between two objects without prior knowledge of their uids (for bulk loading).
-Example - let's have two types - parent and child, the only difference is that child is always a leaf node, so the schema may be something like
-<name>: string .
-<age>: int .
-<children>: [uid] .
-
-type Parent {
-    name
-    age
-    children
-}
-
-type Child {
-    name
-    age
-}
-
-Now I would like to insert three nodes - two parents and one child - and create an edge between them, all in one query, without querying the uid first. I imagine something like this:
-{
-    ""set"": [
-        {
-            ""name"": ""Kunhuta"",
-            ""age"": 38,
-            ""dgraph.type"": ""Parent""
-        },
-        {
-            ""name"": ""Vendelin"",
-            ""age"": 41,
-            ""dgraph.type"": ""Parent""
-        },
-        {
-            ""name"": ""Ferko"",
-            ""age"": 14,
-            ""dgraph.type"": ""Child"",
-            ""Parent"": [
-                {
-                    ""name"": ""Kunhuta""
-                },
-                {
-                    ""name"": ""Vendelin""
-                }
-            ]
-        }
-    ]
-}
-
-(Suppose names are unique type identifiers)
-Is it possible to somehow do so in dgraph?
-","1. So if you are adding new nodes to the graph and wish to connect them to each other with new edges you can do this with blank nodes.
-From the dgraph docs: https://dgraph.io/docs/v1.0.8/mutations/
-
-Blank nodes in mutations, written _:identifier, identify nodes within a mutation
-
-Here's your example with blank nodes added:
-{
-    ""set"": [
-        {
-            ""uid"": ""_:kunhuta"",
-            ""name"": ""Kunhuta"",
-            ""age"": 38,
-            ""dgraph.type"": ""Parent""
-        },
-        {
-            ""uid"": ""_:vendelin"",
-            ""name"": ""Vendelin"",
-            ""age"": 41,
-            ""dgraph.type"": ""Parent""
-        },
-        {
-            ""uid"": ""_:ferko"",
-            ""name"": ""Ferko"",
-            ""age"": 14,
-            ""dgraph.type"": ""Child"",
-            ""Parent"": [
-                {
-                    ""uid"": ""_:kunhuta"",
-                    ""name"": ""Kunhuta""
-                },
-                {
-                    ""uid"": ""_:vendelin"",
-                    ""name"": ""Vendelin""
-                }
-            ]
-        }
-    ]
-}
-
-Things get a bit more difficult when you want to connect a new node to an existing one in the graph. For example, say you committed the above mutation and then wanted to add a new child to ""Kunhuta"". You would need to query the existing uid of ""Kunhuta"" and use that uid in a new set mutation to add your child.
-Also I would use some form of unique identifier outside of names to help query for the uid. If you ended up adding another ""Kunhuta"" in your graph how would you know which is the real parent? For this reason dgraph recommends adding an external identifier (usually a uuid) and calling it xid in your schema.
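-For that connect-to-existing-node case, Dgraph's upsert block can do the uid lookup and the mutation in one request. A rough sketch over the HTTP API with Python requests (it assumes the Alpha is on localhost:8080 and that the name predicate has an @index such as hash, so eq() can be used at the query root):
-import requests
-
-upsert = '''
-upsert {
-  query {
-    parent as var(func: eq(name, ""Kunhuta""))
-  }
-
-  mutation {
-    set {
-      uid(parent) <children> _:ferko .
-      _:ferko <name> ""Ferko"" .
-      _:ferko <age> ""14"" .
-      _:ferko <dgraph.type> ""Child"" .
-    }
-  }
-}
-'''
-
-resp = requests.post(
-    'http://localhost:8080/mutate?commitNow=true',
-    data=upsert,
-    headers={'Content-Type': 'application/rdf'},
-)
-print(resp.json())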
-",Dgraph
-"Issue : 2024-05-16T08:03:08,345 ERROR [qtp475172655-138] org.apache.druid.sql.avatica.DruidMeta - INSERT operations are not supported by requested SQL engine [native], consider using MSQ.
-org.apache.druid.error.DruidException: INSERT operations are not supported by requested SQL engine [native], consider using MSQ.
-Avatica JDBC jar : 1.25.0
-url = ""jdbc:avatica:remote:url=http://54.210.239.246:8888/druid/v2/sql/avatica/;transparent_reconnection=true""
-common.runtime.properties contains the entry - druid.extensions.loadList=[""druid-hdfs-storage"", ""druid-kafka-indexing-service"", ""druid-datasketches"", ""druid-multi-stage-query""]
-Druid started with auto mode :
-[Thu May 16 07:10:17 2024] Running command[broker]: bin/run-druid broker /home/ec2-user/apache-druid-29.0.1/conf/druid/auto '-Xms1849m -Xmx1849m -XX:MaxDirectMemorySize=1232m'
-[Thu May 16 07:10:17 2024] Running command[historical]: bin/run-druid historical /home/ec2-user/apache-druid-29.0.1/conf/druid/auto '-Xms2144m -Xmx2144m -XX:MaxDirectMemorySize=3216m'
-[Thu May 16 07:10:17 2024] Running command[middleManager]: bin/run-druid middleManager /home/ec2-user/apache-druid-29.0.1/conf/druid/auto '-Xms67m -Xmx67m' '-Ddruid.worker.capacity=2 -Ddruid.indexer.runner.javaOptsArray=[""-server"",""-Duser.timezone=UTC"",""-Dfile.encoding=UTF-8"",""-XX:+ExitOnOutOfMemoryError"",""-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager"",""-Xms502m"",""-Xmx502m"",""-XX:MaxDirectMemorySize=502m""]'
-[Thu May 16 07:10:17 2024] Running command[router]: bin/run-druid router /home/ec2-user/apache-druid-29.0.1/conf/druid/auto '-Xms256m -Xmx256m -XX:MaxDirectMemorySize=128m'
-[Thu May 16 07:10:17 2024] Running command[coordinator-overlord]: bin/run-druid coordinator-overlord /home/ec2-user/apache-druid-29.0.1/conf/druid/auto '-Xms2010m -Xmx2010m'
-","1. I believe this issue is likely because you sent a SQL INSERT statement to the interactive query (SELECT) API.
-The interactive query API is synchronous, and supports SELECT.
-Asynchronous operations, including ingestion and queries that you do not need to run interactively, go to the MSQ-enabled SQL-based ingestion API.
-See the SQL-based ingestion API documentation for more information.
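-For illustration, the same INSERT can be submitted directly to the MSQ task endpoint over HTTP instead of going through the Avatica driver. A rough sketch with Python requests (the router address is taken from the question and the statement itself is a placeholder):
-import requests
-
-router = 'http://54.210.239.246:8888'  # router from the question; adjust as needed
-
-payload = {
-    # Placeholder statement - MSQ INSERTs must include a PARTITIONED BY clause.
-    'query': (
-        'INSERT INTO my_table_copy '
-        'SELECT * FROM my_table '
-        'PARTITIONED BY DAY'
-    ),
-    'context': {'maxNumTasks': 2},
-}
-
-resp = requests.post(f'{router}/druid/v2/sql/task/', json=payload)
-# The response contains a task id that can be polled for ingestion status.
-print(resp.status_code, resp.json())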
-",Druid
-"My use case is First i need to do batch ingestion so that a datascource is created for that batch ingestion. Next for that same datasource i need to append data using streaming ingestion (which is real time). How to do this in Apache Druid.
-I have tried batch ingestion and streaming separately.
-","1. This is very common, and usually has something like Apache Airflow controlling batch ingestion while the supervisor handles consumption from Apache Kafka, Azure Event Hub, Amazon Kinesis, etc.
-Both batch and streaming ingestion into Apache Druid allow you to specify a destination table.
-Streaming ingestion is supervised, meaning that it runs continuously until you stop it. You can have only one supervisor per table.
-Batch ingestion is asynchronous, and can be run at any time against a table.
-As you build out an ingestion specification for streaming ingestion in the Druid console, notice that it builds a JSON document. It contains the all-important table name in the datasource element.
-Note that, in the current version of Druid, locking is (essentially) on time intervals. Therefore, so long as your batch ingestion and streaming ingestion do not cross time periods, you will avoid this, and can run both batch and streaming ingestion at the same time on the same table.
-See also streaming documentation on the official site and the associated tutorial.
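-To make the streaming side concrete, below is a rough sketch of starting a Kafka supervisor that appends to the same datasource your batch jobs write to (Python requests; the spec is trimmed to the essentials and every name in it - datasource, topic, brokers, columns - is a placeholder):
-import requests
-
-supervisor_spec = {
-    'type': 'kafka',
-    'spec': {
-        'dataSchema': {
-            'dataSource': 'my_table',  # same table the batch ingestion targets
-            'timestampSpec': {'column': 'ts', 'format': 'iso'},
-            'dimensionsSpec': {'dimensions': ['sensor_name']},
-            'granularitySpec': {'segmentGranularity': 'hour', 'queryGranularity': 'none'},
-        },
-        'ioConfig': {
-            'topic': 'my_topic',
-            'inputFormat': {'type': 'json'},
-            'consumerProperties': {'bootstrap.servers': 'kafka:9092'},
-        },
-        'tuningConfig': {'type': 'kafka'},
-    },
-}
-
-resp = requests.post(
-    'http://localhost:8888/druid/indexer/v1/supervisor',
-    json=supervisor_spec,
-)
-print(resp.status_code, resp.json())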
-",Druid
-"After installing Druid i want to connect with Kafka. So, in my Apache Druid i came up with this information:
-
-After I followed the instructions, my file located at /opt/druid/conf/druid/cluster/_common/common.runtime.properties already had the extension, as you can see in the following picture:
-
-Apache Druid: 0.21.1
-Apache Kafka: latest
-I am doing this using docker. Both are installed on the same machine.
-After everything, and following all the information in the documentation, I am not able to connect Kafka and Druid because I am still getting the message ""please make sure that kafka-indexing-service extension is included in the loadlist"" in Druid.
-Update:
-My Docker Composer:
-version: '2'
-services:
-  spark:
-    container_name: spark
-    image: bde2020/spark-master
-    ports: 
-      - 9180:8080
-      - 9177:7077
-      - 9181:8081
-    links: 
-      - elassandra
-    volumes:
-      - /home/mostafa/Desktop/kafka-test/together/cassandra/mostafa-hosein:/var/lib/docker/volumes/data/python
-
-
-
-  elassandra:
-    image: strapdata/elassandra
-    container_name: elassandra
-    build: /home/mostafa/Desktop/kafka-test/together/cassandra
-    env_file:
-      - /home/mostafa/Desktop/kafka-test/together/cassandra/conf/cassandra.env
-    volumes:
-      - /home/mostafa/Desktop/kafka-test/together/cassandra/jarfile:/var/lib/docker/volumes/data/_data
-    ports:
-      - '7000:7000'
-      - '7001:7001'
-      - '7199:7199'
-      - '9042:9042'
-      - '9142:9142'
-      - '9160:9160'
-      - '9200:9200'
-      - '9300:9300'
-
-  zookeeper:
-    image: wurstmeister/zookeeper
-    container_name: zookeeper
-    ports:
-      - ""2181:2181""
-
-  kafka:
-    build: .
-    container_name: kafka
-    links:
-     - zookeeper
-    ports:
-      - ""9092:9092""
-    environment:
-      KAFKA_ADVERTISED_HOST_NAME: localhost
-      KAFKA_ADVERTISED_PORT: 9092
-      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
-      KAFKA_OPTS: -javaagent:/usr/app/jmx_prometheus_javaagent.jar=7071:/usr/app/prom-jmx-agent-config.yml
-      CONNECTORS: elassandra
-    volumes:
-      - /var/run/docker.sock:/var/run/docker.sock
-    depends_on: 
-      - elassandra
-
-  kafka_connect-cassandra:
-    image: datamountaineer/kafka-connect-cassandra
-    container_name: kafka-connect-cassandra
-    ports:
-      - 8083:8083
-      - 9102:9102
-    environment: 
-      - connect.cassandra.contact.points=localhost
-      - KAFKA_ZOOKEEPER_CONNECT =  ""zookeeper:2181""
-      - KAFKA_ADVERTISED_LISTENERS= ""kafka:9092""
-      - connect.cassandra.port=9042
-      - connector.class=com.datamountaineer.streamreactor.connect.cassandra.sink.CassandraSinkConnector
-      - tasks.max=1
-    depends_on:
-      - kafka
-      - elassandra
-
-
-Druid Config: (conf/druid/cluster/_common/common.runtime.properties)
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# ""License""); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# ""AS IS"" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-#
-
-# Extensions specified in the load list will be loaded by Druid
-# We are using local fs for deep storage - not recommended for production - use S3, HDFS, or NFS instead
-# We are using local derby for the metadata store - not recommended for production - use MySQL or Postgres instead
-
-# If you specify `druid.extensions.loadList=[]`, Druid won't load any extension from file system.
-# If you don't specify `druid.extensions.loadList`, Druid will load all the extensions under root extension directory.
-# More info: https://druid.apache.org/docs/latest/operations/including-extensions.html
-druid.extensions.loadList=[""druid-hdfs-storage"", ""druid-kafka-indexing-service"", ""druid-datasketches""]
-
-# If you have a different version of Hadoop, place your Hadoop client jar files in your hadoop-dependencies directory
-# and uncomment the line below to point to your directory.
-#druid.extensions.hadoopDependenciesDir=/my/dir/hadoop-dependencies
-
-
-#
-# Hostname
-#
-druid.host=localhost
-
-#
-# Logging
-#
-
-# Log all runtime properties on startup. Disable to avoid logging properties on startup:
-druid.startup.logging.logProperties=true
-#
-# Zookeeper
-#
-
-druid.zk.service.host=localhost
-druid.zk.paths.base=/druid
-
-#
-# Metadata storage
-#
-
-# For Derby server on your Druid Coordinator (only viable in a cluster with a single Coordinator, no fail-over):
-druid.metadata.storage.type=derby
-druid.metadata.storage.connector.connectURI=jdbc:derby://localhost:1527/var/druid/metadata.db;create=true
-druid.metadata.storage.connector.host=localhost
-druid.metadata.storage.connector.port=1527
-
-# For MySQL (make sure to include the MySQL JDBC driver on the classpath):
-#druid.metadata.storage.type=mysql
-#druid.metadata.storage.connector.connectURI=jdbc:mysql://db.example.com:3306/druid
-#druid.metadata.storage.connector.user=...
-#druid.metadata.storage.connector.password=...
-
-# For PostgreSQL:
-#druid.metadata.storage.type=postgresql
-#druid.metadata.storage.connector.connectURI=jdbc:postgresql://db.example.com:5432/druid
-#druid.metadata.storage.connector.user=...
-#druid.metadata.storage.connector.password=...
-
-#
-# Deep storage
-#
-
-# For local disk (only viable in a cluster if this is a network mount):
-druid.storage.type=local
-druid.storage.storageDirectory=var/druid/segments
-
-# For HDFS:
-#druid.storage.type=hdfs
-#druid.storage.storageDirectory=/druid/segments
-
-# For S3:
-#druid.storage.type=s3
-#druid.storage.bucket=your-bucket
-#druid.storage.baseKey=druid/segments
-#druid.s3.accessKey=...
-#druid.s3.secretKey=...
-#
-# Indexing service logs
-#
-
-# For local disk (only viable in a cluster if this is a network mount):
-druid.indexer.logs.type=file
-druid.indexer.logs.directory=var/druid/indexing-logs
-
-# For HDFS:
-#druid.indexer.logs.type=hdfs
-#druid.indexer.logs.directory=/druid/indexing-logs
-
-# For S3:
-#druid.indexer.logs.type=s3
-#druid.indexer.logs.s3Bucket=your-bucket
-#druid.indexer.logs.s3Prefix=druid/indexing-logs
-
-#
-# Service discovery
-#
-
-druid.selectors.indexing.serviceName=druid/overlord
-druid.selectors.coordinator.serviceName=druid/coordinator
-
-#
-# Monitoring
-#
-
-druid.monitoring.monitors=[""org.apache.druid.java.util.metrics.JvmMonitor""]
-druid.emitter=noop
-druid.emitter.logging.logLevel=info
-
-# Storage type of double columns
-# ommiting this will lead to index double as float at the storage layer
-
-druid.indexing.doubleStorage=double
-
-#
-# Security
-#
-druid.server.hiddenProperties=[""druid.s3.accessKey"",""druid.s3.secretKey"",""druid.metadata.storage.connector.password""]
-
-
-#
-# SQL
-#
-druid.sql.enable=true
-
-#
-# Lookups
-#
-druid.lookup.enableLookupSyncOnStartup=false
-
-","1. Stop the coordinator container, and Kafka will become available. Then, you can start it again.
-Once you configure the Kafka settings, it will work normally.
-
-2. In the same folder as your docker-compose file, there is an ""environment"" file; just add ""druid-kafka-indexing-service"" to ""druid_extensions_loadList"" and then run your docker compose again.
-The file I mention:
-https://github.com/apache/druid/blob/master/distribution/docker/environment
-",Druid
-"I knew we have a way to get the Etabs model object by calling the function to open the Etabs application. However, in many cases, we need to import data from etabs file but don't want to open the Etabs application.
-Is there any way to get the Model object from an edb file without opening ETabs application? Can anyone point me?
-","1. You can use two way to get and set data to Etabs. first one is to read and write on e2k files and second way is to use ""CSI API ETABS v1.chm"" the file for using Etabs Interfa
-",EDB
-"when trying to use the VSCode Debugger, I get an error message:
-""Failed to launch: could not launch process: can not run under Rosetta, check that the installed build of Go is right for your CPU architecture""
-some background context as I read solutions for similar questions:
-
-I use foundationDB which does not work with GO arm64
-For this reason, I am using GO amd64
-Switching to arm64 would mean that foundationDB will not work, which is not an option
-
-I tried downloading dlv, but it doesn't work. Also tried the solution proposed here to run VSCode integrated terminal in x86-64.
-Is there a way for the debugger to work with Apple M1 using go1.18 darwin/amd64?
-","1. I just got this issue on my M1 and was able to resolve. Here are my steps:
-
-go to the go download page, https://go.dev/dl/ and download the arm version of go installer. Specifically, go.darwin-arm64.pkg
-
-install go, if it detects a previous version, agree to replace
-
-open terminal and verify go version and it should say ""go version go darwin/arm64""
-
-On VSCode, click on plugins, find the installed Go plugin, and uninstall then reload VSCode.
-
-When the plugin installation is complete, press Ctrl + Shift + P in VSCode to bring up the Command Palette, then type go: Install and select go: Install/Update Tools, then click the first checkbox to install all Tools.
-
-When Go Tools install is complete, reload VSCode and retry breakpoint.
-
-
-
-2. This is a popular issue on Mac. The solution is to install Golang and VSCODE in ARM-64 (not AMD64).
-Here are some links for reference
-https://github.com/go-delve/delve/issues/2604
-Cannot run debug Go using VSCode on Mac M1
-
-3. I faced the same issue,
-Both Go and VSCode were ARM-64 version
-The following steps resolved my issue,
-
-Update Go to the latest version
-Update VSCode
-Uninstall Go extension
-relaunch VSCode
-Press command + shift + P
-Run go: Install/Update Tools
-Select all the tools
-update
-again relaunch VSCode
-
-",FoundationDB
-"I am trying to install fdb k8s operator with helm chart. But when i try to add the repo, getting below error.
-helm repo add fdb-kubernetes-operator https://foundationdb.github.io/fdb-kubernetes-operator/
-
-Error: looks like ""https://foundationdb.github.io/fdb-kubernetes-operator/"" is not a valid chart repository or cannot be reached: failed to fetch https://foundationdb.github.io/fdb-kubernetes-operator/index.yaml : 404 Not Found
-
-
-Any help on this?
-","1. This is an issue that should be addressed to the maintainers of the foundation db repository, as this command appears as-is in the docs. They should fix it so that the chart is usable.
-If you'd like to work around the issue until the issue is remediated, you can clone the GitHub repository itself and install the chart from your local files by running helm install like so -
-helm install fob-operator ./charts/fdb-operator
-",FoundationDB
-"Trying to cross compile on macos arm for linux. My sample project looks like this:
-main.go:
-package main
-
-import(
- ""github.com/apple/foundationdb/bindings/go/src/fdb""
-)
-
-
-func main() {
-        fdb.APIVersion(630)
-        fdb.MustOpenDatabase(""fdb.cluster"")
-}
-
-go.mod
-module fdbtest
-
-go 1.19
-
-require github.com/apple/foundationdb/bindings/go v0.0.0-20221026173525-97cc643cef69
-
-require golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 // indirect
-
-go.sum
-github.com/apple/foundationdb/bindings/go v0.0.0-20221026173525-97cc643cef69 h1:vG55CLKOUgyuD15KWMxqRgTPNs8qQfXPtWjYYN5Wai0=
-github.com/apple/foundationdb/bindings/go v0.0.0-20221026173525-97cc643cef69/go.mod h1:w63jdZTFCtvdjsUj5yrdKgjxaAD5uXQX6hJ7EaiLFRs=
-golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
-golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
-
-I've installed foundationdb go lang bindings via go get github.com/apple/foundationdb/bindings/go@6.3.25
-but when I do env GOOS=linux GOARCH=amd64 go build I get the following errors:
- env GOOS=linux GOARCH=amd64 go build                                                                                       
-# github.com/apple/foundationdb/bindings/go/src/fdb
-../../../go/pkg/mod/github.com/apple/foundationdb/bindings/go@v0.0.0-20221026173525-97cc643cef69/src/fdb/keyselector.go:39:10: undefined: KeyConvertible
-../../../go/pkg/mod/github.com/apple/foundationdb/bindings/go@v0.0.0-20221026173525-97cc643cef69/src/fdb/snapshot.go:33:3: undefined: transaction
-../../../go/pkg/mod/github.com/apple/foundationdb/bindings/go@v0.0.0-20221026173525-97cc643cef69/src/fdb/generated.go:45:9: undefined: NetworkOptions
-<...>
-../../../go/pkg/mod/github.com/apple/foundationdb/bindings/go@v0.0.0-20221026173525-97cc643cef69/src/fdb/generated.go:94:9: too many errors
-
-So it seems that it cannot find any of the types from fdb. Yet the KeyConvertible and the NetworkOptions (and others) exist in ../../../go/pkg/mod/github.com/apple/foundationdb/bindings/go@v0.0.0-20221026173525-97cc643cef69/src/fdb/fdb.go
-My golang version: go version go1.19.3 darwin/arm64
-Newer fdb go bindings (7.1.25, 7.1.0) seem to behave the same...
-what am I missing here?
-","1. If you follow https://github.com/apple/foundationdb/tree/main/bindings/go you can see how to install the package properly.
-
-2. arm64 architecture installations may be problematic but you should also check some requirements.
-In my case with amd64 architecture the default Ubuntu installation for Windows 11 WSL2 was missing some dependencies:
-
-mono
-libc6
-cmake
-
-After installing the above packages the errors were gone.
-See more details at https://forums.foundationdb.org/t/golang-errors-finding-github-com-apple-foundationdb-bindings-go-src-fdb/4133
-",FoundationDB
-"I'm interested to know the state of the JanusGraph-FoundationDB Storage Adapter:
-Four years ago it was announced: https://www.youtube.com/watch?v=rQM_ZPZy8Ck&list=PLU8TPe7k8z9ew5W6YpACnGvjBDYaJVORZ&index=1
-According to the README: https://github.com/JanusGraph/janusgraph-foundationdb, the storage adapter is compatible only for the JanusGraph version 0.5.2 with FoundationDB version 6.2.22, while the latest JanusGraph version is 0.6.3 and the latest FoundationDB version is 7.2 (https://apple.github.io/foundationdb/).
-Since I don't see FoundationDB listed in storage backends
-
-and there is no official JanusGraph documentation about FoundationDB as backend storage, I would like to know if it is still feasible, and, more importantly, advisable, to use FoundationDB as storage backend for JanusGraph.
-","1. Looks like Foundation DB is not officially supported with Janusgraph, but based on my research I found that ScyllaDB is more performant with Janusgraph. If you want to use the latest version of Janusgraph then you may end up with compatibility issues with Foundation DB
-",FoundationDB
-"I am following the official document to setup environment.
-using
-
-grafana 9.5.15
-greptimedb latest
-connect through prometheus plugin
-
-The GreptimeDB Table:
-CREATE TABLE IF NOT EXISTS ""system"" (
-  ""host"" STRING NULL,
-  ""ts"" TIMESTAMP(3) NOT NULL,
-  ""cpu"" DOUBLE NULL,
-  ""mem"" DOUBLE NULL,
-  TIME INDEX (""ts""),
-  PRIMARY KEY (""host"")
-)
-
-The GreptimeDB current Data:
-Time   cpu  mem
-12345  2000 8000
-45467  2000 10000
-
-The Grafana Query: system
-The current result
-Time   Value
-12345  10000
-45467  12000
-
-And the time series dashboard only has one line.
-It looks like the data has been auto-summed.
-The result I want:
-Time   cpu  mem
-12345  2000 8000
-45467  2000 10000
-
-And the time series dashboard must have two lines:
-
-cpu
-mem
-
-","1. The table you created contains multiple fields (cpu and mem), which are not supported by vanilla Prometheus and the corresponding Grafana plugin. You can use the extended PromQL tag __field__ to write two queries in Grafana:
-system{__field__=""cpu""}
-
-system{__field__=""mem""}
-
-",GreptimeDB
-"I'm trying to implement a rate limiter using Bucket4j and Infinispan. Below is the code I'm using to store LockFreeBucket objects in the Infinispan cache:
-public boolean isRateLimited(String key) {
-
-    Bucket bucket = cacheManager.getCache(""rate-limiter"").get(key, Bucket.class);
-
-    if (bucket == null) {
-        bucket = bucketSupplier.get();
-        Cache<String, Bucket> cache = cacheManager.getCache(""rate-limiter"");
-        cache.put(key, bucket);
-    }
-    return !bucket.tryConsume(1);
-}
-
-When it tries to put the key, i.e. cache.put(key, bucket), I'm getting an exception: org.infinispan.client.hotrod.exceptions.HotRodClientException:: Unable to marshall object of type [io.github.bucket4j.local.LockFreeBucket]] with root cause
-java.io.NotSerializableException: io.github.bucket4j.local.LockFreeBucket
-And below is my RemoteCacheManager configuration:
-@Bean
-public RemoteCacheManager getRemoteCacheManager() {
-
-    ConfigurationBuilder configurationBuilder = new ConfigurationBuilder();
-
-configurationBuilder
-    .addServer()
-    .host(""somehost"")
-    .port(11222)
-    .marshaller(new JavaSerializationMarshaller())
-    .addJavaSerialWhiteList(""io.github.bucket4j.*"")
-    .security()
-    .ssl()
-    .sniHostName(""infinispan"")
-    .trustStoreFileName(""./cacerts 2"")
-    .trustStorePassword(""changeit"".toCharArray())
-    .authentication()
-    .username(""some"")
-    .password(""somepassword"")
-    .addContextInitializer(new CreateProtoImpl())
-    .clientIntelligence(ClientIntelligence.BASIC);
-    return new RemoteCacheManager(configurationBuilder.build());
-}
-
-","1. LockFreeBucket does not implement java.io.Serializable. The simplest way would be with Protostream adapters, although it seems a complex class to serialize: https://infinispan.org/docs/stable/titles/encoding/encoding.html#creating-protostream-adapter_marshalling
-Are you using bucket4j-infinispan?
-",Infinispan
-"I'm using infispan for caching with spring boot application and gradle as build tool. In developing environment with intellij idea First startup of the application working fine without any error, but second time when I'm trying to run applicaton it getting below error.
-Caused by: org.infinispan.manager.EmbeddedCacheManagerStartupException: ISPN029025: Failed acquiring lock 'FileSystemLock{directory=.\cache1\0\RT12-CACHE, file=RT12-CACHE.lck}' for SIFS
-    at org.infinispan.manager.DefaultCacheManager.internalStart(DefaultCacheManager.java:781)
-    at org.infinispan.manager.DefaultCacheManager.start(DefaultCacheManager.java:746)
-    at org.infinispan.manager.DefaultCacheManager.<init>(DefaultCacheManager.java:412)
-    at org.infinispan.manager.DefaultCacheManager.<init>(DefaultCacheManager.java:361)
-    at org.infinispan.manager.DefaultCacheManager.<init>(DefaultCacheManager.java:336)
-    at org.infinispan.manager.DefaultCacheManager.<init>(DefaultCacheManager.java:324)
-
-I am only using local-cache-configuration as below:
-      <local-cache-configuration name=""default-template-1"" statistics=""true"">
-            <expiration lifespan=""129600000""/>
-            <persistence>
-                <file-store path=""./cache1/1""/>
-            </persistence>
-        </local-cache-configuration>
-
-        <local-cache-configuration name=""template-no-cache"" statistics=""true"">
-            <expiration lifespan=""0""/>
-            <persistence>
-                <file-store path=""./cache1/0""/>
-            </persistence>
-        </local-cache-configuration>
-
-        <local-cache-configuration name=""template-60-min-cache"" statistics=""true"">
-            <expiration lifespan=""3600000""/>
-            <persistence>
-                <file-store path=""./cache1/0""/>
-            </persistence>
-        </local-cache-configuration>
-
-        <local-cache-configuration name=""template-1-min-cache"" statistics=""true"">
-            <expiration lifespan=""60000""/>
-            <persistence>
-                <file-store path=""./cache1/0""/>
-            </persistence>
-        </local-cache-configuration>
-
-        <local-cache-configuration name=""template-10-min-cache"" statistics=""true"">
-            <expiration lifespan=""600000""/>
-            <persistence>
-                <file-store path=""./cache1/0""/>
-            </persistence>
-        </local-cache-configuration>
-
-        <local-cache-configuration name=""template-20-sec-cache"" statistics=""true"">
-            <expiration lifespan=""20000""/>
-            <persistence>
-                <file-store path=""./cache1/1""/>
-            </persistence>
-        </local-cache-configuration>
-
-        <local-cache-configuration name=""template-cm-3"" statistics=""true"">
-            <!--            for 60 minutes-->
-            <expiration lifespan=""3600000""/>
-            <persistence>
-                <file-store path=""./cache1/1""/>
-            </persistence>
-        </local-cache-configuration>
-
-versions:
-
-spring boot 3.2.2
-infinispan-spring-boot-starter-embedded 14.0.28.Final
-infinispan-core 15.0.3.Final
-
-But after deleting the cache folder I can run the application without any error. Now, every time I run the application, I have to delete the cache folder.
-I'm looking for a solution for running the application without deleting the cache folder every time.
-","1. do you have a small reproducer for this? I created a project with the given information but didn't hit the problem. Additionally, why the misaligned Infinispan versions?
-To give more context. This locking issue happens when the application has a hard crash or a misconfiguration somewhere in the application, creating multiple instances of the Infinispan cache manager pointing to the same data folder.
-On shutdown, Spring invokes all the procedures registered by beans to run on stop. This shutdown includes cleaning the lock file. Even if you CTRL-C the application, Spring would execute the shutdown process.
-It is not optimal, but deleting the file ending with .lck on the cache's data folder should be enough.
-",Infinispan
-"I am using CDC incremental load from mariadb database to the duckdb as a destination. Since there is no any connector for mariadb, I have used mysql connector. There is not any issue during set up process. However while syncing, the process stops after some minutes throwing out the following error :
-Cannot invoke ""io.airbyte.protocol.models.AirbyteGlobalState.getStreamStates()"" because the return value of ""io.airbyte.protocol.models.AirbyteStateMessage.getGlobal()"" is null
-and I am also getting the following warning:
-missing stats for job [job number]
-I have checked the airbyte state message in the state table inside the internal postgresql. However, the state table is empty. Probably, the source is not emitting any state messages.
-docker exec -ti airbyte-db psql -U docker -d airbyte
-
-SELECT * FROM state;
-
-","1. The AirByte uses Debezium for processing the MySQL logs.
-From Debezium, MariaDB is supported with MariaDB Connector/J requiring some supplemental configuration
-{
-  ...
-  ""connector.adapter"": ""mariadb"",
-  ""database.protocol"": ""jdbc:mariadb"",
-  ""database.jdbc.driver"": ""org.mariadb.jdbc.Driver""
-}
-
-",MariaDB
-"I am new to mariadb. Today, I was attempting to import a mysql database to mariadb and during the process the import stops when a warning is encountered as shown below.
-
-Now, I said to myself that I should check a log file so I can see the error, but I can't seem to find any log file. I ran the query below with help from Get the error log of Mariadb:
-
-As you can see there is no path to an error log file. 
-Next I checked /var/lib/mysql and below is the dir content:
--rw-rw----. 1 mysql mysql    16384 Jun  5 16:03 aria_log.00000001
--rw-rw----. 1 mysql mysql       52 Jun  5 16:03 aria_log_control
--rw-rw----. 1 mysql mysql 79691776 Jun  8 08:02 ibdata1
--rw-rw----. 1 mysql mysql 50331648 Jun  8 08:02 ib_logfile0
--rw-rw----. 1 mysql mysql 50331648 Jun  5 16:03 ib_logfile1
--rw-rw----. 1 mysql mysql        6 Jun  5 16:12 IMSPRO.pid
-drwx------. 2 mysql mysql     4096 Jun  8 08:02 ecommence
--rw-rw----. 1 mysql mysql        0 Jun  5 16:12 multi-master.info
-drwx--x--x. 2 mysql mysql     4096 Jun  5 16:03 mysql
-srwxrwxrwx. 1 mysql mysql        0 Jun  5 16:12 mysql.sock
-drwx------. 2 mysql mysql       20 Jun  5 16:03 performance_schema
--rw-rw----. 1 mysql mysql    24576 Jun  5 16:12 tc.log
-
-No file in the above dir logs errors.
-Below is the content of my /etc/my.cnf
-#
-# This group is read both both by the client and the server
-# use it for options that affect everything
-#
-[client-server]
-
-#
-# include all files from the config directory
-#
-!includedir /etc/my.cnf.d
-
-Below is the content of /etc/my.cnf.d
-drwxr-xr-x.  2 root root  117 Jun  5 16:02 .
-drwxr-xr-x. 91 root root 8192 Jun  7 01:14 ..
--rw-r--r--.  1 root root  295 May 29 16:48 client.cnf
--rw-r--r--.  1 root root  763 May 29 16:48 enable_encryption.preset
--rw-r--r--.  1 root root  232 May 29 16:48 mysql-clients.cnf
--rw-r--r--.  1 root root 1080 May 29 16:48 server.cnf
--rw-r--r--.  1 root root  285 May 29 16:48 tokudb.cnf
-
-What can I do to get error log?
-","1. The way to see the warnings is to type this immediately after receiving that ""Warnings: 1"":
- SHOW WARNINGS;
-
-(As soon as the next other command is run, the warnings are cleared.)
-
-2. In MariaDB version 10.3.29,
-
-show variables like 'log-error';
-
-returns an ""Empty set"" result.
-
-Instead, the following command works:
-
-show global variables like 'log_error';
-
-
-3. In my case, I edited /etc/my.cnf and added the line log_error after [mysqld]:
-[mysqld]
-...
-log_error
-
-After that, the query show variables like 'log_error'; displays the following (before the change, the Value column was empty):
-+---------------+--------------------------+
-| Variable_name | Value                    |
-+---------------+--------------------------+
-| log_error     | /var/lib/mysql/host.err  |
-+---------------+--------------------------+
-
-Now the error log is being written to the above file.
-The exact name of the file will vary from server to server and will take the name of the current host, so expect it to be different in your particular case.
-",MariaDB
-"I am trying to setup mariadb galera cluster on Debian Wheezy 7.5. I have found many different instructions, all a bit different, but none have worked thus far. 
-I am trying to setup a two node cluster.
-On the primary node, I am using the default my.cnf, with these additional settings in conf.d/cluster.cnf:
-[mysqld]
-#mysql settings
-bind-address=10.1.1.139
-query_cache_size=0
-query_cache_type=0
-binlog_format=ROW
-default_storage_engine=innodb
-innodb_autoinc_lock_mode=2
-innodb_doublewrite=1
-
-#galery settings
-wsrep_provider=/usr/lib/galera/libgalera_smm.so
-wsrep_cluster_address=""gcomm://10.1.1.139,10.1.1.140""
-wsrep_sst_method=rsync
-wsrep_cluster_name=""sql_cluster""
-wsrep_node_incoming_address=10.1.1.139
-wsrep_sst_receive_address=10.1.1.139
-wsrep_sst_auth=cluster:password
-wsrep_node_address='10.1.1.139'
-wsrep_node_name='sql1'
-wsrep_on=ON
-
-Created the cluster user, gave that user all the required permissions, started the server successfully with
-service mysql start --wsrep-new-cluster
-
-The cluster starts up, I can see cluster_size=1;
-On the second node, I am using the default my.cnf, with these additional settings in conf.d/cluster.cnf:
-[mysqld]
-#mysql settings
-bind-address=10.1.1.140
-query_cache_size=0
-query_cache_type=0
-binlog_format=ROW
-default_storage_engine=innodb
-innodb_autoinc_lock_mode=2
-innodb_doublewrite=1
-
-#galery settings
-wsrep_provider=/usr/lib/galera/libgalera_smm.so
-wsrep_cluster_address=""gcomm://10.1.1.139,10.1.1.140""
-wsrep_sst_method=rsync
-wsrep_cluster_name=""sql_cluster""
-wsrep_node_incoming_address=10.1.1.140
-wsrep_sst_receive_address=10.1.1.140
-wsrep_sst_auth=cluster:password
-wsrep_node_address='10.1.1.140'
-wsrep_node_name='sql1'
-wsrep_on=ON
-
-I also replaced debian.cnf on the secondary node with the one from the primary node as per this suggestion:
-http://docs.openstack.org/high-availability-guide/content/ha-aa-db-mysql-galera.html and granted the appropriate permissions (this was also suggested in other places, don't have the links right now).
-Contents of debian.cnf on both nodes:
-[client]
-host = localhost
-user = debian-sys-maint
-password = <password>
-socket = /var/run/mysqld/mysqld.sock
-[mysql_upgrade]
-host = localhost
-user = debian-sys-maint
-password = <password>
-socket = /var/run/mysqld/mysqld.sock
-basedir = /usr
-
-When I try to start the second node with:
-service mysql start
-It fails, and I get this in /var/log/syslog:
-May  7 19:45:30 ns514282 mysqld_safe: Starting mysqld daemon with databases from /var/lib/mysql
-May  7 19:45:30 ns514282 mysqld_safe: WSREP: Running position recovery with --log_error='/var/lib/mysql/wsrep_recovery.s6Uwyc' --pid-file='/var/lib/mysql/ns514282.ip-167-114-159.net-recover.pid'
-May  7 19:45:33 ns514282 mysqld_safe: WSREP: Recovered position 00000000-0000-0000-0000-000000000000:-1
-May  7 19:45:33 ns514282 mysqld: 150507 19:45:33 [Note] WSREP: wsrep_start_position var submitted: '00000000-0000-0000-0000-000000000000:-1'
-May  7 19:45:33 ns514282 mysqld: 150507 19:45:33 [Note] WSREP: Read nil XID from storage engines, skipping position init
-May  7 19:45:33 ns514282 mysqld: 150507 19:45:33 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib/galera/libgalera_smm.so'
-May  7 19:45:33 ns514282 mysqld: 150507 19:45:33 [Note] WSREP: wsrep_load(): Galera 3.9(rXXXX) by Codership Oy <info@codership.com> loaded successfully.
-May  7 19:45:33 ns514282 mysqld: 150507 19:45:33 [Note] WSREP: CRC-32C: using hardware acceleration.
-May  7 19:45:33 ns514282 mysqld: 150507 19:45:33 [Note] WSREP: Found saved state: 00000000-0000-0000-0000-000000000000:-1
-May  7 19:45:33 ns514282 mysqld: 150507 19:45:33 [Note] WSREP: Passing config to GCS: base_host = 10.1.1.142; base_port = 4567; cert.log_conflicts = no; debug = no; evs.auto_evict = 0; evs.delay_margin = PT1S; evs.delayed_keep_period = PT30S; evs.inactive_check_period = PT0.5S; evs.inactive_timeout = PT15S; evs.join_retrans_period = PT1S; evs.max_install_timeouts = 3; evs.send_window = 4; evs.stats_report_period = PT1M; evs.suspect_timeout = PT5S; evs.user_send_window = 2; evs.view_forget_timeout = PT24H; gcache.dir = /var/lib/mysql/; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /var/lib/mysql//galera.cache; gcache.page_size = 128M; gcache.size = 128M; gcs.fc_debug = 0; gcs.fc_factor = 1.0; gcs.fc_limit = 16; gcs.fc_master_slave = no; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = no; gmcast.segment = 0; gmcast.version = 0; pc.announce_timeout = PT3S; pc.checksum = false; pc.ignore_quorum = false; pc.ignore_sb = false; pc.npv
-May  7 19:45:33 ns514282 mysqld: o = false; pc.recovery 
-May  7 19:45:33 ns514282 mysqld: 150507 19:45:33 [Note] WSREP: Service thread queue flushed.
-May  7 19:45:33 ns514282 mysqld: 150507 19:45:33 [Note] WSREP: Assign initial position for certification: -1, protocol version: -1
-May  7 19:45:33 ns514282 mysqld: 150507 19:45:33 [Note] WSREP: wsrep_sst_grab()
-May  7 19:45:33 ns514282 mysqld: 150507 19:45:33 [Note] WSREP: Start replication
-May  7 19:45:33 ns514282 mysqld: 150507 19:45:33 [Note] WSREP: Setting initial position to 00000000-0000-0000-0000-000000000000:-1
-May  7 19:45:33 ns514282 mysqld: 150507 19:45:33 [Note] WSREP: protonet asio version 0
-May  7 19:45:33 ns514282 mysqld: 150507 19:45:33 [Note] WSREP: Using CRC-32C for message checksums.
-May  7 19:45:33 ns514282 mysqld: 150507 19:45:33 [Note] WSREP: backend: asio
-May  7 19:45:33 ns514282 mysqld: 150507 19:45:33 [Note] WSREP: restore pc from disk successfully
-May  7 19:45:33 ns514282 mysqld: 150507 19:45:33 [Note] WSREP: GMCast version 0
-May  7 19:45:33 ns514282 mysqld: 150507 19:45:33 [Note] WSREP: (66b559a2, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
-May  7 19:45:33 ns514282 mysqld: 150507 19:45:33 [Note] WSREP: (66b559a2, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
-May  7 19:45:33 ns514282 mysqld: 150507 19:45:33 [Note] WSREP: EVS version 0
-May  7 19:45:33 ns514282 mysqld: 150507 19:45:33 [Note] WSREP: gcomm: connecting to group 'bfm_cluster', peer '10.1.1.141:,10.1.1.142:'
-May  7 19:45:33 ns514282 mysqld: 150507 19:45:33 [Warning] WSREP: (66b559a2, 'tcp://0.0.0.0:4567') address 'tcp://10.1.1.142:4567' points to own listening address, blacklisting
-May  7 19:45:33 ns514282 mysqld: 150507 19:45:33 [Note] WSREP: (66b559a2, 'tcp://0.0.0.0:4567') address 'tcp://10.1.1.142:4567' pointing to uuid 66b559a2 is blacklisted, skipping
-May  7 19:45:33 ns514282 mysqld: 150507 19:45:33 [Note] WSREP: (66b559a2, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: 
-May  7 19:45:34 ns514282 mysqld: 150507 19:45:34 [Note] WSREP: declaring dc2b490d at tcp://10.1.1.141:4567 stable
-May  7 19:45:34 ns514282 mysqld: 150507 19:45:34 [Note] WSREP: re-bootstrapping prim from partitioned components
-May  7 19:45:34 ns514282 mysqld: 150507 19:45:34 [Note] WSREP: view(view_id(PRIM,66b559a2,12) memb {
-May  7 19:45:34 ns514282 mysqld: #01166b559a2,0
-May  7 19:45:34 ns514282 mysqld: #011dc2b490d,0
-May  7 19:45:34 ns514282 mysqld: } joined {
-May  7 19:45:34 ns514282 mysqld: } left {
-May  7 19:45:34 ns514282 mysqld: } partitioned {
-May  7 19:45:34 ns514282 mysqld: })
-May  7 19:45:34 ns514282 mysqld: 150507 19:45:34 [Note] WSREP: save pc into disk
-May  7 19:45:34 ns514282 mysqld: 150507 19:45:34 [Note] WSREP: clear restored view
-May  7 19:45:34 ns514282 mysqld: 150507 19:45:34 [Note] WSREP: gcomm: connected
-May  7 19:45:34 ns514282 mysqld: 150507 19:45:34 [Note] WSREP: Changing maximum packet size to 64500, resulting msg size: 32636
-May  7 19:45:34 ns514282 mysqld: 150507 19:45:34 [Note] WSREP: Shifting CLOSED -> OPEN (TO: 0)
-May  7 19:45:34 ns514282 mysqld: 150507 19:45:34 [Note] WSREP: Opened channel 'bfm_cluster'
-May  7 19:45:34 ns514282 mysqld: 150507 19:45:34 [Note] WSREP: Waiting for SST to complete.
-May  7 19:45:34 ns514282 mysqld: 150507 19:45:34 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 0, memb_num = 2
-May  7 19:45:34 ns514282 mysqld: 150507 19:45:34 [Note] WSREP: STATE_EXCHANGE: sent state UUID: 279db665-f513-11e4-9149-aa318d13ebc4
-May  7 19:45:34 ns514282 mysqld: 150507 19:45:34 [Note] WSREP: STATE EXCHANGE: sent state msg: 279db665-f513-11e4-9149-aa318d13ebc4
-May  7 19:45:34 ns514282 mysqld: 150507 19:45:34 [Note] WSREP: STATE EXCHANGE: got state msg: 279db665-f513-11e4-9149-aa318d13ebc4 from 0 (sql1)
-May  7 19:45:34 ns514282 mysqld: 150507 19:45:34 [Note] WSREP: STATE EXCHANGE: got state msg: 279db665-f513-11e4-9149-aa318d13ebc4 from 1 (sql3)
-May  7 19:45:34 ns514282 mysqld: 150507 19:45:34 [Warning] WSREP: Quorum: No node with complete state:
-May  7 19:45:34 ns514282 mysqld: 
-May  7 19:45:34 ns514282 mysqld: 
-May  7 19:45:34 ns514282 mysqld: #011Version      : 3
-May  7 19:45:34 ns514282 mysqld: #011Flags        : 0x1
-May  7 19:45:34 ns514282 mysqld: #011Protocols    : 0 / 7 / 3
-May  7 19:45:34 ns514282 mysqld: #011State        : NON-PRIMARY
-May  7 19:45:34 ns514282 mysqld: #011Prim state   : NON-PRIMARY
-May  7 19:45:34 ns514282 mysqld: #011Prim UUID    : 00000000-0000-0000-0000-000000000000
-May  7 19:45:34 ns514282 mysqld: #011Prim  seqno  : -1
-May  7 19:45:34 ns514282 mysqld: #011First seqno  : -1
-May  7 19:45:34 ns514282 mysqld: #011Last  seqno  : -1
-May  7 19:45:34 ns514282 mysqld: #011Prim JOINED  : 0
-May  7 19:45:34 ns514282 mysqld: #011State UUID   : 279db665-f513-11e4-9149-aa318d13ebc4
-May  7 19:45:34 ns514282 mysqld: #011Group UUID   : 00000000-0000-0000-0000-000000000000
-May  7 19:45:34 ns514282 mysqld: #011Name         : 'sql1'
-May  7 19:45:34 ns514282 mysqld: #011Incoming addr: '10.1.1.142:3306'
-May  7 19:45:34 ns514282 mysqld: 
-May  7 19:45:34 ns514282 mysqld: #011Version      : 3
-May  7 19:45:34 ns514282 mysqld: #011Flags        : 0x2
-May  7 19:45:34 ns514282 mysqld: #011Protocols    : 0 / 7 / 3
-May  7 19:45:34 ns514282 mysqld: #011State        : NON-PRIMARY
-May  7 19:45:34 ns514282 mysqld: #011Prim state   : SYNCED
-May  7 19:45:34 ns514282 mysqld: #011Prim UUID    : b65a0277-f50f-11e4-a916-dbeff5b65a2e
-May  7 19:45:34 ns514282 mysqld: #011Prim  seqno  : 8
-May  7 19:45:34 ns514282 mysqld: #011First seqno  : -1
-May  7 19:45:34 ns514282 mysqld: #011Last  seqno  : 0
-May  7 19:45:34 ns514282 mysqld: #011Prim JOINED  : 1
-May  7 19:45:34 ns514282 mysqld: #011State UUID   : 279db665-f513-11e4-9149-aa318d13ebc4
-May  7 19:45:34 ns514282 mysqld: #011Group UUID   : dc2be55b-f506-11e4-8748-4bd7f3fc795c
-May  7 19:45:34 ns514282 mysqld: #011Name         : 'sql3'
-May  7 19:45:34 ns514282 mysqld: #011Incoming addr: '10.1.1.141:3306'
-May  7 19:45:34 ns514282 mysqld: 
-May  7 19:45:34 ns514282 mysqld: 150507 19:45:34 [Note] WSREP: Full re-merge of primary b65a0277-f50f-11e4-a916-dbeff5b65a2e found: 1 of 1.
-May  7 19:45:34 ns514282 mysqld: 150507 19:45:34 [Note] WSREP: Quorum results:
-May  7 19:45:34 ns514282 mysqld: #011version    = 3,
-May  7 19:45:34 ns514282 mysqld: #011component  = PRIMARY,
-May  7 19:45:34 ns514282 mysqld: #011conf_id    = 8,
-May  7 19:45:34 ns514282 mysqld: #011members    = 1/2 (joined/total),
-May  7 19:45:34 ns514282 mysqld: #011act_id     = 0,
-May  7 19:45:34 ns514282 mysqld: #011last_appl. = -1,
-May  7 19:45:34 ns514282 mysqld: #011protocols  = 0/7/3 (gcs/repl/appl),
-May  7 19:45:34 ns514282 mysqld: #011group UUID = dc2be55b-f506-11e4-8748-4bd7f3fc795c
-May  7 19:45:34 ns514282 mysqld: 150507 19:45:34 [Note] WSREP: Flow-control interval: [23, 23]
-May  7 19:45:34 ns514282 mysqld: 150507 19:45:34 [Note] WSREP: Shifting OPEN -> PRIMARY (TO: 0)
-May  7 19:45:34 ns514282 mysqld: 150507 19:45:34 [Note] WSREP: State transfer required: 
-May  7 19:45:34 ns514282 mysqld: #011Group state: dc2be55b-f506-11e4-8748-4bd7f3fc795c:0
-May  7 19:45:34 ns514282 mysqld: #011Local state: 00000000-0000-0000-0000-000000000000:-1
-May  7 19:45:34 ns514282 mysqld: 150507 19:45:34 [Note] WSREP: New cluster view: global state: dc2be55b-f506-11e4-8748-4bd7f3fc795c:0, view# 9: Primary, number of nodes: 2, my index: 0, protocol version 3
-May  7 19:45:34 ns514282 mysqld: 150507 19:45:34 [Warning] WSREP: Gap in state sequence. Need state transfer.
-May  7 19:45:34 ns514282 mysqld: 150507 19:45:34 [Note] WSREP: Running: 'wsrep_sst_rsync --role 'joiner' --address '10.1.1.142' --auth 'cluster:password' --datadir '/var/lib/mysql/' --defaults-file '/etc/mysql/my.cnf' --parent '12278' --binlog '/var/log/mysql/mariadb-bin' '
-May  7 19:45:34 ns514282 rsyncd[12428]: rsyncd version 3.0.9 starting, listening on port 4444
-May  7 19:45:37 ns514282 mysqld: 150507 19:45:37 [Note] WSREP: (66b559a2, 'tcp://0.0.0.0:4567') turning message relay requesting off
-May  7 19:45:47 ns514282 /usr/sbin/irqbalance: Load average increasing, re-enabling all cpus for irq balancing
-May  7 19:45:57 ns514282 /usr/sbin/irqbalance: Load average increasing, re-enabling all cpus for irq balancing
-May  7 19:46:02 ns514282 /USR/SBIN/CRON[16491]: (root) CMD (/usr/local/rtm/bin/rtm 50 > /dev/null 2> /dev/null)
-May  7 19:46:03 ns514282 /etc/init.d/mysql[16711]: 0 processes alive and '/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf ping' resulted in
-May  7 19:46:03 ns514282 /etc/init.d/mysql[16711]: #007/usr/bin/mysqladmin: connect to server at 'localhost' failed
-May  7 19:46:03 ns514282 /etc/init.d/mysql[16711]: error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111 ""Connection refused"")'
-May  7 19:46:03 ns514282 /etc/init.d/mysql[16711]: Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!
-
-This question has countless threads all over the internet. Some with no answers. Some of the ones that do have answers
-ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111) - my disk space is not full
-Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' - no answer. But as per comments, the mysql.sock does exist and has mysql.mysql ownership.
-ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) - server is installed, again socket is present in the right location
-I have also read that this might be a permissions issue on /var/run/mysqld, but I have checked this and gave it mysql.mysql ownership.
-If nothing else, this is an attempt to revive this issue. Any direction is really appreciated. 
-Thank you, 
-Update:  my.cnf for both nodes. It is the default my.cnf. The only change is commenting out the bind-address=127.0.0.1 line.
-[client]
-port        = 3306
-socket      = /var/run/mysqld/mysqld.sock
-
-[mysqld_safe]
-socket      = /var/run/mysqld/mysqld.sock
-nice        = 0
-
-[mysqld]
-user        = mysql
-pid-file    = /var/run/mysqld/mysqld.pid
-socket      = /var/run/mysqld/mysqld.sock
-port        = 3306
-basedir     = /usr
-datadir     = /var/lib/mysql
-tmpdir      = /tmp
-lc_messages_dir = /usr/share/mysql
-lc_messages = en_US
-skip-external-locking
-
-# bind-address      = 127.0.0.1
-
-max_connections     = 100
-connect_timeout     = 5
-wait_timeout        = 600
-max_allowed_packet  = 16M
-thread_cache_size       = 128
-sort_buffer_size    = 4M
-bulk_insert_buffer_size = 16M
-tmp_table_size      = 32M
-max_heap_table_size = 32M
-
-myisam_recover          = BACKUP
-key_buffer_size     = 128M
-table_open_cache    = 400
-myisam_sort_buffer_size = 512M
-concurrent_insert   = 2
-read_buffer_size    = 2M
-read_rnd_buffer_size    = 1M
-
-query_cache_limit       = 128K
-query_cache_size        = 64M
-
-log_warnings        = 2
-
-slow_query_log_file = /var/log/mysql/mariadb-slow.log
-long_query_time = 10
-log_slow_verbosity  = query_plan
-
-log_bin         = /var/log/mysql/mariadb-bin
-log_bin_index       = /var/log/mysql/mariadb-bin.index
-expire_logs_days    = 10
-max_binlog_size         = 100M
-
-default_storage_engine  = InnoDB
-
-innodb_buffer_pool_size = 256M
-innodb_log_buffer_size  = 8M
-innodb_file_per_table   = 1
-innodb_open_files   = 400
-innodb_io_capacity  = 400
-innodb_flush_method = O_DIRECT
-
-[mysqldump]
-quick
-quote-names
-max_allowed_packet  = 16M
-
-[mysql]
-
-[isamchk]
-key_buffer      = 16M
-
-!includedir /etc/mysql/conf.d/
-
-Update
-Also, I tested, and if I attempt to start the node regularly on its own (without the cluster, with no extra settings, just defaults), it works.
-","1. I recommend restarting the server.
-if you keep having problems ...
-backup, uninstall and install again ...
-with me it worked like this ...
-Verify that all IPS used are BIND compliant
-
-2. Is the ""wsrep_node_name"" correct?
-[mysqld]
-#mysql settings
-bind-address=10.1.1.140
-query_cache_size=0
-query_cache_type=0
-binlog_format=ROW 
-default_storage_engine=innodb 
-innodb_autoinc_lock_mode=2 
-innodb_doublewrite=1
-
-#galery settings
-wsrep_provider=/usr/lib/galera/libgalera_smm.so
-wsrep_cluster_address=""gcomm://10.1.1.139,10.1.1.140""
-wsrep_sst_method=rsync 
-wsrep_cluster_name=""sql_cluster"" 
-wsrep_node_incoming_address=10.1.1.140 
-wsrep_sst_receive_address=10.1.1.140 
-wsrep_sst_auth=cluster:password 
-wsrep_node_address='10.1.1.140' 
-wsrep_node_name='sql1' <== ???
-wsrep_on=ON
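-
-Both node configs in the question set wsrep_node_name='sql1'; each Galera node needs a unique wsrep_node_name. A minimal sketch of the corrected line for the second node (the name 'sql2' is only an assumed example):
-# must be unique per node (assumed name)
-wsrep_node_name='sql2'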
-
-",MariaDB
-"I'm coding a WHERE statement with OR in it. I'd like to get field that indicates me which OR is matched, without using different SQL query. So, for example, if I'm matching the title get a field with the value to true or, get back a field with title as the value.
-Stack: PHP, Mariadb
-Here the query I'm testing:
-SELECT * 
-FROM books 
-WHERE title LIKE :s OR 
-      description LIKE :s OR 
-      author LIKE :s .... 
-
-","1. The value of a search condition can be evaluated within the SELECT clause using a case statement.
-SELECT *,
-   CASE 
-    WHEN title LIKE '%:s%' THEN CONCAT('title ', title)
-    WHEN description LIKE '%:s%' THEN CONCAT('description ', description)
-    WHEN author LIKE '%:s%' THEN CONCAT('author ', author)
-    END as MatchingValue
-FROM books 
-WHERE title LIKE '%:s%' OR 
-      description LIKE '%:s%' OR 
-      author LIKE '%:s%'  
-
-fiddle
-
-
-
-id | title | description       | author | MatchingValue
-1  | Foo   | Foo goe:s to town | Bob    | description Foo goe:s to town
-2  | Bar:s | Bar gets killed   | Ann    | title Bar:s
-3  | Roo   | Roo gets maried   | :sam   | author :sam
-
-2. Use case expressions to find each matching column:
-SELECT *,
-   CONCAT_WS(', ',
-             CASE WHEN title LIKE '%:s%' THEN 'title' END,
-             CASE WHEN description LIKE '%:s%' THEN 'description' END,
-             CASE WHEN author LIKE '%:s%' THEN 'author' END) matches
-FROM books 
-WHERE title LIKE '%:s%' OR 
-      description LIKE '%:s%' OR 
-      author LIKE '%:s%'  
-
-Demo: https://dbfiddle.uk/Jly-gX6z (Using @Bart's fiddle as base, thanks!)
-",MariaDB
-"I'm trying to execute a mongodb aggregation using @nestjs/mongoose and I'm passing the following query:
-$match: {
-  organizationId: new ObjectId(organizationId),
-}
-
-When I pass the organizationId as a plain string I get no results, and when I pass an instance of ObjectId (imported from the bson / mongodb / mongoose packages) I get the following error:
-{
-  ""type"": ""TypeError"",
-  ""message"": ""value.serializeInto is not a function"",
-  ""stack"":
-  TypeError: value.serializeInto is not a function
-  at serializeObjectId (/packages/database/node_modules/mongodb/node_modules/bson/src/parser/serializer.ts:240:18)
-  at serializeInto (/packages/database/node_modules/mongodb/node_modules/bson/src/parser/serializer.ts:892:17)
-  at serializeObject (/packages/database/node_modules/mongodb/node_modules/bson/src/parser/serializer.ts:295:20)
-  at serializeInto (/packages/database/node_modules/mongodb/node_modules/bson/src/parser/serializer.ts:875:17)
-  at serializeObject (/packages/database/node_modules/mongodb/node_modules/bson/src/parser/serializer.ts:295:20)
-  at serializeInto (/packages/database/node_modules/mongodb/node_modules/bson/src/parser/serializer.ts:655:17)
-  at serializeObject (/packages/database/node_modules/mongodb/node_modules/bson/src/parser/serializer.ts:295:20)
-  at serializeInto (/database/node_modules/mongodb/node_modules/bson/src/parser/serializer.ts:875:17)
-  at Object.serialize (/packages/database/node_modules/mongodb/node_modules/bson/src/bson.ts:108:30)
-  at OpMsgRequest.serializeBson (/packages/database/node_modules/mongodb/src/cmap/commands.ts:572:17)
-}
-
-It's happening when using the find method as well.
-Has anyone encountered this issue? Is there a way to simply query the database to find documents that match an ObjectId field (not _id)?
-I tried to debug it with breakpoints; it seems that serializeInto is not an ObjectId method, so why do bson and mongoose use it?
-","1. Updating mongoose and mongodb to latest worked for me.
-",MongoDB
-"I am trying to create indexes for a nested mongo db timeseries collection.
-My mongo db version, obtained by running mongod --version, is v3.6.8.
-The timeseries schema follows the suggested one.
-My collection has a schema like:
-validator = {
-    ""$jsonSchema"": {
-        ""bsonType"": ""object"",
-        ""required"": [""timestamp"", ""metadata"", ""measurements""],
-        ""properties"": {
-            ""timestamp"": {
-                ""bsonType"": ""long"",
-            },
-            ""metadata"": {
-                ""bsonType"": ""object"",
-                ""required"": [""type"", ""sensor_id""],
-                ""properties"": {
-                    ""type"": {
-                        ""bsonType"": ""string"",
-                        ""description"": ""Measurement type""
-                    },
-                    ""sensor_id"": {
-                        ""bsonType"": ""string"",
-                        ""description"": ""sensor id""
-                        }
-                }
-            },
-            ""measurement"": {
-                ""bsonType"": ""array"",
-                ""description"": ""must be an array and is required"",
-                ""items"": {
-                    ""bsonType"": ""double"",
-                    ""description"": ""must be array of float and is required""
-                },
-                ""minItems"": 3,
-                ""maxItems"": 3,
-            },
-        }
-    }
-}
-
-When using MongoDB Compass to access the database, the Index page shows this message in red: Unrecognized expression '$toDouble()':
-
-I thought this happens because I have not defined any index yet. So in PyMongo, I try to create indexes on the nested fields type and sensor_id with this line:
-mydb.mycollection.create_index(
-    [
-        (""attrs.nested.sensor_id"", pymongo.ASCENDING), 
-        (""attrs.nested.type"", pymongo.ASCENDING)
-    ])
-
-But MongoDB Compass keeps showing the error:
-
-1) How do I solve this MongoDB Compass error?
-
-Furthermore, I am not sure the indexes are correctly defined, because if I create a fake index like:
-mydb.mycollection.create_index(
-    [
-        (""attrs.nested.unexisting_field"", pymongo.ASCENDING),
-    ])
-
-no error is generated although the specified field does not exist. 2) Is there a way to check that the index is correctly defined?
-","1. I think, the problem is:
-
-My mongo db version, obtained by running mongod --version, is v3.6.8.
-
-MongoDB has natively supported time series collections only since version 5.0.
-The $toDouble operator is supported since version 4.0 (https://www.mongodb.com/docs/v4.0/reference/operator/aggregation/toDouble/)
-",MongoDB
-"I am currently working on a price search engine with NodeJS and Mongoose and have a question about the implementation of a price matrix.
-Each product has an icon, a type symbol, a width, a height and a price. For example, I have a product with the symbol: T5, type symbol: F, width: 900, height: 210 and price: 322.
-The price matrix contains consecutive values as follows:
-
-Width: 1000, Height: 210, Price: 345
-Width: 1200, height: 210, price: 398
-Width: 1400, Height: 210, Price: 449
-
-My problem is that when searching for the next price value, my code selects the next but one value if the value searched for is between two existing values. For example, for a width of 1160, the value for 1400 is selected directly instead of 1200.
-Here is my code:
-const symbol = findProfil.symbol;
-const typsymbol = product.symbol;
-
-const findPrice = await profilSubPriceModel.aggregate([
-    {
-        $match: {
-            symbol: symbol,
-            typSymbol: typsymbol
-        }
-    },
-    {
-        $addFields: {
-            wide: { $toInt: ""$wide"" }, // Konvertieren von wide von String zu Int
-            height: { $toInt: ""$height"" } // Konvertieren von height von String zu Int
-        }
-    },
-    {
-        $addFields: {
-            wideDiff: { $abs: { $subtract: [""$wide"", inputWidth + 65]}},
-            heightDiff: { $abs: {$subtract: [""$height"", inputHeight + 45]}}
-        }
-    },
-    {
-        $addFields: {
-            validWidth: { $ceil: { $divide: [{ $toInt: ""$wide"" }, 100] } },
-            validHeight: { $ceil: { $divide: [{ $toInt: ""$height"" }, 100] } }
-        }
-    },
-    {
-        $addFields: {
-            totalDiff: { $add: [""$wideDiff"", ""$heightDiff""] }
-        }
-    },
-    {
-        $sort: { totalDiff: 1 }
-    },
-    {
-        $limit: 3
-    }
-]);
-
-How can I change my code so that the correct next value in the matrix is selected instead of the next but one value?
-I keep getting the same value.
-if (findPrice.length === 0) {
-            const findLargerPrice = await profilSubPriceModel.aggregate([
-                {
-                    $match: {
-                        symbol: symbol,
-                        typSymbol: typsymbol
-                    }
-                },
-                {
-                    $addFields: {
-                        wide: { $toInt: ""$wide"" },
-                        height: { $toInt: ""$height"" }
-                    }
-                },
-                {
-                    $addFields: {
-                        wideDiff: { $subtract: [""$wide"", inputWidth] },
-                        heightDiff: { $subtract: [""$height"", inputHeight] }
-                    }
-                },
-                {
-                    $match: {
-                        wideDiff: { $gte: 0 },
-                        heightDiff: { $gte: 0 }
-                    }
-                },
-                {
-                    $addFields: {
-                        totalDiff: { $add: [""$wideDiff"", ""$heightDiff""] }
-                    }
-                },
-                {
-                    $sort: { totalDiff: 1 }
-                },
-                {
-                    $limit: 3
-                }
-            ]);
-
-            if (findLargerPrice.length > 0) {
-                findPrice.push(...findLargerPrice);
-            }
-        }
-
-","1. I see couple of issues in your searching logic as follows:
-
-validWidth and validHeight  fields were not contributing to finding the closest match then why did you added that.
-
-The totalDiff calculation is also incorrect, this calculation is not focusing on the actual closest match due to additional transformations on widths and heights.
-
-Missing filtering of Negative differences.
-
-Didn't understood why you're using a limit of 3 since using $limit: 3 instead of $limit: 1 allowed multiple results to be included potentially including non-closest values.
-
-
-Changes that I have made on your code:
-
-Remove the unnecessary fields, i.e. validWidth and validHeight (that is, remove this part of the code, shown below):
-
-{
-    $addFields: {
-        validWidth: { $ceil: { $divide: [{ $toInt: ""$wide"" }, 100] } },
-        validHeight: { $ceil: { $divide: [{ $toInt: ""$height"" }, 100] } }
-    }
-}
-
-
-Include a $match stage to filter out negative differences by ensuring wideDiff and heightDiff are greater than or equal to 0.
-
-{
-    $match: {
-        wideDiff: { $gte: 0 },
-        heightDiff: { $gte: 0 }
-    }
-}
-
-
-Changed $limit to 1 in both the main query and the fallback query.
-
-{
-    $limit: 1
-}
-
-Here is the complete updated code with the above changes. Give it a try and let me know whether or not it works.
-const symbol = findProfil.symbol;
-const typsymbol = product.symbol;
-
-const findPrice = await profilSubPriceModel.aggregate([
-    {
-        $match: {
-            symbol: symbol,
-            typSymbol: typsymbol
-        }
-    },
-    {
-        $addFields: {
-            wide: { $toInt: ""$wide"" },
-            height: { $toInt: ""$height"" }
-        }
-    },
-    {
-        $addFields: {
-            wideDiff: { $subtract: [""$wide"", inputWidth] },
-            heightDiff: { $subtract: [""$height"", inputHeight] }
-        }
-    },
-    {
-        $match: {
-            wideDiff: { $gte: 0 },
-            heightDiff: { $gte: 0 }
-        }
-    },
-    {
-        $addFields: {
-            totalDiff: { $add: [""$wideDiff"", ""$heightDiff""] }
-        }
-    },
-    {
-        $sort: { totalDiff: 1 }
-    },
-    {
-        $limit: 1
-    }
-]);
-
-if (findPrice.length === 0) {
-    const findLargerPrice = await profilSubPriceModel.aggregate([
-        {
-            $match: {
-                symbol: symbol,
-                typSymbol: typsymbol
-            }
-        },
-        {
-            $addFields: {
-                wide: { $toInt: ""$wide"" },
-                height: { $toInt: ""$height"" }
-            }
-        },
-        {
-            $addFields: {
-                wideDiff: { $subtract: [""$wide"", inputWidth] },
-                heightDiff: { $subtract: [""$height"", inputHeight] }
-            }
-        },
-        {
-            $match: {
-                wideDiff: { $gte: 0 },
-                heightDiff: { $gte: 0 }
-            }
-        },
-        {
-            $addFields: {
-                totalDiff: { $add: [""$wideDiff"", ""$heightDiff""] }
-            }
-        },
-        {
-            $sort: { totalDiff: 1 }
-        },
-        {
-            $limit: 1
-        }
-    ]);
-
-    if (findLargerPrice.length > 0) {
-        findPrice.push(...findLargerPrice);
-    }
-}
-
-",MongoDB
-"I recently learned MongoDB and connected it to Node.js, but I encountered an error that I am unable to resolve. I am using VS Code, and a pop-up similar to the following appeared:
-""""The connection was rejected. Either the requested service isn’t running on the requested server/port, the proxy settings in vscode are misconfigured, or a firewall is blocking requests. Details: RequestError: connect ECONNREFUSED 127.0.0.1:3004.""""
-
-Route.get() requires a callback function but got a [object Object]
-
-My index.js
-
-const router = require('./routes/bookRoute')
-
-// connect DB
-databaseConnect();
-
-app.use(cors());
-app.use(express.json())
-app.use(router)
-
-
-app.listen(PORT, ()=>{
-    console.log('Server is running...')
-})
-
-my controller.js
-const Book = require('../model/bookSchema');
-
-// get all books
-const getBooks = async(req, res) => {
-    ...
-}
-
-// get specific book
-const getBookById = async(req, res) => {
-  ...
-}
-
-// make new book
-const saveBook =  async(req,res) => {
-   ..
-}
-module.exports = {getBooks, getBookById, saveBook}
-
-my route.js
-
-router.get('/books', getBooks)
-router.get('/books/:id', getBookById)
-router.post('/books', saveBook)
-
-module.exports = router
-
-I want to be able to connect Node.js and MongoDB and do CRUD operations.
-","1. I also got a similar type of error, then i used the 2.2.12 or later version of node js
-",MongoDB
-"Problem is: when I try to generate lighthouse report for my project, when the home page reloads, it gives response 'Not Found'
-I have hosted my website on render.com
-
-Whole code link: https://github.com/manastelavane/RecipeNew
-Only client side code: https://github.com/manastelavane/RecipeClient
-Only server side code: https://github.com/manastelavane/RecipeServer
-Only chat-server code: https://github.com/manastelavane/RecipeChatServer
-(chat server is not involved on home page, so you can ignore chat server)
-
-I used the MongoDB, Express, React-Redux, Node.js stack for the project.
-Also, note that after loading the home page ('/'), the user is automatically redirected to '/card?category=All&page=1'.
-",,MongoDB
-"I'm having trouble constructing the correct query for two levels of nested collections -
-The data shape as JSON...
-[
-    {
-        ""name"": ""My Super Region"",
-        ""regions"": [
-            {
-                ""name"": ""My Region"",
-                ""locations"": [
-                    {
-                        ""name"": ""My Location""
-                    }
-                ]
-            }
-        ]
-    }
-]
-
-I looked at the intermediate Result from my jOOQ query and the shape of the data seemed alright to me. However, I'm having trouble successfully mapping that result into a Kotlin data class.
-There's only so much that can be gleaned from the stack trace for something like this, so I'm having difficulty troubleshooting my cast errors. I'd appreciate any help sorting out what I'm missing!
-fun getRegions(): List<SuperRegionRecord> {
-    return DSL.using(dataSource, SQLDialect.MYSQL)
-        .select(
-            SUPER_REGIONS.ID,
-            SUPER_REGIONS.NAME,
-            multiset(
-                select(
-                    REGIONS.ID,
-                    REGIONS.NAME,
-                    multiset(
-                        select(
-                            LOCATIONS.ID,
-                            LOCATIONS.NAME,
-                        )
-                        .from(LOCATIONS)
-                        .where(LOCATIONS.REGION_ID.eq(REGIONS.ID))
-                    ).`as`(""locations"")
-                )
-                .from(REGIONS)
-                .where(REGIONS.SUPER_REGION_ID.eq(SUPER_REGIONS.ID))
-            ).`as`(""regions""),
-        )
-        .from(SUPER_REGIONS)
-        .fetchInto(SuperRegionRecord::class.java)
-}
-
-data class SuperRegionRecord(
-    val id: Int?,
-    val name: String?,
-    val regions: List<RegionRecord>?
-)
-
-data class RegionRecord(
-    val id: Int?,
-    val name: String?,
-    val locations: List<LocationRecord>?
-)
-
-data class LocationRecord(
-    val id: Int?,
-    val name: String?
-)
-
-Error:
-java.lang.ClassCastException: class org.jooq.impl.RecordImpl3 cannot be cast to class com.abcxyz.repository.LocationsRepository$RegionRecord (org.jooq.impl.RecordImpl3 is in unnamed module of loader 'app'; com.abcxyz.LocationsRepository$RegionRecord is in unnamed module of loader io.ktor.server.engine.OverridingClassLoader$ChildURLClassLoader @f68f0dc)
-","1. Your generics (e.g. List<LocationRecord>) are being erased by the compiler, and jOOQ's runtime can't detect what your intention was when you were writing this query and assuming the DefaultRecordMapper will figure out how to map between a Result<Record3<...>> and a List (rawtype). While kotlin offers a bit more type information via its own reflection APIs, the DefaultRecordMapper isn't using that (and I'm not sure if it would be possible, still).
-Instead of using reflection to map your data, why not use ad-hoc conversion?
-fun getRegions(): List<SuperRegionRecord> {
-    return DSL.using(dataSource, SQLDialect.MYSQL)
-        .select(
-            SUPER_REGIONS.ID,
-            SUPER_REGIONS.NAME,
-            multiset(
-                select(
-                    REGIONS.ID,
-                    REGIONS.NAME,
-                    multiset(
-                        select(
-                            LOCATIONS.ID,
-                            LOCATIONS.NAME,
-                        )
-                        .from(LOCATIONS)
-                        .where(LOCATIONS.REGION_ID.eq(REGIONS.ID))
-                    ).mapping(::LocationRecord)
-                )
-                .from(REGIONS)
-                .where(REGIONS.SUPER_REGION_ID.eq(SUPER_REGIONS.ID))
-            ).mapping(::RegionRecord),
-        )
-        .from(SUPER_REGIONS)
-        .fetch(mapping(::SuperRegionrecord)) // 1)
-}
-
-Where:
-
-is org.jooq.Records.mapping
-is org.jooq.Field<org.jooq.Result<org.jooq.Record3<T1, T2, T3>>>.mapping or similar, from the jooq-kotlin extensions module
-
-",MySQL
-"The transaction isn't rollback when the RuntimeException thrown from deposit method which is calling by transfer method. The withdraw method is ran before deposit and will update the source account balance. However, it will not be rollback even deposit() is encountered error.
-I am using @Service and @Transactional(rollbackFor = Exception.class) to annotate the service implementation class
-Deposit Method:
-protected Account depositTxn(long accId, BigDecimal amount, boolean isTransfer) throws InterruptedException {
-        Account savedAcc = null;
-        try {
-            Account acc= addBalance(accId, amount);
-            savedAcc = this.update(acc);
-        } catch (ObjectOptimisticLockingFailureException e){
-            for(int i = 1; i <= MAX_RETRY; i++) {
-                logger.info(""Retring for ""+ i + "" time(s)"");
-                savedAcc = this.update(addBalance(accId, amount));
-                if(null != savedAcc) {
-                    break;
-                }
-            }
-            if(null == savedAcc) {
-                throw new RuntimeException(""Hit Maximun Retry Count"");
-            }
-        } catch(Exception e) {
-            throw e;
-        }
-        return savedAcc;
-    }
-
-Transfer Method:
-    public Account transfer(long srcAccId, long toAccId, BigDecimal amount, boolean isMultithread) throws Exception {
-            BankingCmoCallable callable = getBankingCmoCallable(this, ""transferTxn"", srcAccId, toAccId, amount, isMultithread);
-            callable.prepare();
-        Account savedAcc = (Account) taskExecutor.submit(callable).get();
-        
-        return savedAcc;
-    }
-
-protected Account transferTxn(long srcAccId, long toAccId, BigDecimal amount, boolean isMultithread) throws Exception {
-        Account savedAcc = withdraw(srcAccId, amount, true, isMultithread);
-                //Error thrown from this method
-        deposit(toAccId, amount, true, isMultithread);
-        
-        addTxnH(savedAcc.getAccId(), toAccId, amount, TXN_T_T);
-
-        return savedAcc;
-    }
-
-I have tried to put @Transactional as a method-level annotation but it is not working. I am using the correct import: org.springframework.transaction.annotation.Transactional.
-","1. try this
-@Transactional(rollbackFor = Exception.class)
-protected Account depositTxn(long accId, BigDecimal amount, boolean isTransfer) throws InterruptedException {
-    // the  implementation
-}
-
-@Transactional(rollbackFor = Exception.class)
-protected Account withdraw(long accId, BigDecimal amount, boolean isTransfer, boolean isMultithread) {
-    // the implementation
-}
-
-Method-level @Transactional: adding @Transactional at the method level for each transactional method (withdraw(), deposit(), transfer()) may solve the problem, especially by ensuring clear demarcation of the transaction boundaries. This also helps in cases where the methods are invoked independently.
-Your import is correct:
-import org.springframework.transaction.annotation.Transactional;
-When you use a bare @Transactional, it uses the default settings provided by the Spring framework for propagation, isolation, and rollback.
-REQUIRED is the default propagation. Spring checks if there is an active transaction, and if nothing exists, it creates a new one. Otherwise, the business logic appends to the currently active transaction.
-The default isolation level is DEFAULT. As a result, when Spring creates a new transaction, the isolation level will be the default isolation of our RDBMS.
-Rollback: By default, Spring's transaction infrastructure only marks a transaction for rollback on unchecked exceptions (runtime exceptions and errors). Checked exceptions, on the other hand, do not result in a rollback unless explicitly specified.
-You can read more about this annotation in Transaction Propagation and Isolation in Spring @Transactional.
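-
-For reference, here is a minimal sketch that spells those defaults out explicitly on a service method (the class and method are placeholders, not taken from the question):
-import java.math.BigDecimal;
-import org.springframework.stereotype.Service;
-import org.springframework.transaction.annotation.Isolation;
-import org.springframework.transaction.annotation.Propagation;
-import org.springframework.transaction.annotation.Transactional;
-
-@Service
-public class TransferService {
-
-    // Same effective behaviour as a bare @Transactional(rollbackFor = Exception.class),
-    // with propagation and isolation written out for clarity.
-    @Transactional(propagation = Propagation.REQUIRED,
-                   isolation = Isolation.DEFAULT,
-                   rollbackFor = Exception.class)
-    public void transfer(long srcAccId, long toAccId, BigDecimal amount) {
-        // withdraw and deposit logic would run inside this single transaction,
-        // so a RuntimeException escaping this method marks the whole transaction for rollback.
-    }
-}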
-",MySQL
-"I want to implement this neo4j query in java Cypher-DSL:
-WITH ""Person"" as label
-CALL apoc.cypher.run(""MATCH (n) WHERE n:""+ label + "" RETURN n"",{})
-YIELD value
-RETURN value
-
-But Cypher does not allow using a variable as a label:
-What is the proper usage of apoc.cypher.run with params?
-How can I implement it using Cypher-DSL:
-Cypher.with( Cypher.literalOf(""Person"").as(""label""))
-.call(
-   ??,
-   Cypher.mapOf()
-)
-.yield(""value"")
-.returning(""value"")
-.build();
-
-","1. You can just do something like the Listing 4 example in the Cypher-DSL doc.
-For example:
-var people = Cypher.node(""Person"").named(""people"");
-var statement = Cypher.match(people) 
-    .returning(people)
-    .build();
-
-Another useful example shows how you can fetch Person nodes using Cypher-DSL and the Neo4j Java-Driver.
-For example:
-var people = Cypher.node(""Person"").named(""people"");
-var statement = ExecutableStatement.of(
-        Cypher.match(people)
-            .returning(people)
-            .build());
-
-try (var session = driver.session()) { 
-    var peopleList = session.executeRead(statement::fetchWith);
-
-    // Do something with peopleList
-    ...
-}
-
-",Neo4j
-"[PROBLEM - My final solution below]
-I'd like to import a JSON file containing my data into Neo4j.
-However, it is super slow.
-The JSON file is structured as follows:
-{
-    ""graph"": {
-        ""nodes"": [
-            { ""id"": 3510982, ""labels"": [""XXX""], ""properties"": { ... } },
-            { ""id"": 3510983, ""labels"": [""XYY""], ""properties"": { ... } },
-            { ""id"": 3510984, ""labels"": [""XZZ""], ""properties"": { ... } },
-     ...
-        ],
-        ""relationships"": [
-            { ""type"": ""bla"", ""startNode"": 3510983, ""endNode"": 3510982, ""properties"": {} },
-            { ""type"": ""bla"", ""startNode"": 3510984, ""endNode"": 3510982, ""properties"": {} },
-    ....
-        ]
-    }
-}
-
-It is similar to the one proposed here: How can I restore data from a previous result in the browser?.
-By looking at the answer, I discovered that I can use
-CALL apoc.load.json(""file:///test.json"") YIELD value AS row
-WITH row, row.graph.nodes AS nodes
-UNWIND nodes AS node
-CALL apoc.create.node(node.labels, node.properties) YIELD node AS n
-SET n.id = node.id
-
-and then
-CALL apoc.load.json(""file:///test.json"") YIELD value AS row
-with row
-UNWIND row.graph.relationships AS rel
-MATCH (a) WHERE a.id = rel.endNode
-MATCH (b) WHERE b.id = rel.startNode
-CALL apoc.create.relationship(a, rel.type, rel.properties, b) YIELD rel AS r
-return *
-
-(I have to do it in two passes because otherwise there are duplicated relationships due to the two UNWINDs.)
-But this is super slow, because I have a lot of entities and I suspect the program searches over all of them for each relationship.
-At the same time, I know ""startNode"": 3510983 refers to a node.
-So the question: is there any way to speed up the import process by using the ids as an index, or something else?
-
-Note that my nodes have different types, so I did not find a way to create an index for all of them, and I suppose that would be too huge (memory-wise).
-
-[MY SOLUTION - not efficient answer 1]
-CALL apoc.load.json('file:///test.json') YIELD value
-WITH value.graph.nodes AS nodes, value.graph.relationships AS rels
-UNWIND nodes AS n
-CALL apoc.create.node(n.labels, apoc.map.setKey(n.properties, 'id', n.id)) YIELD node
-WITH rels, COLLECT({id: n.id, node: node, labels:labels(node)}) AS nMap
-UNWIND rels AS r
-MATCH (w{id:r.startNode})
-MATCH (y{id:r.endNode})
-CALL apoc.create.relationship(w, r.type, r.properties, y) YIELD rel
-RETURN rel
-
-[Final Solution in comment]
-","1. [EDITED]
-This approach may work more efficiently:
-CALL apoc.load.json(""file:///test.json"") YIELD value
-WITH value.graph.nodes AS nodes, value.graph.relationships AS rels
-UNWIND nodes AS n
-CALL apoc.create.node(n.labels, apoc.map.setKey(n.properties, 'id', n.id)) YIELD node
-WITH rels, apoc.map.mergeList(COLLECT({id: n.id, node: node})) AS nMap
-UNWIND rels AS r
-CALL apoc.create.relationship(nMap[r.startNode], r.type, r.properties, nMap[r.endNode]) YIELD rel
-RETURN rel
-
-This query does not use MATCH at all (and does not need indexing), since it just relies on an in-memory mapping from the imported node ids to the created nodes. However, this query could run out of memory if there are a lot of imported nodes.
-It also avoids invoking SET by using apoc.map.setKey to add the id property to n.properties.
-The 2 UNWINDs do not cause a cartesian product, since this query uses the aggregating function COLLECT (before the second UNWIND) to condense all the preceding rows into one (because the grouping key, rels, is a singleton). 
-
-2. Have you tried indexing the nodes before the LOAD JSON? This may not be tenable since you have multiple node labels. But if they are limited, you can create a placeholder node, create an index, and then delete the placeholder. After this, run the LOAD JSON:
-    Create (n:YourLabel{indx:'xxx'})
-    create index on: YourLabel(indx)
-    match (n:YourLabel) delete n
-
-The index will speed up the matching or merging.
-
-3. The final answer, which also seems efficient, is the following:
-It is inspired by the solution and discussion in this answer https://stackoverflow.com/a/61464839/5257140
-Major updates are:
-
-I use apoc.map.fromPairs(COLLECT([n.id, node])) AS nMap to create an in-memory dictionary. Note that in this answer we use brackets and not curly brackets.
-We also added a WHERE clause to ensure the script works properly even with some problems in the original data.
-The last RETURN could be modified (for performance reasons in Neo4j Desktop or other clients that might display the super long results).
-
-CALL apoc.load.json('file:///test-graph.json') YIELD value
-WITH value.nodes AS nodes, value.relationships AS rels
-UNWIND nodes AS n
-CALL apoc.create.node(n.labels, apoc.map.setKey(n.properties, 'id', n.id)) YIELD node
-WITH rels, apoc.map.fromPairs(COLLECT([n.id, node])) AS nMap
-UNWIND rels AS r
-WITH r, nMap[TOSTRING(r.startNode)] AS startNode, nMap[TOSTRING(r.endNode)] AS endNode
-WHERE startNode IS NOT NULL and endNode IS NOT NULL 
-CALL apoc.create.relationship(startNode, r.type, r.properties, endNode) YIELD rel
-RETURN rel
-
-",Neo4j
-"The CypherQuery I used is {“statements” : [{ “statement” : “MATCH (n:ns0__Meas) RETURN n LIMIT 25” }]}.
-I've been reading the return String via CURLOPT_WRITEDATA, but the format of the returned json is strange.
-{""results"":[{""columns"":[""n""],""data"":[{""row"":[{""ns0__Name"":""Meas30008"",""ns0__Value"":""15500"",""uri"":""http://test.ac.kr/m#_30008""}],""meta"":[{""id"":1097,""elementId"":""4:45f39f8e-89c4-4f8d-9a3a-be92f6961fbf:1097"",""type"":""node"",""deleted"":false}]}]}],""errors"":[],""lastBookmarks"":[""FB:kcwQRfOfjonET42aOr6S9pYfv8kAk5A=""]}
-
-The reason I call it a different format of JSON is that it is very different from the JSON format I usually see. Is this an issue with how I am using curl to retrieve the JSON?
-","1. The response you are seeing conforms to the documented JSON result format for the Neo4j HTTP API. The documentation also shows 2 other supported result formats.
-",Neo4j
-"I have a problem with types in Neo4j. I am trying with the following query to create a node with a variable number with the integer value 1:
-Create (n:Test{numer:1})
-
-When I get the node from the Java API, I am getting an error telling me that the value is of type long.
-How can I see what type a property is stored as in Neo4j? How can I save an integer?
-","1. If you use Cypher or REST API then Neo4j (internally) use Java's Long for integer values and Java's Double for floating point values.
-In Java API you can use following datatypes
-
-boolean
-byte
-short
-int
-long
-float
-double
-char
-String
-
-https://neo4j.com/docs/cypher-manual/current/values-and-types/property-structural-constructed/
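-
-As a small illustration of that behaviour, here is a minimal sketch of reading such a property defensively, assuming the embedded Java API (the property name numer comes from the query above; the helper class itself is hypothetical):
-import org.neo4j.graphdb.Node;
-
-public final class PropertyTypes {
-
-    // Values written via Cypher come back as java.lang.Long, so go through Number
-    // instead of casting straight to Integer.
-    public static int readNumer(Node node) {
-        Object raw = node.getProperty(""numer"");
-        return ((Number) raw).intValue();
-    }
-}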
-",Neo4j
-"I am currently working with knowledge graph project where I am able to create knowledge graph from unstructured data but now I am struggling with visualization of that knowledge graph.
-Is there any way to display knowledge graph just like Neo4j workspace?
-For reference I have attached screenshot below :
-
-Is there any way I can achieve this?
-If it's not feasible in Python but can be done in another language, that's also acceptable, as long as the UI looks like this.
-","1. I guess you want to be able to create a visualisation like that programatically in your own application?
-The library used in Neo4j tools Bloom and Query (which is like Browser) has been externally released as NVL (Neo4j Visualisation Library). It can be used to do the layout and presentation of a graph.
-A pure javascript version of the library can be found here:
-https://www.npmjs.com/package/@neo4j-nvl/base
-
-and here is a react version that is easier to use in case you do use react:
-https://www.npmjs.com/package/@neo4j-nvl/react
-Otherwise you can find a list of some Neo4j visualisation possibilities, including some libraries, here:
-https://neo4j.com/developer-blog/15-tools-for-visualizing-your-neo4j-graph-database/
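-
-If you do want to stay in Python, one commonly used option (this is an assumption about your stack, not one of the Neo4j libraries above) is pyvis, which renders an interactive HTML view of the graph. A minimal sketch:
-from pyvis.network import Network
-
-# Hypothetical triples extracted from unstructured text
-triples = [('Alice', 'WORKS_AT', 'Acme'), ('Acme', 'LOCATED_IN', 'Berlin')]
-
-net = Network(height='600px', width='100%', directed=True)
-for head, rel, tail in triples:
-    net.add_node(head, label=head)
-    net.add_node(tail, label=tail)
-    net.add_edge(head, tail, label=rel)
-
-# Writes an interactive HTML page you can open in a browser
-net.save_graph('knowledge_graph.html')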
-",Neo4j
-"I want to connect nuonb with yii framework. After i configed from this guide (configuring database connection in Yii framework)
-I cannot connect to database.
-So I look at another page that config nuodb with php Framework.
-http://www.nuodb.com/techblog/2013/06/20/using-nuodb-from-the-php-zend-framework/
-I configured it and tested it on the command line; I think it works, because after I use this command
-php -i | grep PDO
-
-The Result is
-----------------------------------------
-PDO support => enabled
-PDO drivers => mysql
-PDO Driver for MySQL => enabled
-
-But when I test PDO in PHP with the phpinfo() function, it can't find the NuoDB PDO driver (PDO has no value).
-Please help me fix this problem.
-Remark: my server is an AWS EC2 Ubuntu instance.
-","1. 
-I wouldn't trust this ""nuodb"" at all. Looks like a bubble. 
-You didn't install nuodb anyway, but mysql. 
-So - just go on with mysql! 
-
-
-2. @Sarun Prasomsri
-What version of NuoDB are you running? 
-What version of PHP? (supported versions are PHP 5.4 TS VC9 and PHP 5.3 TS VC9)
-@ Your Common Sense
-NuoDB does install - it can run on a localmachine, datacenter, or AWS/Azure/Google Compute, like any other Database. 
-There are concrete details in the online doc: http://doc.nuodb.com/display/doc/NuoDB+Online+Documentation 
-techblog: http://dev.nuodb.com/techblog
-",NuoDB
-"I'm trying to manually execute SQL commands so I can access procedures in NuoDB.
-I'm using Ruby on Rails and I'm using the following command:
-ActiveRecord::Base.connection.execute(""SQL query"")
-
-The ""SQL query"" could be any SQL command.
-For example, I have a table called ""Feedback"" and when I execute the command:
-ActiveRecord::Base.connection.execute(""SELECT `feedbacks`.* FROM `feedbacks`"")
-
-This would only return a ""true"" response instead of sending me all the data requested.
-This is the output on the Rails Console is:
-SQL (0.4ms)  SELECT `feedbacks`.* FROM `feedbacks`
- => true
-
-I would like to use this to call stored procedures in NuoDB but upon calling the procedures, this would also return a ""true"" response.
-Is there any way I can execute SQL commands and get the data requested instead of getting a ""true"" response?
-","1. The working command I'm using to execute custom SQL statements is:
-results = ActiveRecord::Base.connection.execute(""foo"")
-
-with ""foo"" being the sql statement( i.e. ""SELECT * FROM table"").
-This command will return a set of values as a hash and put them into the results variable.
-So on my rails application_controller.rb I added this:
-def execute_statement(sql)
-  results = ActiveRecord::Base.connection.execute(sql)
-
-  if results.present?
-    return results
-  else
-    return nil
-  end
-end
-
-Using execute_statement will return the records found, and if there are none, it will return nil.
-This way I can just call it anywhere in the Rails application, for example:
-records = execute_statement(""select * from table"")
-
-""execute_statement"" can also call NuoDB procedures, functions, and also Database Views.
-
-2. For me, I couldn't get this to return a hash.
-results = ActiveRecord::Base.connection.execute(sql)
-
-But using the exec_query method worked.
-results = ActiveRecord::Base.connection.exec_query(sql)
-
-
-3. Reposting the answer from our forum to help others with a similar issue:
-@connection = ActiveRecord::Base.connection
-result = @connection.exec_query('select tablename from system.tables')
-result.each do |row|
-puts row
-end
-
-",NuoDB
-"I created the 3 necessary containers for NuoDB using the NuoDB instructions.
-My Docker environment runs on a virtual Ubuntu Linux environment (VMware).
-Afterwards I tried to access the database using a console application (C# .NET Framework 4.8) and ADO.NET. For this I used the NuGet package ""NuoDb.Data.Client"" from nuget.org.
-Unfortunately the connection does not work.
-If I choose port 8888, my thread hangs indefinitely when I open the connection.
-For this reason I tried to open port 48004 to reach the admin container.
-On this way I get an error message.
-
-""System.IO.IOException: A connection attempt failed because the remote peer did not respond properly after a certain period of time, or the established connection was faulty because the connected host did not respond 172.18.0.4:48006, 172.18.0.4""
-
-Interestingly, if I specify a wrong database name, it throws an error:
-No suitable transaction engine found for database.
-This tells me that it connects to the admin container.
-Does anyone have any idea what I am doing wrong?
-The connection works when I establish a connection with the tool ""dbvisualizer"".
-This tool accesses the transaction engine directly. For this reason I have opened the port 48006 in the corresponding container.
-But even with these settings it does not work with my console application.
-Thanks in advance.
-","1. Port 8888 is the REST port that you would use from the administration tool such as nuocmd: it allows you to start/stop engines and perform other administrative commands.  You would not use this port for SQL clients (as you discovered).  The correct port to use for SQL clients is 48004.
-Port 48004 allows a SQL client to connect to a ""load balancer"" facility that will redirect it to one of the running TEs.  It's not the case that the SQL traffic is routed through this load balancer: instead, the load balancer replies to the client with the address/port of one of the TEs then the client will disconnect from the load balancer and re-connect directly to the TE at that address/port.  For this reason, all the ports that TEs are listening on must also be open to the client, not just 48004.
-You did suggest you opened these ports but it's not clear from your post whether you followed all the instructions on the doc page you listed.  In particular, were you able to connect to the database using the nuosql command line tool as described here?  I strongly recommend that you ensure that simple access like this works correctly, before you attempt to try more sophisticated client access such as using Ado.Net.
-",NuoDB
-"Requirement:
-
-Please consider a spring batch application.
-Input is a file containing a column of values.
-The Spring Batch job follows a chunk-oriented design.
-Each chunk takes 1000 records at a time.
-Therefore, the reader reads 1000 records from the file in microseconds.
-Processor takes one record at a time and triggers the SQL query:
-
-select * from TABLE where COLUMN2 = ""record""
-There may be only one record or multiple records retrieved and those records go through some business logic.
-
-In the writer, we accumulate all the records passed by the business logic (the number of records will be less than 1000) and insert them into the database.
-
-Problem here:
-Consider the table has almost 400K records stored.
-While reading 1000 records from the file, it takes few microseconds.
-While processing the 1000 records (that is, hitting the above SQL query 1000 times against the database), it takes 4 minutes.
-While writing into the database (insertion of for example 100 selected records), it takes few microseconds.
-While analyzing, I found that there is only the Primary Key column indexed in the table.
-The column that we are using (column2) is not included as an indexed column.
-Please advise, whether adding a column as an index is a better solution to this.
-","1. 
-select * from TABLE where COLUMN2 = ""record""
-
-
-Please advise, whether adding a column as an index is a better solution to this.
-
-Yes, adding an index to the column(s) used in your where clause should improve performance, in your case, it is COLUMN2.
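-For example (TABLE and COLUMN2 are the placeholders from your query; the index name is arbitrary):
-CREATE INDEX idx_column2 ON TABLE (COLUMN2);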
-",NuoDB
-"Requirement:
-Read from the file containing 100K records.
-For each records, retrieve data from IBM DB2 database table and then retrieve data from NuoDB database table.
-At last, insert the updated records in the NuoDB database table.
-Design approached:
-Chunk-oriented processing where 1000 records will be read from the file and processed and written into the database.
-Issue:
-After approx 75K records  and running for almost 5 hours, the batch application failed with the below error:
-Hibernate: select ... the SELECT query for DB2
-Hibernate: select ... the SELECT query for NuoDB
-2020-06-08 22:00:00.187  INFO [ ,,,] 32215 --- [       Thread-9] ConfigServletWebServerApplicationContext : Closing org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@2a7f1f10: startup date [Mon Jun 08 17:22:51 BST 2020]; parent: org.springframework.context.annotation.AnnotationConfigApplicationContext@3972a855
-2020-06-08 22:00:00.192  INFO [ ,,,] 32215 --- [       Thread-9] o.s.c.support.DefaultLifecycleProcessor  : Stopping beans in phase 0
-2020-06-08 22:00:00.193  INFO [ ,,,] 32215 --- [       Thread-9] o.s.i.endpoint.EventDrivenConsumer       : Removing {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel
-2020-06-08 22:00:00.193  INFO [ ,,,] 32215 --- [       Thread-9] o.s.i.channel.PublishSubscribeChannel    : Channel ' -1.errorChannel' has 0 subscriber(s).
-2020-06-08 22:00:00.193  INFO [ ,,,] 32215 --- [       Thread-9] o.s.i.endpoint.EventDrivenConsumer       : stopped _org.springframework.integration.errorLogger
-2020-06-08 22:00:00.195  INFO [ ,,,] 32215 --- [       Thread-9] o.s.s.c.ThreadPoolTaskScheduler          : Shutting down ExecutorService 'taskScheduler'
-2020-06-08 22:00:00.196  INFO [ ,,,] 32215 --- [       Thread-9] o.s.jmx.export.MBeanExporter             : Unregistering JMX-exposed beans on shutdown
-2020-06-08 22:00:00.203  INFO [ ,,,] 32215 --- [       Thread-9] j.LocalContainerEntityManagerFactoryBean : Closing JPA EntityManagerFactory for persistence unit 'default'
-2020-06-08 22:00:00.203  INFO [ ,,,] 32215 --- [       Thread-9] j.LocalContainerEntityManagerFactoryBean : Closing JPA EntityManagerFactory for persistence unit 'default'
-2020-06-08 22:00:00.203  INFO [ ,,,] 32215 --- [       Thread-9] j.LocalContainerEntityManagerFactoryBean : Closing JPA EntityManagerFactory for persistence unit 'default'
-2020-06-08 22:00:00.203  INFO [ ,,,] 32215 --- [       Thread-9] j.LocalContainerEntityManagerFactoryBean : Closing JPA EntityManagerFactory for persistence unit 'default'
-2020-06-08 22:00:00.203  INFO [ ,,,] 32215 --- [       Thread-9] com.zaxxer.hikari.HikariDataSource       : HikariPool-3 - Shutdown initiated...
-2020-06-08 22:00:00.210  INFO [ ,,,] 32215 --- [       Thread-9] com.zaxxer.hikari.HikariDataSource       : HikariPool-3 - Shutdown completed.
-2020-06-08 22:00:00.210  INFO [ ,,,] 32215 --- [       Thread-9] com.zaxxer.hikari.HikariDataSource       : HikariPool-2 - Shutdown initiated...
-2020-06-08 22:00:00.211  INFO [ ,,,] 32215 --- [       Thread-9] com.zaxxer.hikari.HikariDataSource       : HikariPool-2 - Shutdown completed.
-2020-06-08 22:00:00.212  INFO [ ,,,] 32215 --- [       Thread-9] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Shutdown initiated...
-2020-06-08 22:00:00.214  INFO [ ,,,] 32215 --- [       Thread-9] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Shutdown completed.
-
-What can be the actual cause of this issue?
-Is it that the database cannot handle the SELECT query being triggered 100K times continuously over 4-6 hours?
-I re-run the application with log-level in DEBUG mode and here is the error I got:
-com.ibm.db2.jcc.am.DisconnectNonTransientConnectionException: [jcc][t4][2030][11211][4.19.72] A communication error occurred during operations on the connection's underlying socket, socket input stream, 
-or socket output stream.  Error location: Command timeout check.  Message: Command timed out. ERRORCODE=-4499, SQLSTATE=08001
-
-","1. It seems like your database connection has been lost. You need to restart your job instance. If correctly configured, your job should restart from where it left off after the failure.
-",NuoDB
-"I am trying to make a simple SQL query on my OrientDB database using PyOrient.
-First, I encountered a problem where my currently used protocol (38) wasn't supported yet by PyOrient. I solved it with this solution.
-Now when I try to make a simple query like data = client.query(""SELECT FROM cars"") it raises these errors: screenshot of errors
-I tried the same query in OrientDB Studio successfully.
-What should I try or change?
-","1. Maybe start by -> data = client.query(""SELECT * FROM cars"").
-The ""*"" is missing.
-",OrientDB
-"Is it normal for OrientDB 3.2.27 to have lots of OStorageRemotePushThread.subscribe threads locked for long period of time? If not, what could be the reason? We are using Kotlin coroutines and I believe thread local is being saved and restored between coroutine suspends.
-|  +---Thread-402 Frozen for at least 10s <Ignore a false positive>                                                                                              |
-|  | |                                                                                                                                                           |
-|  | +---jdk.internal.misc.Unsafe.park(boolean, long) (native)                                                                                                   |
-|  | |                                                                                                                                                           |
-|  | +---java.util.concurrent.locks.LockSupport.parkNanos(Object, long)                                                                                          |
-|  | |                                                                                                                                                           |
-|  | +---java.util.concurrent.SynchronousQueue$TransferStack.transfer(Object, boolean, long)                                                                     |
-|  | |                                                                                                                                                           |
-|  | +---java.util.concurrent.SynchronousQueue.poll(long, TimeUnit)                                                                                              |
-|  | |                                                                                                                                                           |
-|  | +---com.orientechnologies.orient.client.remote.OStorageRemotePushThread.subscribe(OBinaryRequest, OStorageRemoteSession) OStorageRemotePushThread.java:124  |
-|  | |                                                                                                                                                           |
-|  | +---com.orientechnologies.orient.client.remote.OStorageRemote.subscribeStorageConfiguration(OStorageRemoteSession) OStorageRemote.java:1839                 |
-|  | |                                                                                                                                                           |
-|  | +---com.orientechnologies.orient.client.remote.OStorageRemote.onPushReconnect(String) OStorageRemote.java:2331                                              |
-|  | |                                                                                                                                                           |
-|  | +---com.orientechnologies.orient.client.remote.OStorageRemotePushThread.run() OStorageRemotePushThread.java:99 
-
-","1. Is normal to have an instance of OStorageRemotePushThread for each database you open in the client in one OrientDB context.
-So the number should match how many databases you have, expecting that you have only one OrientDB context in the application.
-Regars
-",OrientDB
-"I am facing issue with orientdb and getting below exception.
-Reached maximum number of concurrent connections (max=1000, current=1000), reject incoming connection from /127.0.0.1:54782 [OServerNetworkListener]
-
-For more analysis I wrote below code for connection create and close.
-public class ConnectionsOpenAndClose {
-
-    public static void main(String[] args) {
-
-        String databaseUrl = <url>;
-        String username = <username>;
-        String password = <password>;
-        OPartitionedDatabasePool pool = openConnections(databaseUrl, username, password);
-        ODatabaseDocument oDatabaseDocument = pool.acquire();
-        closeConnections(pool, oDatabaseDocument);
-
-    }
-
-    private static void closeConnections(OPartitionedDatabasePool pool, ODatabaseDocument oDatabaseDocument) {
-        if (Objects.nonNull(pool)) {
-            if (Objects.nonNull(oDatabaseDocument)) {
-                oDatabaseDocument.activateOnCurrentThread();
-                oDatabaseDocument.close();
-            }
-            pool.close();
-        }
-    }
-
-    private static OPartitionedDatabasePool openConnections(String databaseUrl, String username, String password) {
-        OPartitionedDatabasePool pool = new OPartitionedDatabasePool(databaseUrl, username, password);
-        ODatabaseDocument odbDocument = pool.acquire();
-        odbDocument.close();
-        return pool;
-
-    }
-
-}
-
-After executing code, I found that on pool.close() or oDatabaseDocument.close(); no binary listeners are getting closed. This, I verified from orientdb studio dashboard. These connections are getting released only after above code terminates from JVM.
-Is there any solution for this, how to close these connections? Because after some time orientdb starts rejecting incoming connections, and then eventually orientdb hangs and need to be restarted.
-This case occurs on a RedHat machine where the above code is executed, with the latest OrientDB running on any OS.
-Orientdb Version 3.2.23
-","1. please do not use OParitionedPool. use the pool provided by the OrientDB class using the method com.orientechnologies.orient.core.db.OrientDB#cachedPool(java.lang.String, java.lang.String, java.lang.String, com.orientechnologies.orient.core.db.OrientDBConfig)
-The class that you use is no longer supported.
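-A minimal sketch of that approach (URL, database name and credentials are placeholders; check the javadoc of your OrientDB 3.x version for the exact types):
-import com.orientechnologies.orient.core.db.ODatabasePool;
-import com.orientechnologies.orient.core.db.ODatabaseSession;
-import com.orientechnologies.orient.core.db.OrientDB;
-import com.orientechnologies.orient.core.db.OrientDBConfig;
-
-OrientDB orientDb = new OrientDB(""remote:localhost"", OrientDBConfig.defaultConfig());
-// the cached pool is created once per database/user and reused on subsequent calls
-ODatabasePool pool = orientDb.cachedPool(""mydb"", ""admin"", ""password"", OrientDBConfig.defaultConfig());
-try (ODatabaseSession session = pool.acquire()) {
-    // use the session; close() returns it to the pool instead of dropping the connection
-}
-orientDb.close();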
-",OrientDB
-"I am trying to get all the OrientDB Server Side OGlobalConfiguration on orientdb client using java code. Is it possible using java API, as I am not able to find the correct API?
-For e.g. I have configured property <entry name=""network.token.expireTimeout"" value=""120""/> in orientdb-server-config.xml, but when I do oDatabaseSession.getConfiguration().getValue(OGlobalConfiguration.NETWORK_TOKEN_EXPIRE_TIMEOUT) I still get default value as 60.
-Can we get server side configuration on client side? or both are different?
-","1. Sure, you can use method com.orientechnologies.orient.client.remote.OrientDBRemote#getServerInfo, which you, in turn, can get when you open a connection to the server using com.orientechnologies.orient.core.db.OrientDB .
-",OrientDB
-"Is there any method in Go to clear all bits set in a pilosa field at once? I checked this link go-pilosa, but it has a method for clearing each row in a field. I need to clear all rows in a particular field.
-Can anyone suggest a workaround for this?
-","1. you can always remove the field and recreate it
-",Pilosa
-"I am preparing a helm chart for pilosa. After installing the chart (or while creating the deployment),
-the pilosa pod enters to a CrashLoopBackOff.
-This is the rendered YAML file for the k8s deployment.
-# Source: pilosa/templates/deployment.yaml
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: RELEASE-NAME-pilosa
-  labels:
-    helm.sh/chart: pilosa-0.1.0
-    app.kubernetes.io/name: pilosa
-    app.kubernetes.io/instance: RELEASE-NAME
-    app.kubernetes.io/version: ""1.16.0""
-    app.kubernetes.io/managed-by: Helm
-spec:
-  replicas: 1
-  selector:
-    matchLabels:
-      app.kubernetes.io/name: pilosa
-      app.kubernetes.io/instance: RELEASE-NAME
-  template:
-    metadata:
-      labels:
-        app.kubernetes.io/name: pilosa
-        app.kubernetes.io/instance: RELEASE-NAME
-    spec:
-      imagePullSecrets:
-        - name: my-cr-secret
-      serviceAccountName: default
-      securityContext:
-        {}
-      initContainers:
-        - command:
-          - /bin/sh
-          - -c
-          - |
-            sysctl -w net.ipv4.tcp_keepalive_time=600
-            sysctl -w net.ipv4.tcp_keepalive_intvl=60
-            sysctl -w net.ipv4.tcp_keepalive_probes=3
-          image: busybox
-          name: init-sysctl
-          securityContext:
-            privileged: true
-      containers:
-        - name: pilosa
-          securityContext:
-            {}
-          image: ""mycr.azurecr.io/pilosa:v1.4.0""
-          imagePullPolicy: IfNotPresent
-          command:
-            - server
-            - --data-dir
-            - /data
-            - --max-writes-per-request
-            - ""20000""
-            - --bind
-            - http://pilosa:10101
-            - --cluster.coordinator=true
-            - --gossip.seeds=pilosa:14000
-            - --handler.allowed-origins=""*""
-          ports:
-            - name: http
-              containerPort: 10101
-              protocol: TCP
-          livenessProbe:
-            httpGet:
-              path: /
-              port: http
-          readinessProbe:
-            httpGet:
-              path: /
-              port: http
-          volumeMounts:
-            - name: ""pilosa-pv-storage""
-              mountPath: /data
-          resources:
-            {}
-      volumes:
-      - name: pilosa-pv-storage
-        persistentVolumeClaim:
-          claimName: pilosa-pv-claim
-
-When I checked the reason for that, I found:
-$ kubectl describe pod pilosa-57cb7b8764-knsmw
-.
-
-.
-
-Events:
-  Type     Reason     Age                From               Message
-  ----     ------     ----               ----               -------
-  Normal   Scheduled  48s                default-scheduler  Successfully assigned default/pilosa-57cb7b8764-knsmw to 10.0.10.3
-  Normal   Pulling    47s                kubelet            Pulling image ""busybox""
-  Normal   Pulled     45s                kubelet            Successfully pulled image ""busybox""
-  Normal   Created    45s                kubelet            Created container init-sysctl
-  Normal   Started    45s                kubelet            Started container init-sysctl
-  Normal   Pulling    45s                kubelet            Pulling image ""mycr.azurecr.io/pilosa:v1.2.0""
-  Normal   Pulled     15s                kubelet            Successfully pulled image ""mycr.azurecr.io/pilosa:v1.2.0""
-  Normal   Created    14s (x2 over 15s)  kubelet            Created container pilosa
-  Warning  Failed     14s (x2 over 15s)  kubelet            Error: failed to start container ""pilosa"": Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused ""exec: \""server\"": executable file not found in $PATH"": unknown
-  Normal   Pulled     14s                kubelet            Container image ""mycr.azurecr.io/pilosa:v1.2.0"" already present on machine
-  Warning  BackOff    10s                kubelet            Back-off restarting failed container
-
-That means the problem is that it cannot run the command server:
- Error: failed to start container ""pilosa"": Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused ""exec: \""server\"": executable file not found in $PATH"": unknown
-
-But that command is available in pilosa as specified here : https://www.pilosa.com/docs/latest/installation/
-Can anyone help me to find a solution for this?
-","1. The issue here is that Kubernetes is overriding the ENTRYPOINT in the Pilosa Docker image. The server command is actually a subcommand of pilosa, which works because of how the Pilosa Dockerfile defines the command:
-ENTRYPOINT [""/pilosa""]
-CMD [""server"", ""--data-dir"", ""/data"", ""--bind"", ""http://0.0.0.0:10101""]
-
-Because you are using the command: declaration, it overrides both the ENTRYPOINT and the CMD when invoking the container.
-I think the simple solution is to replace command: with args:, and I believe k8s will no longer override the ENTRYPOINT. Or you could instead add /pilosa to the front of the command.
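-For example, the container spec from your manifest could keep the image ENTRYPOINT and only supply arguments:
-          args:
-            - server
-            - --data-dir
-            - /data
-            - --max-writes-per-request
-            - ""20000""
-            - --bind
-            - http://pilosa:10101
-            - --cluster.coordinator=true
-            - --gossip.seeds=pilosa:14000
-            - --handler.allowed-origins=""*""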
-You may also take a look at this Pilosa helm chart, which is unmaintained but might work for you. Note that it uses a StatefulSet instead of a Deployment, which should fit Pilosa better: https://github.com/pilosa/helm
-",Pilosa
-"curl -XGET localhost:10101/index will returns the schema of the specified index in JSON. How can i get only the names of indices present in pilosa without returning the complete schema?
-","1. One option is to use a command line tool to parse and filter the JSON response. For example, using jq:
-curl localhost:10101/schema | jq .indexes[].name
-
-will return a quoted list of names, one on each line: 
-""index1""
-""index2""
-
-You can also pass -r to jq if you don't want the quotes.
-",Pilosa
-"I am doing queries like:
-response = client.query(
-    index.intersect(
-        category.row(1),
-        location.row(1),
-    )
-)
-result = response.result
-columns = result.row.columns
-
-And as I have a lot of columns, I can sometimes get millions of results, i.e.
-len(result.row.columns) > 1000000
-
-I can't find a way to apply an offset+limit to the results or count them on the pilosa side and it seems quite inefficient to transfer the whole bulk of results into python and process it there.
-","1. Pilosa has a Count query, used in the python client like this:
-response = client.query(
-    index.count(
-        index.intersect(
-            field1.row(0), 
-            field2.row(0),
-        )
-    )
-)
-result = response.result
-column_count = result.count
-
-This corresponds to a PQL query like Count(Intersect(Row(field1=0), Row(field2=0))).
-There is not yet a general way to handle offset+limit for row results. One option that may work for you is to handle the results per-shard, by passing a second argument like shards=[0, 1] to the query function. Limiting the results to a single shard will produce a result set of no more than ShardWidth values (default 2^20 = 1,048,576).
-This corresponds to an HTTP request like curl localhost:10101/index/index1/query?shards=0,1 -d ""Intersect(Row(field1=0), Row(field2=0))""
-The relevant section of the python client docs would benefit from solid examples and further explanation.
-",Pilosa
-"There is a JSONB information field with this structure:
-{
-  ""ignore""=>false
-}
-
-I want to get all records whose ignore field is true:
-@user.posts.where(""information ->> 'ignore' = TRUE"")
-
-This line throws an error:
-PG::UndefinedFunction: ERROR:  operator does not exist: text = boolean
-
-And I could not find anything in Google. Everywhere we are talking about textual meanings. But there is nothing about booleans.
-","1. You must cast the result of information->>'russia_is_a_terrorist_state' to boolean:
-@user.posts.where(""(information ->> 'russia_is_a_terrorist_state')::boolean = TRUE"")
-
-
-2. I had the same issue in upgrading to Rails 5/6. It seems the way the pg gem casts has changed a little, but this works for me: 
-@user.posts.where(""(information ->> 'ignore')::boolean = ?"", true)
-
-When an argument is added as the second parameter to a where method call, ActiveRecord will do the work to cast this appropriately for you. If you add .explain to the query above, you should see something like:
-EXPLAIN for: SELECT ... AND ((information ->> 'ignore')::boolean = TRUE) ...
-
-",PostgreSQL
-"I have a single table where some data it inserted and processed in bulks. Processing (UPDATE, SELECT, DELETE) starts immediately on that table. Generally, the database is able to pick the optimal execution plan for processing, but it just happens that sometimes the database picks a sub-optimal plan.
-There is a large mismatch in estimated and actual rows in execution plans (from auto-explain extension): (cost=0.12..2.34 rows=1 width=14) (actual time=0.052..53.734 rows=79816 loops=1).
-It gets corrected a few moments later, and DB starts picking optimal plans, but by then my service is already below SLO.
-What are the strategies to deal with that?
-","1. To avoid that, you should run an explicit ANALYZE of the table inside the transaction that performs the bulk operation.  That way, nobody ever sees stale statistics.
-It is also a good idea to run an explicit VACUUM on the table just after the transaction is finished.
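-A sketch of that pattern (my_table is a placeholder):
-BEGIN;
--- ... bulk INSERT / UPDATE / DELETE on my_table ...
-ANALYZE my_table;
-COMMIT;
--- VACUUM cannot run inside a transaction block, so run it right after:
-VACUUM my_table;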
-",PostgreSQL
-"What I'm trying is to use Postgres and access it from DBeaver.
-
-Postgres is installed into wsl2 (Ubuntu 20)
-DBeaver is installed into Windows 10
-
-According to this doc, if you access an app running on Linux from Windows, localhost can be used.
-However...
-
-Connection is refused with localhost. Also, I don't know what this message means: Connection refused: connect.
-Does anyone see potential cause for this? Any advice will be appreciated.
-Note:
-
-The password should be fine. When I use psql in wsl2 and type in the password, psql works with that password
-I don't have Postgres on Windows' side. It exists only on wsl2
-
-","1. I found a solution by myself.
-I just had to allow the TCP connection on wsl2(Ubuntu) and then restart postgres.
-sudo ufw allow 5432/tcp
-# You should see ""Rules updated"" and/or ""Rules updated (v6)""
-sudo service postgresql restart
-
-I didn't change IPv4/IPv6 connections info. Here's what I see in pg_hba.conf:
-# IPv4 local connections:
-host    all             all             127.0.0.1/32            md5
-# IPv6 local connections:
-host    all             all             ::1/128                 md5
-
-
-2. I solved the same problem.
-By the way, some explanation:
-WSL runs like a Docker container.
-Inside WSL, localhost is valid for the app running in WSL.
-Outside, in Windows, localhost is valid for an app running in Windows.
-First approach: install DBeaver under WSL and run it from there; localhost as 'host' will be found.
-Second approach: if you have installed DBeaver (or pgAdmin 4 in my case) on Windows, you must use the address of the WSL instance as seen from Windows.
-To find it:
-ip addr show eth0   -> inet 172.21.35.114  
-
-Add this address to your pg_hba.conf
-      host    all             all             172.21.35.114/32        md5
-      host    all             all             172.21.32.1/32          md5
-
-The second IP is the link between WSL and Windows. I got it when I started DBeaver from Windows; the error message told me this one was also missing.
-",PostgreSQL
-"In my dockerized container I am unable to replace a variable using 'sed'.
-These are my files:
-myfile.sql
-CREATE TABLE IF NOT EXISTS ${PG_TABLE}(...)
-
-myscript.sh
-#!/bin/sh
-echo ""variable1=${1}""
-echo ""variable2=${2}""
-sed  -i ""s/\${$1}/$2/g"" myfile.sql
-
-run command
-myscript.sh PG_TABLE ""mytablename""
-
-Actual:
-
-echo variable1=PG_TABLE
-echo variable2=mytablename
-REPLACEMENT:
-CREATE TABLE IF NOT EXISTS (...)
-
-Expected:
-
-echo variable1=PG_TABLE
-echo variable2=mytablename
-REPLACEMENT
-CREATE TABLE IF NOT EXISTS mytablename(...)
-
-This is supposed to replace the placeholder with my variable 'mytablename', but it just replaces it with an empty string.
-Maybe it's because of my container's Alpine version.
-This is my OS (the docker build operating system):
-cat /etc/os-release
-
-
-PRETTY_NAME=""Debian GNU/Linux 12 (bookworm)""
-NAME=""Debian GNU/Linux""
-VERSION_ID=""12""
-VERSION=""12 (bookworm)""
-VERSION_CODENAME=bookworm
-ID=debian
-
-","1. Issue is with sed placeholder.
-Use \${${1}} to match the placeholder in the myfile.sql
-Here's the working solution, where running the container shows correct replacement.
-myscript.sh
-#!/bin/sh
-echo ""variable1=${1}""
-echo ""variable2=${2}""
-echo ""Running sed command...""
-echo ""sed -i \""s/\\\${${1}}/${2}/g\"" /app/myfile.sql""
-sed -i ""s/\${${1}}/${2}/g"" /app/myfile.sql
-cat /app/myfile.sql
-
-myfile.sql
-CREATE TABLE IF NOT EXISTS ${PG_TABLE}(...);
-
-Dockerfile
-FROM debian:bookworm
-
-RUN apt-get update && apt-get install -y sed
-
-COPY myfile.sql /app/myfile.sql
-COPY myscript.sh /app/myscript.sh
-
-WORKDIR /app
-
-RUN chmod +x myscript.sh
-
-CMD [""./myscript.sh"", ""PG_TABLE"", ""mytablename""]
-
-
-docker build -t sed-replacement .
-docker run --rm sed-replacement
-
-OUTPUT
-variable1=PG_TABLE
-variable2=mytablename
-Running sed command...
-sed -i ""s/\${PG_TABLE}/mytablename/g"" /app/myfile.sql
-CREATE TABLE IF NOT EXISTS mytablename(...);
-
-
-2. OK, I made a mistake. The reason it didn't work is that I was expecting the arg to exist in my docker-compose file and it didn't. Therefore the replacement was an empty string """". Adding in that arg:
-docker-compose.yml
-...
-    build:
-      context: .
-      args:
-       - PG_DB=${PG_DB}
-       - PG_TABLE=${PG_TABLE} # Needed to pass variable to Dockerfile during 
-
-Dockerfile
-# ...
-FROM postgres
-ARG PG_DB # Need to receive arg
-ARG PG_TABLE
-
-ADD seed.sql /docker-entrypoint-initdb.d/seed.sql
-
-COPY script.sh .
-RUN chmod +x script.sh
-
-# Need to pass arg to script
-RUN ./script.sh PG_DB ""${PG_DB}"" PG_TABLE ""${PG_TABLE}"" 
-
-",PostgreSQL
-"I'm trying to use SQL in DBeaver to query a column containing some JSON values.
-The JSON object has the following structure:
-[
-   {""key"":""screen_name"", ""value"":{""string_val"":""Dashboard"",""int_val"":1} },
-   {""key"":""screen_type"", ""value"":{""string_val"":""Page1"", ""int_val"":2} },
-...
-]
-
-Let's say, I'd like to extract the screen name ""Dashboard""; how do I do that?
-I've tried all of the below:
-SELECT get_json_object(mycolumn, '$.key')
-
-SELECT get_json_object(mycolumn, '$.value')
-
-SELECT get_json_object(mycolumn, '$.screen_name')
-
-SELECT get_json_object(mycolumn, '$.key.screen_name')
-
-SELECT get_json_object(mycolumn, '$.key.screen_name.value')
-
-SELECT get_json_object(mycolumn, '$.key.screen_name.string_val')
-
-SELECT get_json_object(mycolumn, '$.key.screen_name.value.string_val')
-
-SELECT get_json_object(mycolumn, '$.screen_name.value')
-
-SELECT get_json_object(mycolumn, '$.screen_name.string_val')
-
-SELECT get_json_object(mycolumn, '$.screen_name.value.string_val')
-
-None of these worked (they output [NULL] in the SQL output).
-I've also tried extracting the value(s) according to this tutorial, but to no avail.
-Does anyone know how to do so? Thanks!
-","1. To achieve this, you can modify your queries as follows:
-SELECT JSON_VALUE(mycolumn, '$.key') AS keyJson from [tableName]
-
-Note that all your table records must be in JSON format; otherwise, you will encounter a query exception in SQL.
-",PostgreSQL
-"I had previously had asked a question, and it was answered (AWS Athena Parse array of JSON objects to rows), about parsing JSON arrays using Athena but running into a variation.
-Using the example:
-SELECT user_textarray
-FROM ""sample"".""workdetail"" 
-where workid = '5bb0a33f-3ca6-4f9c-9676-0b4d62dbb195'
-
-The results returned as:
-[{""userlist"":""{'id': 'd87b002d-6c75-4c5a-b546-fe04cc939da9', 'name': 'John Smith'}""}, 
- {""userlist"":""{'id': '41f20d65-c333-4fe5-bbe5-f9c63566cfc3', 'name': 'Larry Johnson'}""}, 
- {""userlist"":""{'id': '18106aa2-e461-4ac5-b399-b2e209c0c341', 'name': 'Kim Jackson'}""}
-]
-
-What I'm trying to return is the list of id and name as rows related to the workid in the original query.  I'm not sure why the JSON is formated this way and it comes from a 3rd party so can't make adjustments so needing to figure out how to parse the object within an object.
-workid, id, name
-5bb0a33f-3ca6-4f9c-9676-0b4d62dbb195,d87b002d-6c75-4c5a-b546-fe04cc939da9,'John Smith'
-5bb0a33f-3ca6-4f9c-9676-0b4d62dbb195,41f20d65-c333-4fe5-bbe5-f9c63566cfc3,'Larry Johnson'
-5bb0a33f-3ca6-4f9c-9676-0b4d62dbb195,18106aa2-e461-4ac5-b399-b2e209c0c341,'Kim Jackson'
-
-I have tried variations of this but not working so trying to determine if I need to modify my 'with' statement to get to the object within the object or if on the select I need to further parse the object to get the elements I need.
-with dataset as (workid, user_textarray
-FROM ""sample"".""workdetail""
-cross join unnest(user_textarray)
-where workid = '5bb0a33f-3ca6-4f9c-9676-0b4d62dbb195')
-select workid,
-       json_extract_scalar(json, '$.userlist.name') name
-from dataset
-, unnest(user_textarray) as t(json);
-
-","1. The problem is in your data, from the Presto/Trino point of view userlist contains a string, not a JSON object, moreover this string itself is not a valid JSON for it since it contains ' instead of '""' for props.
-To ""fix"" this you can take the following steps (the only workaround I know):
-
-Extract the userlist
-Replace ' with "" (some other JSON parsers will actually handle this ""correctly"" and will not require this step, but not in the case of Trino/Presto)
-Process new JSON as you need.
-
-Something to get you started:
--- sample data
-with dataset(workid, user_textarray) as (
-values ('5bb0a33f-3ca6-4f9c-9676-0b4d62dbb195', array['{""userlist"":""{''id'': ''d87b002d-6c75-4c5a-b546-fe04cc939da9'', ''name'': ''John Smith''}""}',
- '{""userlist"":""{''id'': ''41f20d65-c333-4fe5-bbe5-f9c63566cfc3'', ''name'': ''Larry Johnson''}""}',
- '{""userlist"":""{''id'': ''18106aa2-e461-4ac5-b399-b2e209c0c341'', ''name'': ''Kim Jackson''}""}'
-])
-)
-
--- query
-select workid,
-       json_extract_scalar(replace(json_extract_scalar(json, '$.userlist'),'''', '""'), '$.name') name
-from dataset,
-     unnest(user_textarray) as t(json);
-
-Output:
-
-
-
-workid                               | name
--------------------------------------+---------------
-5bb0a33f-3ca6-4f9c-9676-0b4d62dbb195 | John Smith
-5bb0a33f-3ca6-4f9c-9676-0b4d62dbb195 | Larry Johnson
-5bb0a33f-3ca6-4f9c-9676-0b4d62dbb195 | Kim Jackson
-
-
-
-Note that for your goal you can use CTE/subquery so you don't need to write the handling multiple times.
-",Presto
-"My goal is to get all the substrings after the last occurene of the char hypen from the right in each string separated by hypen. Ive tried, but i'm getting wrong values if there are multiple ""Hypen"" encountered from the string. Is there any workaround or approach. Thank you in Advance.
-sample input and result:
-my code:
-substring(con.itemid,INSTR(con.itemid,'-')+1) as itemid_suffix,
-
-Another attempt, but it returns the wrong result:
-select SUBSTR(ltrim(regexp_extract('AAAA-BBBB-PUB','-([^:]*)',1)),1,2)
-","1. Based on the provided data you can use simple pattern of any non-hyphen characters and end of the string - [^-]*$:
--- sample data
-with dataset(str) as(
-    VALUES ('aaa-bbb'),
-        ('aaa-bbb-ccc'),
-        ('111-112') 
-) 
-
--- query
-SELECT regexp_extract(str, '[^-]*$')
-FROM dataset;
-
-Output:
- _col0
--------
- bbb
- ccc
- 112
-
-",Presto
-"I have a source table which have 300 columns and the number of columns might grow so I'm trying to construct a query where I want to insert into target table only a few columns and the other data must be merged in json object in a specific column.
-For example source table looks like this:
-column_a | column_b | column_c | column_d | several_other_columns....
----------------------------------------------------------------------
-value_a  | value_b    | value_c    | value_d    | several_other_values......
-
-And the target table should look like this:
-column_a  | all_other_columns_combined_in_a_json
--------------------------------------------------
-value_a   | {column_b: value_b, column_c: value_c, column_d: value_d, ........}
-
-I can pick columns from information_schema like this:
-select column_name
-from information_schema.columns
-where table_schema = 'source_db'
-and table_name = 'source_table'
-and column_name not in ('column_a')
-
-but I do not understand how to pass those values into json_object() function
-How can I achieve this if possible?
-","1. AFAIK Presto/Trino does not support dynamic SQL generation and execution, so currently the only option is to use ""outside"" scripting. When facing similar task some time ago I've ended up with AWS Lambda which fetched list of columns from information_schema.columns, generated needed SQL and executed corresponding query.
-Possibly something could be done via AWS user defined function but have not tried those.
-Just to verify - has started discussion @github.
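-For reference, a rough sketch of that ""generate the SQL outside"" idea, here as plain string building in Python (schema, table and column names are placeholders, and you would run the produced query with whatever Presto/Trino client you already use; map() and cast(... as json) are standard Presto functions):
-def build_pivot_sql(schema, table, key_column, other_columns):
-    # other_columns: every column except key_column, as returned by information_schema.columns
-    keys = ', '.join(f""'{c}'"" for c in other_columns)
-    vals = ', '.join(f'cast({c} as varchar)' for c in other_columns)
-    return (f'select {key_column}, '
-            f'cast(map(array[{keys}], array[{vals}]) as json) as all_other_columns '
-            f'from {schema}.{table}')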
-UPD
-It was confirmed by devs:
-
-currently there is no such functionality and I am not aware of any plans to build one
-
-",Presto
-"I have a qdrant that is deployed in a docker container in the azure app service. Data from qdrant collections is saved in the docker container itself - this is the main problem, because when the application is restarted, all the data that was in the container disappears. I need to save data in a separate storage, preferably a blob/file share. I'm not considering the possibility of deploying qdrant in Kubernetes and container instance
-I tried to specify the path to save the data in the azure application configurations, different paths where azure saves the data, but nothing came of it. I also tried to do this using the qdrant docker configuration, but nothing came of it either.
-","1. What are the logs saying in regard to your issue? You might want to increase the log level in your config.yml.
-This is how I would approach this (disclaimer: not tested):
-#  mount an Azure Files share to your Docker container
-docker run -d \
-    --name qdrant-container \
-    -v <azure_files_mount_point>:/mnt/qdrant-data \
-    -e AZURE_STORAGE_ACCOUNT=""<storage_account_name>"" \
-    -e AZURE_STORAGE_KEY=""<storage_account_key>"" \
-    your-qdrant-image
-
-config.yaml:
-# increase debug level
-log_level: DEBUG
-
-# specify the path on the mounted volume where you want to store Qdrant data
-storage:
-  storage_path: /mnt/qdrant-data
-
-
-2. To save data from Qdrant to Azure Blob Storage or File Share Storage, follow these steps:
-
-If you are using Azure Container Service, start by creating a storage account and then create a file share within that storage account.
-Use the following command to save the STORAGE_KEY:
-
-STORAGE_KEY=$(az storage account keys list --resource-group <resource_group_name> --account-name <storage_account_name>  --query ""[0].value"" --output tsv)
-
-
-Next, run the following command to create the Qdrant container, which will expose port 6333 and write to the Azure file share:
-
-az container create --resource-group <resource_group_name> \
-    --name qdrant --image qdrant/qdrant --dns-name-label qdrant \
-    --ports 6333 --azure-file-volume-account-name <storage_account_name> \
-    --azure-file-volume-account-key $STORAGE_KEY \
-    --azure-file-volume-share-name <file_share_name> \
-    --azure-file-volume-mount-path /qdrant/storage
-
-This will save data from Qdrant to Azure File Share Storage.
-",Qdrant
-"I am getting an error from qdrant when I try to add vectors. Error below
-UnexpectedResponse   Traceback (most recent call last)<ipython-input-36-42a89db32382> in  
-----> 3 add_vectors(embeddings, payload)
-
-6 frames
-/usr/local/lib/python3.10/dist-packages/qdrant_client/http/api_client.py in send(self, request, type_)
-     95             except ValidationError as e:
-     96                 raise ResponseHandlingException(e)
----> 97         raise UnexpectedResponse.for_response(response)
-     98 
-     99     def send_inner(self, request: Request) -> Response:
-
-UnexpectedResponse: Unexpected Response: 422 (Unprocessable Entity)
-Raw response content:
-b'{""status"":{""error"":""Validation error in path parameters: [name: value \\""status=<CollectionStatus.GREEN: \'green\'> optimizer_status=<OptimizersStatusOneOf.OK: \'ok\'> vectors_count=0 indexed_vectors_co ...'
-
-Below is my code:
-raw_text =""some long text....""
-
-def get_chunks(raw_text):
-    text_splitter = CharacterTextSplitter(
-        separator=""\n"",
-        chunk_size=100,
-        chunk_overlap=50,
-        length_function=len)
-    chunks = text_splitter.split_text(raw_text)
-    return chunks
-============
-def get_embeddings(chunks, embedding_model_name=""text-embedding-ada-002""):
-    client = OpenAI(api_key=os.environ['OPENAI_API_KEY'])
-    embeddings = []
-    for chunk in chunks:
-        embeddings.append(client.embeddings.create(
-            input=chunk, model=embedding_model_name).data[0].embedding)
-    return embeddings
- 
-===========================================================
-def add_vectors(vectors, payload):
-    client = qdrant_client.QdrantClient(
-        os.getenv(""QDRANT_HOST""),
-        api_key=os.getenv(""QDRANT_API_KEY"")
-    )
-    collection = client.get_collection(os.getenv(""QDRANT_COLLECTION""))
-
-    # Create a list of PointStruct objects
-    points = [
-        models.PointStruct(
-            id=str(i),  # Assign unique IDs to points
-            payload=payload,
-            vector=vector
-        )
-        for i, vector in enumerate(vectors)
-    ]
-
-    # Insert the points into the vector store
-    client.upsert(
-        collection_name=collection,  # Replace with your collection name
-        points=points
-    )
-
-==============================================================
-Then I make the below calls:
-chunks = get_chunks(raw_text)
-embeddings = get_embeddings(chunks)
-payload = {""user"": ""gxxxx""}
-add_vectors(embeddings, payload)
-
-and that's when I get the error mentioned above. What is the issue here?
-I have tried all kinds of suggestions from the internet.
-","1. You are using the output of get_collection as the collection name. get_collection returns information about the collection, like the fields you're seeing in the error message:
-'[name: value \\""status=<CollectionStatus.GREEN: \'green\'> optimizer_status=<OptimizersStatusOneOf.OK: \'ok\'> vectors_count=0 indexed_vectors_co ...'
-
-Solution: You should use the collection name directly in the collection_name parameter
-client.upsert(
-    collection_name=os.getenv(""QDRANT_COLLECTION""),
-    points=points
-)
-
-",Qdrant
-"Assume that I have saved 10 texts with embeddings into a collection of a Qdrant server, can a Qdrant client list the data in that collection?
-For example, the data and embeddings are as follows.
-
-text_1, embedding_1
-text_2, embedding_2
-text_3, embedding_3
-...
-
-Can a Qdrant client show the following lines to the user?
-
-text_1
-text_2
-text_3
-...
-
-I have browsed its documentation at https://qdrant.tech/documentation/. It seems to me that the client can do a similarity search but not list the data. Any suggestions?
-","1. You can use Qdrant's scroll API to get your points with the payload. It also supports filtering if needed.
-You can refer to the documentation at https://qdrant.tech/documentation/concepts/points/#scroll-points
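-A minimal sketch with the Python client (the collection name is a placeholder; scroll returns a page of points plus an offset for the next page):
-from qdrant_client import QdrantClient
-
-client = QdrantClient(host='localhost', port=6333)
-points, next_offset = client.scroll(
-    collection_name='texts',
-    limit=100,
-    with_payload=True,
-    with_vectors=False,
-)
-for point in points:
-    print(point.payload)  # e.g. the stored text_1, text_2, ...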
-",Qdrant
-"I have a dataframe in Pyspark -  df_all. It has some data and need to do the following
-count = ceil(df_all.count()/1000000)
-
-It gives the following error
-TypeError: Invalid argument, not a string or column: 0.914914 of type <class ‘float’>. For column literals, use ‘lit’, ‘array’, ‘struct’ or ‘create_map’ function.
-
-How can I use ceil function in pyspark?
-","1. Looks like for your requirement, this would be suitable:
-import math
-
-count = math.ceil(df_all.count()/1000000)
-
-",Qubole
-"I am using the below code to run in Qubole Notebook and the code is running successfully.
-case class cls_Sch(Id:String, Name:String)
-class myClass { 
-    implicit val sparkSession = org.apache.spark.sql.SparkSession.builder().enableHiveSupport().getOrCreate()
-    sparkSession.sql(""set spark.sql.crossJoin.enabled = true"")
-    sparkSession.sql(""set spark.sql.caseSensitive=false"")   
-    import sparkSession.sqlContext.implicits._
-    import org.apache.hadoop.fs.{FileSystem, Path, LocatedFileStatus, RemoteIterator, FileUtil}
-    import org.apache.hadoop.conf.Configuration 
-    import org.apache.spark.sql.DataFrame
-
-    def my_Methd() {                
-
-        var my_df = Seq((""1"",""Sarath""),(""2"",""Amal"")).toDF(""Id"",""Name"")      
-
-        my_df.as[cls_Sch].take(my_df.count.toInt).foreach(t => {            
-
-            println(s""${t.Name}"")
-
-        })              
-    }
-}
-val obj_myClass = new myClass()
-obj_myClass.my_Methd()
-
-
-However, when I run the same code in Qubole's Analyze, I get the below error.
-
-When I take out the below code, it runs fine in Qubole's Analyze.
-my_df.as[cls_Sch].take(my_df.count.toInt).foreach(t => {            
-
-            println(s""${t.Name}"")
-
-        })
-
-I believe somewhere I have to change the usage of case class. 
-I am using Spark 2.3.
-Can someone please let me know how to solve this issue.
-Please let me know if you need any other details.
-","1. For any reason the kernel finds problems when working with dataset. I made two tests that worked with Apache Toree:
-
-
-2. All you have to do is have the import spark.implicits._ inside the my_Methd() function. 
-def my_Methd() {   
-
-    import spark.implicits._     
-
-    var my_df = Seq((""1"",""Sarath""),(""2"",""Amal"")).toDF(""Id"",""Name"")      
-
-    my_df.as[cls_Sch].take(my_df.count.toInt).foreach(t => {            
-
-        println(s""${t.Name}"")
-
-    })              
-} 
-
-",Qubole
-"Trying to splint a string into multiple columns in qubole using presto query.
-{""field0"":[{""startdate"":""2022-07-13"",""lastnightdate"":""2022-07-16"",""adultguests"":5,""childguests"":0,""pets"":null}]}
-I would like startdate, lastnightdate, adultguests, childguests and pets each in its own column.
-I tried to unnest the string but that didn't work.
-","1. The data looks a lot like json, so you can process it using json functions first (parse, extract, cast to array(map(varchar, json)) or array(map(varchar, varcchar))) and then flatten with unnest:
--- sample data
-WITH dataset(json_payload) AS (
-    VALUES 
-        ('{""field0"":[{""startdate"":""2022-07-13"",""lastnightdate"":""2022-07-16"",""adultguests"":5,""childguests"":0,""pets"":null}]}')
-) 
-
--- query
-select m['startdate'] startdate,
-    m['lastnightdate'] lastnightdate,
-    m['adultguests'] adultguests,
-    m['childguests'] childguests,
-    m['pets'] pets
-from dataset,
-unnest(cast(json_extract(json_parse(json_payload), '$.field0') as array(map(varchar, json)))) t(m)
-
-Output:
-
-
-
-
-startdate  | lastnightdate | adultguests | childguests | pets
------------+---------------+-------------+-------------+------
-2022-07-13 | 2022-07-16    | 5           | 0           | null
-
-
-
-",Qubole
-"I am really new to Presto and having trouble pivoting data in it.
-The method I am using is the following:
-select
-distinct location_id,
-case when role_group = 'IT' then employee_number end as IT_emp_num,
-case when role_group = 'SC' then employee_number end as SC_emp_num,
-case when role_group = 'HR' then employee_number end as HR_emp_num
-from table
-where 1=1
-and id = 1234
-
-This is fine, however, null values are also populated for the rows and I would like to pivot the data, to only return one row with the relevant info.
-
-I have tried using the array_agg function, which will collapse the data, but it also keeps the null values (e.g. it will return null,301166,null for the first column).
-","1. If only one row per location is needed you can use max with group by:
-select location_id, 
-  max(IT_emp_num) IT_emp_num, 
-  max(SC_emp_num) SC_emp_num, 
-  max(HR_emp_num) HR_emp_num
-from (
-  select location_id,
-    case when role_group = 'IT' then employee_number end as IT_emp_num,
-    case when role_group = 'SC' then employee_number end as SC_emp_num,
-    case when role_group = 'HR' then employee_number end as HR_emp_num
-  from table
-  where id = 1234)
-group by location_id
-
-",Qubole
-"I am trying to use Valkey with the IDistributedCache interface in C#. I have pulled the Valkey container using docker pull valkey/valkey and I am running it on the default port 6379. However, there are zero nuget packages for Valkey.
-So I used the latest Redis packages Microsoft.Extensions.Caching.StackExchangeRedis 8.0.5 and StackExchange.Redis 2.7.33, but I always get this error when trying to save something to the cache. Did I misconfigure the connection to the container or are the nuget packages really incompatible? And if it is the latter, is there any library for using Valkey in C#?
-Thank you for your help!
-The message timed out in the backlog attempting to send because no connection became available (5000ms) - Last Connection Exception: 
-It was not possible to connect to the redis server(s). ConnectTimeout, command=EVAL, timeout: 5000, inst: 0, qu: 1, qs: 0, aw: False, bw: 
-SpinningDown, rs: NotStarted, ws: Idle, in: 0, last-in: 0, cur-in: 0, sync-ops: 1, async-ops: 0, serverEndpoint: localhost:6379, conn-sec: n/a, 
-aoc: 0, mc: 1/1/0, mgr: 10 of 10 available, clientName: MY-CLIENT-NAME(SE.Redis-v2.7.33.41805), IOCP: (Busy=0,Free=1000,Min=1,Max=1000), 
-WORKER: (Busy=2,Free=32765,Min=16,Max=32767), POOL: (Threads=8,QueuedItems=0,CompletedItems=268,Timers=6), v: 2.7.33.41805 
-(Please take a look at this article for some common client-side issues that can cause timeouts: https://stackexchange.github.io/StackExchange.Redis/Timeouts)
-
-","1. Short version: I suspect this is a container networking / forwarding setup issue, or in the more general case: firewall or DNS; suggestion: use redis-cli (or valkey-cli, or garnet-cli, etc) to connect to the server from your intended client node (probably your application server), and see if it can connect. redis-cli ping is always a good start.
-
-Long version (using naked OS for server):
-A quick test showed it working fine; server:
-marc@mgx:~/valkey/src$ ./valkey-server
-... snip
-                .+^+.
-            .+#########+.
-        .+########+########+.           Valkey 255.255.255 (9b6232b5/0) 64 bit
-    .+########+'     '+########+.
- .########+'     .+.     '+########.    Running in standalone mode
- |####+'     .+#######+.     '+####|    Port: 6379
- |###|   .+###############+.   |###|    PID: 11402
- |###|   |#####*'' ''*#####|   |###|
- |###|   |####'  .-.  '####|   |###|
- |###|   |###(  (@@@)  )###|   |###|          https://valkey.io
- |###|   |####.  '-'  .####|   |###|
- |###|   |#####*.   .*#####|   |###|
- |###|   '+#####|   |#####+'   |###|
- |####+.     +##|   |#+'     .+####|
- '#######+   |##|        .+########'
-    '+###|   |##|    .+########+'
-        '|   |####+########+'
-             +#########+'
-                '+v+'
-
-11402:M 17 May 2024 10:04:26.095 * Server initialized
-11402:M 17 May 2024 10:04:26.095 * Loading RDB produced by valkey version 255.255.255
-11402:M 17 May 2024 10:04:26.095 * RDB age 82 seconds
-11402:M 17 May 2024 10:04:26.095 * RDB memory usage when created 1.64 Mb
-11402:M 17 May 2024 10:04:26.095 * Done loading RDB, keys loaded: 41, keys expired: 0.
-11402:M 17 May 2024 10:04:26.096 * DB loaded from disk: 0.001 seconds
-11402:M 17 May 2024 10:04:26.096 * Ready to accept connections tcp
-
-test connection:
-➜ .\redis-cli.exe info server
-# Server
-redis_version:7.2.4
-server_name:valkey
-valkey_version:255.255.255
-...snip
-
-test setup:
-        svc.AddStackExchangeRedisCache(options =>
-        {
-            options.Configuration = ""127.0.0.1:6379"";
-        });
-
-those tests fail if I kill the server, but with it running:
-➜ dotnet test
-Restore complete (0.6s)
-.... snip
-Test summary: total: 292, failed: 0, succeeded: 292, skipped: 0, duration: 7.1s
-Build succeeded in 8.6s
-
-Basically: it looks fine to me! Note I'm using the .NET 9 dev branch, but I don't think that should matter here - or rather, if that was a factor, I'd expect to see RedisServerException complaining about unknown commands, rather than a connection error.
-",Redis
-"I have a hash pattern websocket:socket:*
-$redis->hMSet('websocket:socket:1', ['block' => 9866]);
-$redis->hMSet('websocket:socket:2', ['block' => 854]);
-$redis->hMSet('websocket:socket:3', ['block' => 854]);
-
-How can I fetch all hashes that match the pattern websocket:socket:*?
-Or what is the best way (performance-wise) to keep track of a list of items?
-","1. Redis does not provide search-by-value out of the box. You'll have to implement some kind of indexing yourself.
-Read more about indexing in Redis at Secondary indexing with Redis (or use RediSearch).
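-For example, a common hand-rolled index is to keep the ids in a SET next to the hashes (a sketch using the same phpredis client as in the question; the set key name is arbitrary):
-$redis->hMSet('websocket:socket:1', ['block' => 9866]);
-$redis->sAdd('websocket:sockets', '1');
-
-// later: fetch every tracked hash without KEYS/SCAN
-$sockets = [];
-foreach ($redis->sMembers('websocket:sockets') as $id) {
-    $sockets[$id] = $redis->hGetAll('websocket:socket:' . $id);
-}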
-
-2. Update: Newer versions of redis let you scan by ""type hash"" and use ""match foo*"" so you can now scan 0 type hash match websocket:socket:* to answer the original question.
-Here I have several pieces of data for my go-crawler, so I can keys go-crawler*, and one of these is a hash (stats), so I can see that with scan 0 type hash match go-crawler:*. Once I have that I can hgetall go-crawler:request:site:stats or hkeys go-crawler:request:site:stats, though I don't know of a way to filter those keys by ""match"".
-Here's a redis-cli example
-127.0.0.1:6379> scan 0 type hash
-1) ""0""
-2) 1) ""go-crawler:request:site:stats""
-127.0.0.1:6379> scan 0 type hash match ""go-crawler:*""
-1) ""0""
-2) 1) ""go-crawler:request:site:stats""
-127.0.0.1:6379> hset thayer one 1
-(integer) 1
-127.0.0.1:6379> scan 0 type hash match ""go-crawler:*""
-1) ""0""
-2) 1) ""go-crawler:request:site:stats""
-127.0.0.1:6379> scan 0 type hash
-1) ""0""
-2) 1) ""thayer""
-   2) ""go-crawler:request:site:stats""
-127.0.0.1:6379> 
-:::: graphite 01:30:52 (main) 0 crawl-mo; redis-cli
-127.0.0.1:6379> keys go-crawler*
-1) ""go-crawler:link:queue""
-2) ""go-crawler:request:site:stats""
-3) ""go-crawler:task:queue""
-127.0.0.1:6379> hgetall thayer
-1) ""one""
-2) ""1""
-
-
-",Redis
-"I am trying to use Google's sign in api using koa and passport. I'm creating a new GoogleStrategy and that seems to work fine 
-My issue is in the routes: I don't want to redirect the user just yet, I want to send some of the user's info from my DB to the front end. I've tried passing in a function instead of successRedirect but I am not having any luck. I am new to Koa and RethinkDB (not sure if it matters in this case). Any ideas would be helpful, thanks.
-//Routes
-router.get('/auth/google', passport.authenticate('google' {session:false, scope:['email','profile'], accessType: 'offline', approvalPrompt: 'force'}));
-
-router.get('/auth/google/callback', 
-  passport.authenticate('google'),{successRedirect:'/home', failureRedirect:'/'}
-);
-
-","1. user passport-koa lib, its
-a koa plugin for passport.js
-import passport from ""passport"";
-import passportKoa from 'passport-koa'
-
-// use passport-koa
-passport.framework(passportKoa);
-
-then use passport as always
-",RethinkDB
-"I'm trying to make a wrapper module for the RethinkDB API and I've come across an AttributeError when importing my class(called rethinkdb.py). I'm working in a virtual machine having a shared folder 'Github'.
-I do this in IPython console:
-import library.api.rethinkdb as re
-
-This is the error:
-
-Traceback (most recent call last):
-File """", line 1, in 
-      import library.api.rethinkdb as re
-File ""/media/sf_GitHub/library/api/rethinkdb.py"", line 51,
-  in 
-      conn = Connection().connect_to_database()
-File ""/media/sf_GitHub/library/api/rethinkdb.py"", line 48,
-  in connect_to_database
-      raise e
-AttributeError: 'module' object has no attribute 'connect'
-
-This is the code:
-import rethinkdb as r  #The downloaded RethinkDB module from http://rethinkdb.com/
-
-class Connection(object):
-    def __init__(self, host='127.0.0.1', port=28015, database=None, authentication_key=''):
-        self.host = host
-        self.port = port
-        if database is None:
-            self.db = 'test'
-        self.auth_key = authentication_key
-
-    def connect_to_database(self):
-        try:
-            conn = r.connect(self.host, self.port, self.db, self.auth_key)
-        except Exception, e:
-            raise e
-        return conn    
-
-conn = Connection().connect_to_database()
-
-","1. I ran into something similar today and I noticed the authors have changed basic behavior of the API in the later versions.
-From what I have tested on my machine: 
-v2.3.0
-import rethinkdb as r
-r.connect()
-
-v2.4.1
-import rethinkdb as r
-rdb = r.RethinkDB()
-rdb.connect()
-
-
-2. It worked for me when I ran:
-import rethinkdb as rdb
-r = rdb.RethinkDB()
-r.connect('localhost', 28015).repl()
-
-",RethinkDB
-"I am trying to use rethinkdb and test it via expresso. I have function
-module.exports.setup = function() {
-  var deferred = Q.defer();
-  r.connect({host: dbConfig.host, port: dbConfig.port }, function (err, connection) {
-     if (err) return deferred.reject(err);
-     else deferred.resolve();
-  });
- return deferred.promise;
-});
-
-I am testing it like this
-  module.exports = {
-    'setup()': function() {
-        console.log(""in setup rethink"");
-
-        db.setup().then(function(){
-            console.log(clc.green(""Sucsessfully connected to db!""));
-        }).catch(function(err){
-            console.log('error');
-            assert.isNotNull(err, ""error"");
-        });
-        
-    }
-  };
-
-And I am running the code like this:
-expresso db.test.js 
-
-But expresso reports 100% 1 tests even in case of an error.
-I tried to put throw err; in the catch, but nothing changes.
-But if I put assert.eql(1, 2, ""error""); at the beginning of setup() it fails as expected.
-Is there something that catches errors? How can I make it fail as it should be?
-For sequelize I found
-Sequelize.Promise.onPossiblyUnhandledRejection(function(e, promise) {
-    throw e;
-});
-
-Is there something like this for rethink db?
-","1. The problem is that this test is asynchronous, and you're treating it as a synchronous test. You need to do the following:
-  module.exports = {
-    'setup()': function(beforeExit, assert) {
-        var success;
-        db.setup().then(function(){
-            success = true;
-        }).catch(function(err){
-            success = false;
-            assert.isNotNull(err, ""error"");
-        });
-
-        beforeExit(function() {
-            assert.isNotNull(undefined, 'Ensure it has waited for the callback');
-        });
-    }
-  };
-
-Mocha vs Express
-You should consider taking a look at mocha.js, which has a much nicer API for asynchronous operations by passing the done function. The same test would look like this:
-  module.exports = {
-    'setup()': function(done) {
-        db.setup().then(function(){
-            assert.ok(true);
-        }).catch(function(err){
-            assert.isNotNull(err, ""error"");
-        })
-        .then(function () {
-            done();
-        });
-    }
-  };
-
-Promises
-The first function you wrote can be re-written in the following manner, because the RethinkDB driver, by default, returns a promise on all operations.
-module.exports.setup = function() {
-    return r.connect({host: dbConfig.host, port: dbConfig.port });
-});
-
-",RethinkDB
-"I have an array which look like this and its working fine but I need to arrange them groupwise like DateWise:
-[ 
-  {
-    name: 'Ali',
-    startedAt: Wed Dec 28 2016 15:32:07 GMT+0500 (Pakistan Standard Time),
-    userId: '0002'
-  },
-  {
-    completedAt: Wed Dec 28 2016 12:51:47 GMT+0500 (Pakistan Standard Time),
-    name: 'Aliy',
-    startedAt: Wed Dec 28 2016 12:43:19 GMT+0500 (Pakistan Standard Time),
-    userId: '212'
-  },
-  {
-    name: 'Aliy',
-    startedAt: Wed Dec 28 2016 12:43:06 GMT+0500 (Pakistan Standard Time),
-    userId: '2121'
-  }
-]
-
-I have code which groups them on the basis of startedAt and it's working fine, but the problem is that I want only the date part, like 28/12/2016.
-The below code is used for grouping:
-var groupedData = _.groupBy(data, function(d) {
-   return d.startedAt;
-});
-
-","1. On any modern browser, you can use the first 10 characters of the return value of toISOString:
-var groupedData = _.groupBy(datas, function(d){
-   return d.startedAt.toISOString().substring(0, 10);
-});
-
-The first ten characters are the year, then month, then date, zero-padded, e.g. 2017-01-17.
-Note that that will group them by the day in UTC, not local time.
-Also note that toISOString was added in 2009 as part of ES5.
-If you need to support obsolete browsers that don't have it, or you need to use local time, just build a string from the parts of the date you need, padding as necessary, using getFullYear/getMonth/getDate or getUTCFullYear/getUTCMonth/getUTCDate as needed.
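-For instance, a minimal local-time sketch (assuming the same datas array and lodash _ as above):
-function pad(n) { return n < 10 ? '0' + n : '' + n; }
-
-var groupedData = _.groupBy(datas, function(d) {
-   // Local date key, e.g. 2016-12-28 (getMonth() is zero-based, hence the + 1)
-   return d.startedAt.getFullYear() + '-' +
-          pad(d.startedAt.getMonth() + 1) + '-' +
-          pad(d.startedAt.getDate());
-});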
-",RethinkDB
-"Spring can manage transactions using @Transaction annotation.
-Can we manage ScalarDB transactions with @Transaction annotation?
-","1. Scalar DB doesn't support Spring annotations at the moment.
-
-2. ScalarDB 3.8 was released yesterday and it supports Spring Data integration. With the integration, users can use ScalarDB via Spring Data JDBC API like the following example:
-@Repository
-public interface NorthAccountRepository
-    extends PagingAndSortingRepository<NorthAccount, Integer>,
-        ScalarDbHelperRepository<NorthAccount> {
-
-  @Transactional
-  default void transferToSouthAccount(
-      @Nonnull SouthAccountRepository southAccountRepository,
-      int fromId, int toId, int value) {
-
-    NorthAccount fromEntity =
-        findById(fromId).orElseThrow(() -> new AssertionError(""Not found: "" + fromId));
-    SouthAccount toEntity =
-        southAccountRepository
-            .findById(toId)
-            .orElseThrow(() -> new AssertionError(""Not found: "" + toId));
-    update(new NorthAccount(fromEntity.id, fromEntity.balance - value));
-    southAccountRepository.update(new SouthAccount(toEntity.id, toEntity.balance + value));
-  }
-
-  @Transactional
-  default void deleteAfterSelect(int id) {
-    findById(id).ifPresent(this::delete);
-  }
-}
-
-It also provides explicit insert(T) and update(T) APIs in addition to the existing save(T).
-The integration is available under the commercial license.
-",ScalarDB
-"In order to use multi-storage in Scalar DB, I am implementing it with MySQL and Dynamo DB Local, but the Endpoint Override setting for Dynamo DB Local does not work.
-I have configured the following settings, but are they correct?
-## Dynamo DB for the transaction tables
-scalar.db.multi_storage.storages.dynamo.storage=dynamo
-scalar.db.multi_storage.storages.dynamo.contact_points=ap-northeast-1
-scalar.db.multi_storage.storages.dynamo.username=fakeMyKeyId
-scalar.db.multi_storage.storages.dynamo.password=fakeMyKeyId
-scalar.db.multi_storage.storages.dynamo.contact_port=8000
-scalar.db.multi_storage.storages.dynamo.endpoint-override=http://localhost:8000
-
-","1. The format of the storage definition in Multi-storage configuration is as follows:
-scalar.db.multi_storage.storages.<storage name>.<property name without the prefix 'scalar.db.'>
-
-For example, if you want to specify the scalar.db.contact_points property for the cassandra storage, you can specify scalar.db.multi_storage.storages.cassandra.contact_points.
-In your case, the storage name is dynamo, and you want to specify the scalar.db.dynamo.endpoint-override property, so you need to specify scalar.db.multi_storage.storages.dynamo.dynamo.endpoint-override as follows:
-scalar.db.multi_storage.storages.dynamo.dynamo.endpoint-override=http://localhost:8000
-
-Please see the following document for the details:
-https://github.com/scalar-labs/scalardb/blob/master/docs/multi-storage-transactions.md
-",ScalarDB
-"When using Scalar DB on Azure Cosmos DB, I'm considering the use of zone redundancy configuration to increase availability.
-Is it possible to use Scalar DB on Azure Cosmos DB in a single region zone redundancy configuration? The consistency level of Cosmos DB is Strong.
-","1. Scalar DB can work with multiple zones as long as zone redundancy supports Strong consistency.
-However, since the implementation of Cosmos DB is not disclosed, please check with Azure technical support to see if Strong consistency works properly with multiple zones.
-",ScalarDB
-"I want to use ScalarDB with a schema called user created in DynamoDB.
-As an example, the user schema is defined as follows
-{
-  ""sample_db.user"": {
-    ""transaction"": true,
-    ""partition-key"": [
-      ""user_id"".
-    ],
-    ""clustering-key"": [],
-    ""columns"": {
-      ""user_id"": ""TEXT"",
-      ""user_name"": ""TEXT"",
-      ""status"": ""TEXT""
-    },
-    ""ru"": 5000,
-    ""compaction-strategy"": ""LCS"",
-    ""secondary-index"": [
-      ""status"": ""TEXT""
-    ]
-  }
-}
-
-I was able to create this user schema in DynamoDB.
-However, when I perform CRUD processing on this schema using the ScalarDB functionality, DynamoDB returns a syntax violation error because the 'status' is a reserved word.
-DynamoDB's reserved words are summarized here.
-https://docs.aws.amazon.com/ja_jp/amazondynamodb/latest/developerguide/ReservedWords.html
-In this case, I would like to know if engineers using ScalarDB should define their schema with this issue in mind.
-I'd be happy if future improvements would make it possible to use database-specific reserved words in column names and still use the ScalarDB functionality.
-","1. This issue is fixed in the following PR:
-https://github.com/scalar-labs/scalardb/pull/264
-And this fix will be in the next releases: Scalar DB 3.2.0, 3.1.1 and 3.0.2.
-",ScalarDB
-"I want to expire a status field in my users table, just like how slack expires a status after the expiry set by the user.
-I have integrated scylla cdc but scylla cdc does not give me the updated response after status is expired.
-My project's language is golang and DB is scylla. So anyone have any other idea than this, please suggest.
-I have tried integrating scylla cdc but it did not returns me updated result after a status is expired (used TTL in the query), I was expecting that scylla cdc will give me a response than I will send that response to my server to notify other users but it did not happened.
-","1. You are right, ScyllaDB's CDC does not send events for the eventual expiration of data. You get the event about the change that originally set the data, and this event contains the full TTL information about when this data is set to expire - but you don't get an additional event when (say) a week later the data really expires. This is a known limitation of ScyllaDB, see for example issue #8380.
-This fact is not a simple bug or oversight, and will be hard to fix because of how TTL works in ScyllaDB (and Cassandra), and also how CDC works in ScyllaDB:
-
-ScyllaDB's TTL does not use an expensive background thread, or queues or timers or anything of that sort to actively look for expired data and delete it. Rather, only when you try to read expired data, it is filtered out from the results, and only eventually when data is compacted the expired data is really deleted from disk. This means that if some data expires
-at midnight January 1st, there isn't any Scylla process that notices that this moment of midnight January 1st has arrived and can generate the CDC event. It is quite possible that until February no read or compaction will even try to read this data and notice it had already expired.
-Even if you're content to get the expiration event delayed, only when compaction finally notices it (note that this can be an arbitrarily long delay!), the problem is that different replicas of the data (you have RF, usually 3, copies of the same data) will notice this at different times. Scylla's CDC can't work this way: It needs the same event to appear on the RF replicas at roughly the same time.
-Finally, note that Scylla's TTL feature is per cell (value of a single column) - a row doesn't need to expire entirely at one time, and pieces of it can expire at different times. Even if CDC could send such events, they would look rather odd - not a row deletion but deletions of pieces of the row.
-
-All of the above is about Scylla's CQL and CDC. But Scylla also has support for the DynamoDB API (this feature is known as ScyllaDB Alternator), and the DynamoDB API does have Streams (their version of CDC) and TTL on entire rows, and does generate expiration events on CDC. So how does Scylla implement this? It turns out that to implement this, Scylla indeed uses a background thread that actively looks for expired items, and deletes them while creating CDC events.
-In issue #13000 I proposed to expose this alternative TTL implementation also for CQL. It will behave differently from the existing CQL TTL, and have lower performance - but some might argue it will be more useful.
-",Scylla
-"I am looking for an easy way in CQL to show a human readable form of the timestamp returned by writetime(). I could not find anything googling, is that possible at all?
-","1. In theory, the timestamp used in writes in CQL (ScyllaDB and Cassandra) is just a number - with higher numbers indicating newer updates so they win in conflict resolution. Again in theory, a CQL update can specify any random and meaningless number as the timestamp by adding a ""USING TIMESTAMP"" option on the write.
-However, as a convention, we almost always use as a timestamp the number of microseconds since the UNIX epoch (January 1st, 1970). This convention is enforced when the server is asked to pick a timestamp (this also includes the case of LWT), and also by library implementations (that use the current time to calculate a client-side timestamp), so for all intents and purposes you can assume that the timestamps you will read with writetime() in your applications will be like that - number of microseconds since the epoch.
-You can convert these numbers to other formats using whatever tools exist in your application language to convert date formats. For example in Python:
->>> writetime=1699782709000000
->>> from time import strftime, gmtime
->>> strftime('%Y-%m-%d %H:%M:%S', gmtime(writetime/1e6))
-'2023-11-12 09:51:49'
-
-Note how we divided writetime by 1e6 (i.e., 1 million) to convert microseconds to seconds since the epoch, then used gmtime() to say that this number uses the GMT (a.k.a. UTC) timezone, not the local one on your computer, and finally strftime() to convert this into a date string in your choice of format.
-
-2. I'm not familiar with Scylla, but in Cassandra you can convert writetime to a timestamp with TOTIMESTAMP(MINTIMEUUID(WRITETIME(val)/1000)) as writetime_tm
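-For example (a sketch with hypothetical keyspace, table and column names):
-SELECT val,
-       TOTIMESTAMP(MINTIMEUUID(WRITETIME(val) / 1000)) AS writetime_tm
-FROM my_keyspace.my_table;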
-",Scylla
-"2023/12/25 17:16:19.022 INFO  [x-transaction-id: ys-139956de696c4ca8bd706ce9e5ce2816] [tomcat-handler-4] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
-2023/12/25 17:16:19.028 INFO  [x-transaction-id: ys-139956de696c4ca8bd706ce9e5ce2816] [tomcat-handler-4] com.zaxxer.hikari.HikariDataSource : HikariPool-22 - Starting...
-2023/12/25 17:16:19.029 INFO  [x-transaction-id: ys-139956de696c4ca8bd706ce9e5ce2816] [tomcat-handler-4] com.zaxxer.hikari.pool.HikariPool : HikariPool-22 - Added connection conn50: url=jdbc:h2:mem:config user=SA
-2023/12/25 17:16:19.029 INFO  [x-transaction-id: ys-139956de696c4ca8bd706ce9e5ce2816] [tomcat-handler-4] com.zaxxer.hikari.HikariDataSource : HikariPool-22 - Start completed.
-2023/12/25 17:16:19.030 INFO  [x-transaction-id: ys-139956de696c4ca8bd706ce9e5ce2816] [tomcat-handler-4] com.zaxxer.hikari.HikariDataSource : HikariPool-23 - Starting...
-2023/12/25 17:16:19.036 INFO  [x-transaction-id: ys-139956de696c4ca8bd706ce9e5ce2816] [tomcat-handler-4] com.zaxxer.hikari.pool.HikariPool : HikariPool-23 - Added connection com.mysql.cj.jdbc.ConnectionImpl@2a513389
-2023/12/25 17:16:19.037 INFO  [x-transaction-id: ys-139956de696c4ca8bd706ce9e5ce2816] [tomcat-handler-4] com.zaxxer.hikari.HikariDataSource : HikariPool-23 - Start completed.
-2023/12/25 17:16:19.037 INFO  [x-transaction-id: ys-139956de696c4ca8bd706ce9e5ce2816] [tomcat-handler-4] com.zaxxer.hikari.HikariDataSource : HikariPool-24 - Starting...
-2023/12/25 17:16:19.042 INFO  [x-transaction-id: ys-139956de696c4ca8bd706ce9e5ce2816] [tomcat-handler-4] com.zaxxer.hikari.pool.HikariPool : HikariPool-24 - Added connection com.mysql.cj.jdbc.ConnectionImpl@af5a9ec
-2023/12/25 17:16:19.042 INFO  [x-transaction-id: ys-139956de696c4ca8bd706ce9e5ce2816] [tomcat-handler-4] com.zaxxer.hikari.HikariDataSource : HikariPool-24 - Start completed.
-2023/12/25 17:16:19.042 INFO  [x-transaction-id: ys-139956de696c4ca8bd706ce9e5ce2816] [tomcat-handler-4] com.zaxxer.hikari.HikariDataSource : HikariPool-25 - Starting...
-2023/12/25 17:16:19.047 INFO  [x-transaction-id: ys-139956de696c4ca8bd706ce9e5ce2816] [tomcat-handler-4] com.zaxxer.hikari.pool.HikariPool : HikariPool-25 - Added connection com.mysql.cj.jdbc.ConnectionImpl@392a56e9
-2023/12/25 17:16:19.047 INFO  [x-transaction-id: ys-139956de696c4ca8bd706ce9e5ce2816] [tomcat-handler-4] com.zaxxer.hikari.HikariDataSource : HikariPool-25 - Start completed.
-jakarta.servlet.ServletException: Handler dispatch failed: java.lang.NoSuchMethodError: org.yaml.snakeyaml.representer.Representer: method 'void <init>()' not found
-at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1104)
-at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:979)
-at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1014)
-at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:903)
-at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:564)
-at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:885)
-at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:658)
-at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:205)
-at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:149)
-at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:51)
-at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:174)
-at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:149)
-at com.ys.order.filter.MDCTraceFilter.doFilter(MDCTraceFilter.java:35)
-at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:174)
-at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:149)
-at org.springframework.web.filter.ServerHttpObservationFilter.doFilterInternal(ServerHttpObservationFilter.java:109)
-at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
-at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:174)
-at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:149)
-at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201)
-at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116)
-at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:174)
-at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:149)
-at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167)
-at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90)
-at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:482)
-at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:115)
-at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93)
-at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74)
-at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:340)
-at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:391)
-at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63)
-at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:896)
-at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1744)
-at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52)
-at java.base/java.lang.VirtualThread.run(VirtualThread.java:309)
-Caused by: java.lang.NoSuchMethodError: org.yaml.snakeyaml.representer.Representer: method 'void <init>()' not found
-at org.apache.shardingsphere.infra.util.yaml.representer.ShardingSphereYamlRepresenter.<init>(ShardingSphereYamlRepresenter.java:42)
-at org.apache.shardingsphere.infra.util.yaml.YamlEngine.marshal(YamlEngine.java:112)
-at org.apache.shardingsphere.metadata.persist.service.config.global.NewPropertiesPersistService.persist(NewPropertiesPersistService.java:51)
-at org.apache.shardingsphere.metadata.persist.NewMetaDataPersistService.persistGlobalRuleConfiguration(NewMetaDataPersistService.java:97)
-at org.apache.shardingsphere.mode.metadata.NewMetaDataContextsFactory.persistDatabaseConfigurations(NewMetaDataContextsFactory.java:147)
-at org.apache.shardingsphere.mode.metadata.NewMetaDataContextsFactory.create(NewMetaDataContextsFactory.java:102)
-at org.apache.shardingsphere.mode.metadata.NewMetaDataContextsFactory.create(NewMetaDataContextsFactory.java:71)
-at org.apache.shardingsphere.mode.manager.standalone.NewStandaloneContextManagerBuilder.build(NewStandaloneContextManagerBuilder.java:53)
-at org.apache.shardingsphere.driver.jdbc.core.datasource.ShardingSphereDataSource.createContextManager(ShardingSphereDataSource.java:78)
-at org.apache.shardingsphere.driver.jdbc.core.datasource.ShardingSphereDataSource.<init>(ShardingSphereDataSource.java:66)
-at org.apache.shardingsphere.driver.api.ShardingSphereDataSourceFactory.createDataSource(ShardingSphereDataSourceFactory.java:95)
-at org.apache.shardingsphere.driver.api.yaml.YamlShardingSphereDataSourceFactory.createDataSource(YamlShardingSphereDataSourceFactory.java:167)
-at org.apache.shardingsphere.driver.api.yaml.YamlShardingSphereDataSourceFactory.createDataSource(YamlShardingSphereDataSourceFactory.java:102)
-at org.apache.shardingsphere.driver.jdbc.core.driver.DriverDataSourceCache.createDataSource(DriverDataSourceCache.java:52)
-at org.apache.shardingsphere.driver.jdbc.core.driver.DriverDataSourceCache.lambda$get$0(DriverDataSourceCache.java:46)
-at java.base/java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1708)
-at org.apache.shardingsphere.driver.jdbc.core.driver.DriverDataSourceCache.get(DriverDataSourceCache.java:46)
-at org.apache.shardingsphere.driver.ShardingSphereDriver.connect(ShardingSphereDriver.java:53)
-at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:121)
-at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:359)
-at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:201)
-at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:470)
-at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:561)
-at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:100)
-at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:112)
-at org.springframework.jdbc.datasource.DataSourceUtils.fetchConnection(DataSourceUtils.java:160)
-at org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSourceUtils.java:118)
-at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:81)
-at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:388)
-at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:476)
-at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:486)
-at org.springframework.jdbc.core.JdbcTemplate.queryForList(JdbcTemplate.java:536)
-at com.ys.order.controller.TestController.test4(TestController.java:133)
-at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
-at java.base/java.lang.reflect.Method.invoke(Method.java:580)
-at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:352)
-at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:196)
-at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
-at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:765)
-at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)
-at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:184)
-at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:765)
-at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:717)
-at com.ys.order.controller.TestController$$SpringCGLIB$$0.test4()
-at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
-at java.base/java.lang.reflect.Method.invoke(Method.java:580)
-at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:254)
-at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:182)
-at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:118)
-at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:917)
-at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:829)
-at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)
-at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1089)
-... 35 more
-","1. 
-The reason why the old version of ShardingSphere cannot change the
-SnakeYAML version is that ElasticJob uses the old version of the
-SnakeYAML API. Therefore, only when ElasticJob makes changes and
-releases 3.0.4, ShardingSphere can make changes.
-At the current stage, you need to compile this project manually and
-install the corresponding 5.4.2 snapshot version of ShardingSphere
-into the local maven repo through Maven's install goal, or deploy it
-into a private maven repo.
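-In practice that means something along the lines of (a sketch, assuming a local checkout of the apache/shardingsphere repository):
-git clone https://github.com/apache/shardingsphere.git
-cd shardingsphere
-mvn clean install -DskipTests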
-
-",ShardingSphere
-"I am learning to use the spring boot framework to integrate MyBatis Plus and ShardingSphere to achieve MySQL master-slave read-write separation (one master and one slave)。
-The master library is the 3306 port of my local localhost, and the slave library is in my locally installed Docker and uses the 3309 port mapping of the local localhost。
-After testing, I can log in to the database on ports 3306 and 3309 locally.
-
-I'm writing my code in Visual Studio Code; the master and slave are MySQL 8.0+.
-
-But when I started the spring boot project I got this error message:
-
-Description:
-Failed to configure a DataSource: 'url' attribute is not specified and no embedded datasource could be configured.
-Reason: Failed to determine a suitable driver class
-
-Below is the relevant configuration of my project:
-application.yml:
-server:
-    port: 8080
-
-spring:
-    shardingsphere:
-        datasource:
-            names:
-                master,slave
-            master:
-                type: com.alibaba.druid.pool.DruidDataSource
-                driver-class-name: com.mysql.cj.jdbc.Driver
-                url: jdbc:mysql://localhost:3306/rw?serverTimezone=Asia/Shanghai&useUnicode=true&characterEncoding=utf-8&zeroDateTimeBehavior=convertToNull&useSSL=false&allowPublicKeyRetrieval=true
-                username: root
-                password: 5508769123
-            slave:
-                type: com.alibaba.druid.pool.DruidDataSource
-                driver-class-name: com.mysql.cj.jdbc.Driver
-                url: jdbc:mysql://localhost:3309/rw?serverTimezone=Asia/Shanghai&useUnicode=true&characterEncoding=utf-8&zeroDateTimeBehavior=convertToNull&useSSL=false&allowPublicKeyRetrieval=true
-                username: root
-                password: 5508769123
-        sharding:
-        masterslave:
-            load-balance-algorithm-type: round_robin
-            name: dataSource
-            master-data-source-name: master
-            slave-data-source-names: slave
-        props:
-            sql:
-                show: true 
-    main:
-        allow-bean-definition-overriding: true
-
-
-mybatis-plus:
-    configuration:
-        map-underscore-to-camel-case: true
-        log-impl: org.apache.ibatis.logging.stdout.StdOutImpl
-    global-config:
-        db-config:
-            id-type: ASSIGN_ID
-
-pom.xml:
-<?xml version=""1.0"" encoding=""UTF-8""?>
-<project xmlns=""http://maven.apache.org/POM/4.0.0"" xmlns:xsi=""http://www.w3.org/2001/XMLSchema-instance""
-    xsi:schemaLocation=""http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"">
-    <modelVersion>4.0.0</modelVersion>
-    <parent>
-        <groupId>org.springframework.boot</groupId>
-        <artifactId>spring-boot-starter-parent</artifactId>
-        <version>3.1.4</version>
-        <relativePath/>
-    </parent>
-    <groupId>com.mercurows</groupId>
-    <artifactId>rw_demo</artifactId>
-    <version>0.0.1-SNAPSHOT</version>
-    <packaging>war</packaging>
-    <name>rw_demo</name>
-    <description>Demo project for Spring Boot</description>
-    <properties>
-        <java.version>17</java.version>
-    </properties>
-    <dependencies>
-
-        <dependency>
-            <groupId>org.springframework.boot</groupId>
-            <artifactId>spring-boot-starter-thymeleaf</artifactId>
-        </dependency>
-        <dependency>
-            <groupId>org.springframework.boot</groupId>
-            <artifactId>spring-boot-starter-web</artifactId>
-        </dependency>
-
-        <dependency>
-            <groupId>org.springframework.boot</groupId>
-            <artifactId>spring-boot-devtools</artifactId>
-            <scope>runtime</scope>
-            <optional>true</optional>
-        </dependency>
-        <dependency>
-            <groupId>com.mysql</groupId>
-            <artifactId>mysql-connector-j</artifactId>
-            <scope>runtime</scope>
-        </dependency>
-        <dependency>
-            <groupId>org.projectlombok</groupId>
-            <artifactId>lombok</artifactId>
-            <optional>true</optional>
-        </dependency>
-        <dependency>
-            <groupId>org.springframework.boot</groupId>
-            <artifactId>spring-boot-starter-test</artifactId>
-            <scope>test</scope>
-        </dependency>
-        <dependency>
-            <groupId>mysql</groupId>
-            <artifactId>mysql-connector-java</artifactId>
-            <version>8.0.30</version>
-        </dependency>
-
-        <dependency>
-            <groupId>com.alibaba</groupId>
-            <artifactId>druid</artifactId>
-            <version>1.1.22</version>
-        </dependency>
-        <dependency>
-            <groupId>com.baomidou</groupId>
-            <artifactId>mybatis-plus-boot-starter</artifactId>
-            <version>3.5.3</version>
-            <!-- <exclusions>
-                <exclusion>
-                    <groupId>org.springframework.boot</groupId>
-                    <artifactId>spring-boot-starter-jdbc</artifactId>
-                </exclusion>
-                <exclusion>
-                    <groupId>org.springframework.boot</groupId>
-                    <artifactId>spring-boot-autoconfigure</artifactId>
-                </exclusion>
-                <exclusion>
-                    <artifactId>spring-boot-autoconfigure</artifactId>
-                    <groupId>org.springframework.boot</groupId>
-                </exclusion>
-                <exclusion>
-                    <artifactId>spring-boot-autoconfigure</artifactId>
-                    <groupId>org.springframework.boot</groupId>
-                </exclusion>
-            </exclusions> -->
-        </dependency>
-        <dependency>
-            <groupId>com.alibaba</groupId>
-            <artifactId>fastjson</artifactId>
-            <version>2.0.39</version>
-        </dependency>
-
-        <!-- Import the read/write-splitting dependency -->
-        <dependency>
-            <groupId>org.apache.shardingsphere</groupId>
-            <artifactId>sharding-jdbc-spring-boot-starter</artifactId>
-            <!-- <version>4.0.0-RC1</version> -->
-            <version>4.1.0</version>
-        </dependency>
-    </dependencies>
-
-    <build>
-        <plugins>
-            <plugin>
-                <groupId>org.springframework.boot</groupId>
-                <artifactId>spring-boot-maven-plugin</artifactId>
-                <configuration>
-                    <excludes>
-                        <exclude>
-                            <groupId>org.projectlombok</groupId>
-                            <artifactId>lombok</artifactId>
-                        </exclude>
-                    </excludes>
-                </configuration>
-            </plugin>
-        </plugins>
-    </build>
-
-</project>
-
-
-My main:
-package com.mercurows;
-
-import org.springframework.boot.SpringApplication;
-import org.springframework.boot.autoconfigure.SpringBootApplication;
-import org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration;
-
-// import com.alibaba.druid.spring.boot.autoconfigure.DruidDataSourceAutoConfigure;
-
-import lombok.extern.slf4j.Slf4j;
-
-@Slf4j
-// @SpringBootApplication(exclude = DruidDataSourceAutoConfigure.class)
-// @SpringBootApplication(exclude = {DataSourceAutoConfiguration.class })
-@SpringBootApplication
-public class RwDemoApplication {
-
-    public static void main(String[] args) {
-        SpringApplication.run(RwDemoApplication.class, args);
-        log.info(""项目启动成功。。。"");
-    }
-}
-
-
-Thank you for taking the time to read my question.
-I'm looking forward to any possible solutions.
-I've tried:
-
-@SpringBootApplication(exclude = DruidDataSourceAutoConfigure.class)
-@SpringBootApplication(exclude = {DataSourceAutoConfiguration.class })
-Add exclusions in mybatis-plus-boot-starter:
-    <dependency>
-        <groupId>com.baomidou</groupId>
-        <artifactId>mybatis-plus-boot-starter</artifactId>
-        <version>3.5.3</version>
-        <exclusions>
-            <exclusion>
-                <groupId>org.springframework.boot</groupId>
-                <artifactId>spring-boot-starter-jdbc</artifactId>
-            </exclusion>
-            <exclusion>
-                <groupId>org.springframework.boot</groupId>
-                <artifactId>spring-boot-autoconfigure</artifactId>
-            </exclusion>
-            <exclusion>
-                <artifactId>spring-boot-autoconfigure</artifactId>
-                <groupId>org.springframework.boot</groupId>
-            </exclusion>
-            <exclusion>
-                <artifactId>spring-boot-autoconfigure</artifactId>
-                <groupId>org.springframework.boot</groupId>
-            </exclusion>
-        </exclusions>
-    </dependency>
-
-
-Version changes of some dependencies
-
-Expecting: start the project normally and achieve read-write separation between the master and slave databases.
-","1. The sharding-jdbc-spring-boot-starter in version 4.1.1 is over 3 years old, which was before Spring Boot 3 and the JakartaEE migration. So it seems to be incompatible.
-Looking at the current documentation there is no longer a dedicated starter for Spring Boot and the whole architecture has also changed. It is now ""just a Driver"" which you need to configure.
-In short, ditch the starter and configure the driver as mentioned in the documentation, which also has a dedicated Spring Boot 3 section on what to include/exclude to make it work.
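-A minimal sketch of that driver-based setup (the YAML rule file name and the exact property keys are assumptions to double-check against the ShardingSphere 5.x docs):
-spring:
-    datasource:
-        driver-class-name: org.apache.shardingsphere.driver.ShardingSphereDriver
-        # all data sources and the read/write-splitting rules live in this ShardingSphere YAML file
-        url: jdbc:shardingsphere:classpath:sharding.yaml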
-",ShardingSphere
-"I am totally new to spark and singlestore. I am trying to read data  from singlestore using spark,this is the code i have written -
-from pyspark.sql import SparkSession
-
-spark = SparkSession.builder \
-   .appName(""ReadFromSingleStore"") \
-   .config(""spark.datasource.singlestore.host"", ""abcd1"") \
-   .config(""spark.datasource.singlestore.port"", 3306) \
-   .config(""spark.datasource.singlestore.user"", ""abcd2"") \
-   .config(""spark.datasource.singlestore.password"", ""abcd3"") \
-   .config(""spark.datasource.singlestore.database"", ""abcd4"") \
-   .getOrCreate()
-
-
-
-# Read data from SingleStore table
-sql = ""select * from INV_DOI_CDL_VW order by INSERTED_DATE ASC, TRANSACTION_ID DESC limit 100""
-df = spark.read.format(""singlestore"").option(""query"", sql).load()
-
-
-results = df.collect()
-for row in results:
-   print(row)
-
-# Stop the Spark session
-spark.stop()
-
-I also have the singlestore-spark-connector jar in my directory. When I try to run this code I get this error:
-File ""C:\Program Files\Python310\lib\subprocess.py"", line 1438, in _execute_child
-hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
-FileNotFoundError: [WinError 2] The system cannot find the file specified
-What am I doing wrong? This is my first time working with SingleStore.
-","1. Did you provide the link to the Spark Connector when you launched Spark? For example:
-$SPARK_HOME/bin/spark-shell --packages com.singlestore:singlestore-spark-connector_2.12:4.1.6-spark-3.5.0
-
-Check the GH repo for additional details. The documentation is also a good place to start.
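-If you are launching the job from a plain Python script rather than spark-shell/spark-submit, you can also pull the connector in via spark.jars.packages when building the session (a sketch; the connector version is taken from the command above and should match your Spark/Scala version):
-from pyspark.sql import SparkSession
-
-spark = SparkSession.builder \
-   .appName(""ReadFromSingleStore"") \
-   .config(""spark.jars.packages"", ""com.singlestore:singlestore-spark-connector_2.12:4.1.6-spark-3.5.0"") \
-   .getOrCreate()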
-",SingleStore
-"I've created the following procedure in SingleStore
-DELIMITER //
-CREATE OR REPLACE PROCEDURE updateColumnModelName(tableName TEXT, columnName TEXT) AS
-  DECLARE has_column INT DEFAULT 0;
-  DECLARE command TEXT;
-  BEGIN
-    SELECT EXISTS (
-     SELECT *
-     FROM INFORMATION_SCHEMA.COLUMNS
-     WHERE TABLE_NAME = tableName
-     AND COLUMN_NAME = columnName
-    ) INTO has_column;
-
-     IF has_column THEN
-       SET command = CONCAT('ALTER TABLE ', table_name, ' ADD COLUMN ', column_name, ' LONGTEXT CHARACTER SET utf8mb4 NOT NULL');
-     ELSE
-       SET command = CONCAT('ALTER TABLE ', table_name, ' DROP COLUMN ', column_name);
-     END IF;
-    
-     EXECUTE IMMEDIATE command;
-
-   END //
-DELIMITER ;
-
-
-Procedure is created with no problems, but when I call it by
-CALL updateColumnModelName(""Transcription"", ""ModelName"");
-I receive the following error:
-
-ERROR 1193 ER_UNKNOWN_SYSTEM_VARIABLE: Unhandled exception Type: ER_UNKNOWN_SYSTEM_VARIABLE (1193) Message: Unknown system variable 'comand' Callstack: #0 Line 13 in example_db.updateColumnModelName
-
-I tried to use a different approach with
-DECLARE dynamic_sql TEXT;  
-....  
-SET @stmt = command;     
-PREPARE stmt FROM @stmt;      
-EXECUTE stmt;      
-DEALLOCATE PREPARE stmt;
-
-But received the following error in this case:
-ERROR 1149 ER_SYNTAX_ERROR: line 20, syntax error at or near ""stmt""
-","1. Actually I managed too solve the problem. It was related to the variable names since I was making confusion with the values passed by reference to the function.
-Here is the workable version in case of anyone needs.
-DELIMITER //
-
-CREATE OR REPLACE PROCEDURE updateColumnModelName(tableName TEXT, columnName TEXT) AS
-DECLARE has_column INT DEFAULT 0;
-
-BEGIN
-  SELECT EXISTS (
-    SELECT *
-    FROM INFORMATION_SCHEMA.COLUMNS
-    WHERE TABLE_NAME = tableName
-    AND COLUMN_NAME = columnName
-  ) INTO has_column;
-
-    IF NOT has_column THEN
-      EXECUTE IMMEDIATE CONCAT('ALTER TABLE ', tableName, ' ADD COLUMN ', columnName, ' LONGTEXT CHARACTER SET utf8mb4 NOT NULL');
-    END IF;
-    
-END //
-
-DELIMITER ;
-
-
-2. try this query
-DELIMITER //
-
-CREATE OR REPLACE PROCEDURE updateColumnModelName(tableName TEXT, columnName TEXT) AS
-DECLARE has_column INT DEFAULT 0;
-DECLARE command TEXT;
-BEGIN
-  SELECT EXISTS (
-    SELECT *
-    FROM INFORMATION_SCHEMA.COLUMNS
-    WHERE TABLE_NAME = tableName
-    AND COLUMN_NAME = columnName
-  ) INTO has_column;
-
-  IF has_column THEN
-    SET command = CONCAT('ALTER TABLE ', tableName, ' ADD COLUMN ', columnName, ' LONGTEXT CHARACTER SET utf8mb4 NOT NULL');
-  ELSE
-    SET command = CONCAT('ALTER TABLE ', tableName, ' DROP COLUMN ', columnName);
-  END IF;
-
-  EXECUTE IMMEDIATE command;
-
-END //
-DELIMITER ;
-
-",SingleStore
-"The SpiceDB gRPC endpoint for LookupResources returns a gRPC stream of resource IDs with a cursor.
-Consuming gRPC streams from Clojure can be gnarly. I know I need to reify StreamObserver and consume the stream until no items remain.
-I could not find a good self-contained example of how to do this using the io.grpc Java gRPC libraries without introducing core.async. How would you consume a gRPC response stream and return a Clojure vector of all exhausted results?
-At this point, we can assume the result set is small and can be eagerly consumed without any additional async structures. Bonus points for lazy loading, but lazy loading would probably require some state management.
-","1. When using a blocking stub, use iterator-seq on the blocking stream response, collect the items in an atom, and deref the atom:
-(let [!results (atom [])
-        request  (-> (PermissionService$LookupResourcesRequest/newBuilder)
-                   (.setSubject (make-subject subject-type subject-id))
-                   (.setPermission permission)
-                   (with-consistency consistency)
-                   (.setResourceObjectType resource-type)
-                   (.build))
-        ^PermissionService$LookupResourcesResponse response
-                 (.lookupResources service request)]
-    (doseq [x (iterator-seq response) ;; note iterator-seq.
-            :let [resource-id (.getResourceObjectId x)]]
-      (swap! !results conj [resource-type resource-id]))
-    @!results)
-
-For a non-blocking gRPC service stub, the implementation will have to change.
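-For reference, a minimal sketch of that non-blocking variant (assuming the same request/response types as above; the async stub's lookupResources signature follows the standard gRPC generated code, taking the request plus a response StreamObserver):
-(import '(io.grpc.stub StreamObserver)
-        '(java.util.concurrent CountDownLatch))
-
-(defn lookup-resources-async [async-stub request resource-type]
-  (let [!results (atom [])
-        !error   (atom nil)
-        done     (CountDownLatch. 1)
-        observer (reify StreamObserver
-                   ;; called once per streamed LookupResourcesResponse
-                   (onNext [_ response]
-                     (swap! !results conj [resource-type (.getResourceObjectId response)]))
-                   (onError [_ t]
-                     (reset! !error t)
-                     (.countDown done))
-                   (onCompleted [_]
-                     (.countDown done)))]
-    (.lookupResources async-stub request observer)
-    (.await done) ;; block this thread until the stream is exhausted or errors out
-    (when-let [t @!error] (throw t))
-    @!results))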
-",SpiceDB
-"I'm having an issue with authzed schema and relationships. I seem to be misunderstanding how things work.
-I've got a scenario where users can be part of a group either by direct inclusion or by indirect, location-based criteria. Location is hierarchical, with three levels -- Country, State/Province, and City.
-That is to say,
-Anyone in Wichita, Topeka, or Dodge City is also in Kansas
-Anyone in Seattle, Tacoma, or Spokane is in Washington
-Anyone in Kansas or in Washington is in the United States
-Similarly, anyone who is in Pune is also in Maharashtra, and anyone in Maharashtra is in India
-I've built a schema (https://play.authzed.com/s/cBfN1HhtcoVE) that supports detection of direct inclusions.
-I have a user_group called wichitans. It includes (naturally) users in the wichita org_unit, as well as user Samuel, who is in Seattle, but will be moving to wichita in the coming months.
-I'm using the permission name ""is_user_in_ex/implicits"" just to check that I have the grouping correct. I can see in the Expected Relations that Samuel is in Wichita explicits and Wally is in Wichita implicits, which is what I expect, as Wally is in the children of Wichita.
-
-Now I make a small change to line 22 of the test relationships (https://play.authzed.com/s/zeYxryGzYbaK), so that instead of assigning Wichita to implicits, I assign Kansas to implicits. Samuel remains in Explicits, Wichita remains in Implicits (because it's a child of Kansas), but Wally is no longer in implicits. I was under the assumption that there would be a recursive evaluation, but that doesn't appear to be the case. Is there a different operator to say ""I would like this relationship to be recursive"" or do I need to change some schema definitions? I'd like to avoid splitting the org unit into three distinct levels if possible.
-","1. In SpiceDB, you can take the permissions computations very literally. In the first schema, where the block looks like:
-definition user_group {
-    relation implicits : org_unit
-    relation explicits : user
-
-    permission is_user_in_implicits = implicits + implicits->children
-    permission is_user_in_explicits = explicits
-}
-
-definition org_unit {
-    relation parent: org_unit
-    relation children: org_unit | user 
-}
-
-We are starting our permissions walk at the user_group object type. When calculating the is_user_in_implicits we are gathering up the relationships for implicits, which contains only the relationship:
-user_group:wichitans#implicits@org_unit:kansas
-
-Then, we union that with the objects (note: I don't say users) that are referenced by implicits->children. Pseudocode for what this does could be written as:
-for relationship in implicits:
-  for child in relationship.subject.children:
-    yield child
-
-With the given the relevant children relationships:
-org_unit:kansas#children@org_unit:wichita
-
-Will yield the subject org_unit:wichita.
-There are no further instructions for the permissions system to follow or resolve.
-As noted in the sibling answer, one way to resolve this is to point to a permission on the child. By putting is_user_in_implicits on both the org_unit and user_group, we can resolve through that permission regardless of what type the children relation points to. This is called ""duck typing"" and should be familiar from programming languages such as python and ruby.
-Another way to accomplish this, would be to set the type of children to reference not the org unit itself, but the org unit's children, as follows:
-definition user_group {
-    relation implicits : org_unit#children
-    relation explicits : user
-
-    permission is_user_in_implicits = implicits + implicits->children
-    permission is_user_in_explicits = explicits
-}
-
-definition org_unit {
-    relation parent: org_unit
-    relation children: org_unit#children | user 
-}
-
-This will require you to set the optional_subject on the relationships to children, but will allow you to hoist the decision about whether to recursively descent into the data layer.
-I prefer to be explicit about when we're descending recursively when possible.
-You can read more about how SpiceDB computes permissions in the following blog posts:
-
-https://authzed.com/blog/check-it-out
-https://authzed.com/blog/check-it-out-2
-
-
-2. While exploring I found that if I create a duplicate permission ""is_user_in_implicits"" on the org unit and select the children of an org unit + the new ""is_user_in_implicits"" permission of the org unit's children, it appears that recursive relationships work as expected even up to the level of the United States (at this point it also picks up Seattle, Washington and Samuel, but that's how I would expect it to work). Is this the correct approach for getting a recursive relationship?
-https://play.authzed.com/s/ZcjmA_7_1Xg3/schema
-",SpiceDB
-"We're planning to implement authzed/spicedb first time in our product but not sure which storage system to go for.. They provide in memory, Postgres, Cockroach, Spanner and MySQL.
-We don't want to use in memory storage but we're confused between other three options.
-As of now tried nothing as this is new for me.
-","1. The storage systems available for SpiceDB each have trade-offs, but can usually be recommended with this general rule of thumb:
-If your goal is to run SpiceDB clusters around the world and have them all share the same data -- use CockroachDB or Spanner. Spanner is best if all of your infrastructure runs exclusively on Google Cloud.
-If your goal is to run SpiceDB in a single region/datacenter, then use PostgreSQL. At the time of this post, MySQL isn't as performant and I cannot recommend it for production usage unless your organization has a hard requirement and is willing to contribute to the open source project.
-If you haven't already, check out the official doc on Selecting a Datastore.
-",SpiceDB
-"I'm using spicedb with postgres database, I've noticed that when I delete a Relation the corresponding tuple is left in the relation_tuple table and the value of the column deleted_xid is set to transaction id.
-Are those records to remain there forever? When will they be actually deleted?
-I'm worrying that in a relative short time the table will be clogged with useless records...
-","1. In order to implement ""time-traveling queries"", SpiceDB's Postgres datastore implementation actually retains data for a period of time after deletion.
-How long this data is available, how often garbage collection is conducted, and how long until garbage collection times out are all configurable.
-From the spicedb serve --help usage:
-  --datastore-gc-interval duration            amount of time between passes of garbage collection (postgres driver only) (default 3m0s)
-  --datastore-gc-max-operation-time duration  maximum amount of time a garbage collection pass can operate before timing out (postgres driver only) (default 1m0s)
-  --datastore-gc-window duration              amount of time before revisions are garbage collected (default 24h0m0s)
-
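-For example, to keep deleted tuples around for only one hour, you could start SpiceDB with something like the following (the datastore connection flags are assumptions based on the usual spicedb serve options, so verify them with spicedb serve --help):
-spicedb serve \
-  --grpc-preshared-key somerandomkey \
-  --datastore-engine postgres \
-  --datastore-conn-uri postgres://user:password@localhost:5432/spicedb \
-  --datastore-gc-window 1h \
-  --datastore-gc-interval 3m
-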
-If you're curious, you can dive deeper into the implementation by reading this particular file: https://github.com/authzed/spicedb/blob/v1.25.0/internal/datastore/postgres/gc.go
-",SpiceDB
-"I need to etl a set of tables from DynamoDB to StarRocks.  Has anyone used the StarRocks Load tool to accomplish this?  If so, can you share how?
-Second related question: some of the tables are very large (500,000,000 records). This is live data. I expect the etl to StarRocks to take more than 24 hours.  If it does, how does one both etl the data present at start amd then keep track of deltas to the data (new records are not a problem, but deletes and updates in a stream may happen in the stream before the record is put into the table.)  So, how does one handle this?
-Thanks in advance!
-","1. I don't know anything about Starrocks, never heard of it.
-But for your question regarding keeping them in sync, DynamoDB supports two types of streams
-
-DynamoDB Streams - 24 hour retention
-Kinesis Data Streams - up to 1 year retention
-
-So you have both options to help you replay the events of the source region while migration is happening.
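-For the DynamoDB Streams option, enabling the stream on an existing table is a single call (a sketch; the table name is hypothetical):
-aws dynamodb update-table \
-  --table-name my-large-table \
-  --stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES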
-",StarRocks
-"Trying to setup Stolon on docker swarm, right now to simplify I have all services running on the same host, on the manager node.
-For the life of me I can't seem to get past the error log message from keeper
-Keeper logs
-app_stack_stolon-keeper.1.dsyf1a7juv4u@manager1    | Starting Stolon as a keeper...
-app_stack_stolon-keeper.1.dsyf1a7juv4u@manager1    | Waiting for Consul to be ready at consul:8500...
-app_stack_stolon-keeper.1.dsyf1a7juv4u@manager1    | Waiting for Consul to start...
-app_stack_stolon-keeper.1.dsyf1a7juv4u@manager1    | Waiting for Consul to start...
-app_stack_stolon-keeper.1.dsyf1a7juv4u@manager1    | Consul is ready.
-app_stack_stolon-keeper.1.dsyf1a7juv4u@manager1    | 2024-04-22T13:18:57.328Z   INFO    cmd/keeper.go:2091   exclusive lock on data dir taken
-app_stack_stolon-keeper.1.dsyf1a7juv4u@manager1    | 2024-04-22T13:18:57.332Z   INFO    cmd/keeper.go:569    keeper uid       {""uid"": ""postgres_dsyf1a7juv4u1iwyjj6434ldx""}
-app_stack_stolon-keeper.1.dsyf1a7juv4u@manager1    | 2024-04-22T13:18:57.337Z   INFO    cmd/keeper.go:1048   no cluster data available, waiting for it to appear
-app_stack_stolon-keeper.1.dsyf1a7juv4u@manager1    | 2024-04-22T13:19:02.345Z   INFO    cmd/keeper.go:1080   our keeper data is not available, waiting for it to appear
-app_stack_stolon-keeper.1.dsyf1a7juv4u@manager1    | 2024-04-22T13:19:07.347Z   INFO    cmd/keeper.go:1080   our keeper data is not available, waiting for it to appear
-app_stack_stolon-keeper.1.dsyf1a7juv4u@manager1    | 2024-04-22T13:19:12.349Z   INFO    cmd/keeper.go:1080   our keeper data is not available, waiting for it to appear
-app_stack_stolon-keeper.1.dsyf1a7juv4u@manager1    | 2024-04-22T13:19:17.352Z   INFO    cmd/keeper.go:1141   current db UID different than cluster data db UID        {""db"": """", ""cdDB"": ""8198992d""}
-app_stack_stolon-keeper.1.dsyf1a7juv4u@manager1    | 2024-04-22T13:19:17.352Z   INFO    cmd/keeper.go:1148   initializing the database cluster
-app_stack_stolon-keeper.1.dsyf1a7juv4u@manager1    | 2024-04-22T13:19:17.384Z   ERROR   cmd/keeper.go:1174   failed to stop pg instance       {""error"": ""cannot get instance state: exit status 1""}
-app_stack_stolon-keeper.1.dsyf1a7juv4u@manager1    | 2024-04-22T13:19:22.387Z   ERROR   cmd/keeper.go:1110   db failed to initialize or resync
-
-Docker Compose
-version: '3.8'
-
-services:
-  consul:
-    image: dockerhub-user/app-consul:latest
-    volumes:
-      - console_data:/consul/data
-    ports:
-      - '8500:8500'  # Expose the Consul UI and API port
-      - ""8400:8400""
-      - ""8301-8302:8301-8302""
-      - ""8301-8302:8301-8302/udp""
-      - ""8600:8600""
-      - ""8600:8600/udp""
-    networks:
-      - shared_swarm_network
-    deploy:
-      placement:
-        constraints: [node.role == manager] # change to worker later if needed
-      restart_policy:
-        condition: on-failure
-    environment:
-      CONSUL_BIND_INTERFACE: 'eth0'
-      CONSUL_CLIENT_INTERFACE: 'eth0'
-    command: ""agent -server -ui -bootstrap -client=0.0.0.0 -bind={{ GetInterfaceIP 'eth0' }} -data-dir=/consul/data""
-
-  # Managing Stolon clusters, providing operational control.
-  stolon-ctl:
-    image: dockerhub-user/app-stolon-ctl:latest
-    depends_on:
-      - consul
-    networks:
-      - shared_swarm_network
-    deploy:
-      placement:
-        constraints: [node.role == manager]
-
-  # Runs Stolon Keeper managing PostgreSQL data persistence and replication.
-  stolon-keeper:
-    image: dockerhub-user/app-stolon:latest
-    depends_on:
-      - stolon-ctl
-      - consul
-    environment:
-      - ROLE=keeper
-      - STKEEPER_UID=postgres_{{.Task.ID}}
-      - PG_REPL_USERNAME=repluser
-      - PG_REPL_PASSWORD=replpass
-      - PG_SU_USERNAME=postgres
-      - PG_SU_PASSWORD=postgres
-      - PG_APP_USER=app_user
-      - PG_APP_PASSWORD=mysecurepassword
-      - PG_APP_DB=app_db
-    volumes:
-      - stolon_data:/stolon/data
-      - pg_data:/var/lib/postgresql/data
-      - pg_log:/var/log/postgresql
-    networks:
-      - shared_swarm_network
-    deploy:
-      placement:
-        constraints: [node.role == manager]
-
-  # Deploys Stolon Sentinel for monitoring and orchestrating cluster failovers.
-  stolon-sentinel:
-    image: dockerhub-user/app-stolon:latest
-    environment:
-      - ROLE=sentinel
-    networks:
-      - shared_swarm_network
-    deploy:
-      placement:
-        constraints: [node.role == manager]
-    depends_on:
-      - stolon-keeper
-      - consul
-
-volumes:
-  stolon_data:
-  console_data:
-  pg_data:
-  pg_log:
-
-networks:
-  shared_swarm_network:
-    external: true
-
-Dockerfile
-# Use the official PostgreSQL image as a base
-FROM postgres:16.2
-
-# Define the version of Stolon being used
-ENV STOLON_VERSION v0.17.0
-
-# Install necessary packages
-RUN apt-get update && \
-    apt-get install -y curl unzip && \
-    rm -rf /var/lib/apt/lists/* 
-
-# Download and extract Stolon
-RUN curl -L https://github.com/sorintlab/stolon/releases/download/${STOLON_VERSION}/stolon-${STOLON_VERSION}-linux-amd64.tar.gz -o stolon.tar.gz && \
-    mkdir -p /stolon-installation && \
-    tar -xzf stolon.tar.gz -C /stolon-installation && \
-    ls /stolon-installation && \
-    mv /stolon-installation/*/bin/* /usr/local/bin/
-
-# Clean up installation files
-RUN rm -rf /stolon-installation stolon.tar.gz && \
-    apt-get purge -y --auto-remove unzip
-
-# Verify binaries are in the expected location
-RUN ls /usr/local/bin/stolon-*
-
-# Set up environment variables
-ENV STOLONCTL_CLUSTER_NAME=stolon-cluster \
-    STOLONCTL_STORE_BACKEND=consul \
-    STOLONCTL_STORE_URL=http://consul:8500 \
-    CONSUL_PORT=8500 \
-    STKEEPER_DATA_DIR=/stolon/data \
-    PG_DATA_DIR=/var/lib/postgresql/data \
-    PG_BIN_PATH=/usr/lib/postgresql/16/bin \
-    PG_PORT=5432
-
-# Expose PostgreSQL and Stolon proxy ports
-EXPOSE 5432 5433
-
-# Copy the entrypoint script into the container
-COPY script/entrypoint.sh /entrypoint.sh
-
-# Make the entrypoint script executable
-RUN chmod +x /entrypoint.sh
-
-# Set the entrypoint script as the entrypoint for the container
-ENTRYPOINT [""/entrypoint.sh""]
-
-
-Entrypoint.sh
-#!/bin/bash
-
-# Fetch the IP address of the container
-IP_ADDRESS=$(hostname -I | awk '{print $1}')
-
-if [ ""$ROLE"" = ""sentinel"" ]; then
-    # Verify registration with Consul
-    while ! curl -s ""http://$STOLONCTL_STORE_BACKEND:$CONSUL_PORT/v1/kv/stolon/cluster/$STOLONCTL_CLUSTER_NAME/keepers/info?keys"" | grep -q ""$KEEPER_ID""; do
-        echo ""Keeper not registered in Consul, waiting...""
-        sleep 1
-    done
-    echo ""Keeper is registered in Consul.""
-fi
-
-
-case ""$ROLE"" in
-  ""keeper"")
-    exec stolon-keeper \
-      --data-dir $STKEEPER_DATA_DIR \
-      --cluster-name $STOLONCTL_CLUSTER_NAME \
-      --store-backend $STOLONCTL_STORE_BACKEND \
-      --store-endpoints $STOLONCTL_STORE_URL \
-      --pg-listen-address $IP_ADDRESS \
-      --pg-repl-username $PG_REPL_USERNAME \
-      --pg-repl-password $PG_REPL_PASSWORD \
-      --pg-su-username $PG_SU_USERNAME \
-      --pg-su-password $PG_SU_PASSWORD \
-      --uid $STKEEPER_UID \
-      --pg-bin-path $PG_BIN_PATH \
-      --pg-port $PG_PORT
-    ;;
-  ""sentinel"")
-    exec stolon-sentinel \
-      --cluster-name $STOLONCTL_CLUSTER_NAME \
-      --store-backend $STOLONCTL_STORE_BACKEND \
-      --store-endpoints $STOLONCTL_STORE_URL
-    ;;
-  ""proxy"")
-    exec stolon-proxy \
-      --cluster-name $STOLONCTL_CLUSTER_NAME \
-      --store-backend $STOLONCTL_STORE_BACKEND \
-      --store-endpoints $STOLONCTL_STORE_URL \
-      --listen-address 0.0.0.0
-    ;;
-  *)
-    echo ""Unknown role: $ROLE""
-    exit 1
-    ;;
-esac
-
-
-I checked network connectivity; Consul is up and running fine, and the sentinel and proxy are also working as expected, albeit waiting for the database to be ready.
-","1. Can you please confirm if you have initiated cluster with authenticated user ?
-",Stolon
-"I'm running a cluster with enough RAM per node, but nevertheless it's plagued by frequent ""Memory is highly fragmented"" errors. When looking at dashboard, it says only ~2.3-2.4 gigs out of 6.3 are used. Looking at Slabs it also sees low quota utilization, but items and arena are  near or above 99%
-> box.slab.info()
----
-- items_size: 1286313040
-  items_used_ratio: 99.56%
-  quota_size: 6500000768
-  quota_used_ratio: 40.27%
-  arena_used_ratio: 96.2%
-  items_used: 1280601176
-  quota_used: 2617245696
-  arena_size: 2617245696
-  arena_used: 2516986968
-...
-
-I tried adjusting the Vinyl memory and Vinyl cache, as well as changing memtx_min_tuple_size from 16 to 64 bytes, but it didn't affect the ratios.
-Could someone please explain how to increase the items and arena size? Or maybe there are other ways to fix this?
-With the default value of fragmentation_threshold_critical = 0.85, this cluster should run with ~3 gigs of RAM and still have some room.
-items_used_ratio = items_used / items_size
-quota_used_ratio = quota_used / quota_size
-arena_used_ratio = arena_used / arena_size
-
-UPDATE: adding slab stats as requested
-box.slab.stats()
----
-- - mem_free: 16400
-    mem_used: 260224
-    item_count: 4066
-    item_size: 64
-    slab_count: 17
-    slab_size: 16384
-  - mem_free: 6072
-    mem_used: 10200
-    item_count: 75
-    item_size: 136
-    slab_count: 1
-    slab_size: 16384
-  - mem_free: 16128
-    mem_used: 52574976
-    item_count: 365104
-    item_size: 144
-    slab_count: 3232
-    slab_size: 16384
-  - mem_free: 156216
-    mem_used: 286979496
-    item_count: 1888023
-    item_size: 152
-    slab_count: 17646
-    slab_size: 16384
-  - mem_free: 2943632
-    mem_used: 423399040
-    item_count: 2646244
-    item_size: 160
-    slab_count: 26201
-    slab_size: 16384
-  - mem_free: 913912
-    mem_used: 405490008
-    item_count: 2413631
-    item_size: 168
-    slab_count: 12445
-    slab_size: 32768
-  - mem_free: 862448
-    mem_used: 288143152
-    item_count: 1637177
-    item_size: 176
-    slab_count: 8850
-    slab_size: 32768
-  - mem_free: 484712
-    mem_used: 170306168
-    item_count: 925577
-    item_size: 184
-    slab_count: 5230
-    slab_size: 32768
-  - mem_free: 53680
-    mem_used: 44456448
-    item_count: 231544
-    item_size: 192
-    slab_count: 1363
-    slab_size: 32768
-  - mem_free: 33208
-    mem_used: 13617000
-    item_count: 68085
-    item_size: 200
-    slab_count: 418
-    slab_size: 32768
-  - mem_free: 25792
-    mem_used: 6276816
-    item_count: 30177
-    item_size: 208
-    slab_count: 193
-    slab_size: 32768
-  - mem_free: 22144
-    mem_used: 2361744
-    item_count: 10934
-    item_size: 216
-    slab_count: 73
-    slab_size: 32768
-  - mem_free: 27104
-    mem_used: 887264
-    item_count: 3961
-    item_size: 224
-    slab_count: 28
-    slab_size: 32768
-  - mem_free: 28504
-    mem_used: 396024
-    item_count: 1707
-    item_size: 232
-    slab_count: 13
-    slab_size: 32768
-  - mem_free: 29376
-    mem_used: 166560
-    item_count: 694
-    item_size: 240
-    slab_count: 6
-    slab_size: 32768
-  - mem_free: 5216
-    mem_used: 92752
-    item_count: 374
-    item_size: 248
-    slab_count: 3
-    slab_size: 32768
-  - mem_free: 11296
-    mem_used: 54016
-    item_count: 211
-    item_size: 256
-    slab_count: 2
-    slab_size: 32768
-  - mem_free: 19720
-    mem_used: 12936
-    item_count: 49
-    item_size: 264
-    slab_count: 1
-    slab_size: 32768
-  - mem_free: 26416
-    mem_used: 692016
-    item_count: 2218
-    item_size: 312
-    slab_count: 22
-    slab_size: 32768
-  - mem_free: 30000
-    mem_used: 35424
-    item_count: 108
-    item_size: 328
-    slab_count: 1
-    slab_size: 65536
-  - mem_free: 2816
-    mem_used: 62608
-    item_count: 182
-    item_size: 344
-    slab_count: 1
-    slab_size: 65536
-  - mem_free: 33024
-    mem_used: 32400
-    item_count: 90
-    item_size: 360
-    slab_count: 1
-    slab_size: 65536
-  - mem_free: 36472
-    mem_used: 28952
-    item_count: 77
-    item_size: 376
-    slab_count: 1
-    slab_size: 65536
-  - mem_free: 54448
-    mem_used: 10976
-    item_count: 28
-    item_size: 392
-    slab_count: 1
-    slab_size: 65536
-  - mem_free: 12688
-    mem_used: 249008
-    item_count: 394
-    item_size: 632
-    slab_count: 4
-    slab_size: 65536
-  - mem_free: 123328
-    mem_used: 7632
-    item_count: 6
-    item_size: 1272
-    slab_count: 1
-    slab_size: 131072
-  - mem_free: 259416
-    mem_used: 2616
-    item_count: 1
-    item_size: 2616
-    slab_count: 1
-    slab_size: 262144
-  - mem_free: 11962320
-    mem_used: 870891520
-    item_count: 53155
-    item_size: 16384
-    slab_count: 421
-    slab_size: 2097152
-...
-
-","1. 
-Could please someone explain how to increase items and arena size?
-
-They are increased automatically. An items_used_ratio and arena_used_ratio of 99% is fine as long as quota_used_ratio is low (40% in your case).
-Actually, it was a bug in the code that raises the warning; it was fixed in Cartridge version 2.7.9.
-",Tarantool
-"I was reading this article about Tarantool and they seem to say that AOF and WAL log are not working the same way.
-
-Tarantool: besides snapshots, it has a full-scale WAL (write ahead
-  log). So it can secure data persistency after each transaction
-  out-of-the-box. Redis: in fact, it has snapshots only. Technically,
-  you have AOF (append-only file, where all the operations are written),
-  but it requires manual control over it, including manual restore after
-  reboot. Simply put, with Redis you need to manually suspend the server
-  now and then, make snapshots and archive AOF.
-
-Could someone explain more clearly what the difference between the two strategies is and how each works at a high level?
-I always assumed that the Redis AOF worked the same way as a SQL database transaction log, such as the one implemented in PostgreSQL, but I might have been wrong.
-","1. AOF is the main persistence option for Redis. Any time there's a write operation that modifies the dataset in memory, that operation is logged.  So during a restart, Redis will replay all of the operations to reconstruct the dataset.  You also have 3 different fsync configuration policies to choose from (no, everysec, always).  FWIW, it is usually advised to use both AOF + RDB in the event you want a good level of data-safety.  This is kind of outside of the scope of your question, but figured i'd mention it. 
-Main Redis Persistence Docs
-Redis Persistence Demystified
-Tarantool uses something called a ""WAL writer"". It runs in a separate thread and logs requests that manipulate data (insert and update requests). On restart, Tarantool recovers by reading the WAL file and replaying each of the requests.
-Tarantool Persistence Docs 
-There's a difference in the internals obviously, but at a high level they are pretty similar.  The persistence comparison in the article is pretty odd and simply not true.  
-For more information on the low level differences, refer to the docs listed above. 
-Hope that helps
-
-2. Redis:
-
-IIRC, Redis writes its log in the same thread that serves requests. That leads to stalls if the disk is slow for some reason (RDB write, AOF rewrite): a single write operation could freeze the whole serving thread until the write syscall finishes.
-Redis cannot use the AOF to restore a replica because the AOF doesn't contain operation positions. A replica can rely only on the master's memory buffer and must re-request a full snapshot if the buffer was not large enough to hold the operations since the previous snapshot started. I once had a replica that was not restored for half an hour until I noticed it and increased the master's buffer size manually.
-
-Tarantool:
-
-Tarantool writes the WAL in a separate thread, and the transaction thread talks to it asynchronously. There can be many write operations waiting for the WAL simultaneously, and read operations aren't blocked at all.
-Tarantool stores the LSN in the WAL, and a replica can use the WAL for restoration even if it was down for hours. A replica doesn't even have a ""re-request snapshot"" operation, because in practice it never lags so far behind that there is not enough WAL left on the master.
-
-
-3. Though this is an old thread with relevant answers already, my take is that these are two very different mechanisms and should not be confused with each other.
-Let me explain:
-
-AOF used by Redis: Redis is an in-memory DB. It logs the whole sequence of operations in a file (the AOF) for durability purposes, so in case the DB crashes, it can read those logs and rebuild itself from them.
-
-WAL: Mostly used by databases to ensure both consistency and durability. Suppose a write operation is being performed on one particular instance of a DB, and we want this operation to be consistent across all instances. The idea is to log it first, so in case that instance crashes, the operation can be recovered from the log and then propagated.
-
-
-So, WAL differs from AOF in that it is more granular and transaction oriented, and it serves a totally different use case.
-",Tarantool
-"I'm trying to install tarantool-operator according to the official documentation ""Tarantool Cartridge on Kubernetes"": https://www.tarantool.io/ru/doc/latest/book/cartridge/cartridge_kubernetes_guide/#using-minikube
-Execute command:
-minikube start --memory 4096
-helm repo add tarantool https://tarantool.github.io/tarantool-operator
-helm search repo tarantool
-
-Result:
-NAME                            CHART VERSION   APP VERSION     DESCRIPTION
-tarantool/tarantool-operator    0.0.10          1.16.0          kubernetes tarantool operator
-tarantool/cartridge             0.0.10          1.0             A Helm chart for tarantool
-
-Then I do:
-helm install tarantool-operator tarantool/tarantool-operator --namespace tarantool --create-namespace --version 0.0.10
-
-I get an error 404:
-Error: INSTALLATION FAILED: failed to fetch https://tarantool.github.io/tarantool-operator/releases/download/tarantool-operator-0.0.10/tarantool-operator-0.0.10.tgz : 404 Not Found
-
-Where am I wrong?
-P.S. minikube version 1.27.0 on Windows 10 (Hyper-V)
-","1. tarantool-operator-0.0.10.tgz is not available for download(
-You can build it from source using
-make docker-build
-make push-to-minikube
-
-according to the docs
-
-2. I also got the same issue.
-So, you can work around it temporarily yourself by overriding the Helm chart from a local file.
-I created a simple fix - see the merge request.
-Waiting for maintainers to merge it.
-See the changes that need to be made here.
-",Tarantool
-"When i try start my tarantool, see in log this messages:
-2016-03-28 17:42:14.813 [31296] main/101/ia.lua C> log level 4
-2016-03-28 17:42:14.999 [31296] main/101/ia.lua recovery.cc:211 W> file `./00000000000000000012.xlog` wasn't correctly closed
-2016-03-28 17:42:15.001 [31296] main/101/ia.lua recovery.cc:211 W> file `./00000000000000000118.xlog` wasn't correctly closed
-2016-03-28 17:42:15.002 [31296] main/101/ia.lua recovery.cc:211 W> file `./00000000000000000849.xlog` wasn't correctly closed
-2016-03-28 17:42:15.004 [31296] main/101/ia.lua recovery.cc:211 W> file `./00000000000000000849.xlog` wasn't correctly closed
-
-What does it mean ?
-","1. This message means that Tarantool did not write the end of file marker at shutdown, which it normally does. This can happen after a crash or hard reset, in other words, any kind of ungraceful server shutdown. The message as such is harmless, it warns you that some last transactions may have been not flushed to WAL since last start.
-
-2. I got a similar error in Tarantool 2.11.1:
-2023-11-01 21:44:57 Loading existing configuration file: /etc/tarantool/config.yml
-2023-11-01 21:44:57 Config:
-2023-11-01 21:44:57 ---
-2023-11-01 21:44:57 pid_file: /var/run/tarantool/tarantool.pid
-2023-11-01 21:44:57 vinyl_dir: /var/lib/tarantool
-2023-11-01 21:44:57 log_level: 5
-2023-11-01 21:44:57 memtx_dir: /var/lib/tarantool
-2023-11-01 21:44:57 log_format: plain
-2023-11-01 21:44:57 listen: 3301
-2023-11-01 21:44:57 wal_dir: /var/lib/tarantool
-2023-11-01 21:44:57 force_recovery: false
-2023-11-01 21:44:57 ...
-2023-11-01 21:44:57 
-2023-11-01 21:44:57 2023-11-01 18:44:57.257 [1] main/103/tarantool-entrypoint.lua I> Tarantool 2.11.1-0-g96877bd35 Linux-x86_64-RelWithDebInfo
-2023-11-01 21:44:57 2023-11-01 18:44:57.257 [1] main/103/tarantool-entrypoint.lua I> log level 5
-2023-11-01 21:44:57 2023-11-01 18:44:57.257 [1] main/103/tarantool-entrypoint.lua I> wal/engine cleanup is paused
-2023-11-01 21:44:57 2023-11-01 18:44:57.257 [1] main/103/tarantool-entrypoint.lua I> mapping 268435456 bytes for memtx tuple arena...
-2023-11-01 21:44:57 2023-11-01 18:44:57.257 [1] main/103/tarantool-entrypoint.lua I> Actual slab_alloc_factor calculated on the basis of desired slab_alloc_factor = 1.044274
-2023-11-01 21:44:57 2023-11-01 18:44:57.257 [1] main/103/tarantool-entrypoint.lua I> mapping 134217728 bytes for vinyl tuple arena...
-2023-11-01 21:44:57 2023-11-01 18:44:57.258 [1] main/103/tarantool-entrypoint.lua/box.upgrade I> Recovering snapshot with schema version 2.11.1
-2023-11-01 21:44:57 2023-11-01 18:44:57.261 [1] main/103/tarantool-entrypoint.lua I> update replication_synchro_quorum = 1
-2023-11-01 21:44:57 2023-11-01 18:44:57.261 [1] main/103/tarantool-entrypoint.lua I> instance uuid be685c81-ad95-416e-9972-531c539bb677
-2023-11-01 21:44:57 2023-11-01 18:44:57.261 [1] main/103/tarantool-entrypoint.lua I> instance vclock {1: 70}
-2023-11-01 21:44:57 2023-11-01 18:44:57.261 [1] main/103/tarantool-entrypoint.lua I> tx_binary: bound to 0.0.0.0:3301
-2023-11-01 21:44:57 2023-11-01 18:44:57.261 [1] main/103/tarantool-entrypoint.lua I> recovery start
-2023-11-01 21:44:57 2023-11-01 18:44:57.261 [1] main/103/tarantool-entrypoint.lua I> recovering from `/var/lib/tarantool/00000000000000000000.snap'
-2023-11-01 21:44:57 2023-11-01 18:44:57.266 [1] main/103/tarantool-entrypoint.lua I> cluster uuid 3db61092-9f40-44b6-ba2c-1d8cf2c46624
-2023-11-01 21:44:57 2023-11-01 18:44:57.273 [1] main/103/tarantool-entrypoint.lua I> assigned id 1 to replica be685c81-ad95-416e-9972-531c539bb677
-2023-11-01 21:44:57 2023-11-01 18:44:57.273 [1] main/103/tarantool-entrypoint.lua I> update replication_synchro_quorum = 1
-2023-11-01 21:44:57 2023-11-01 18:44:57.273 [1] main/103/tarantool-entrypoint.lua I> recover from `/var/lib/tarantool/00000000000000000000.xlog'
-2023-11-01 21:44:57 2023-11-01 18:44:57.273 [1] main/103/tarantool-entrypoint.lua I> done `/var/lib/tarantool/00000000000000000000.xlog'
-2023-11-01 21:44:57 2023-11-01 18:44:57.273 [1] main/103/tarantool-entrypoint.lua I> recover from `/var/lib/tarantool/00000000000000000066.xlog'
-2023-11-01 21:44:57 2023-11-01 18:44:57.273 [1] main/103/tarantool-entrypoint.lua recovery.cc:161 W> file `/var/lib/tarantool/00000000000000000066.xlog` wasn't correctly closed
-2023-11-01 21:44:57 2023-11-01 18:44:57.273 [1] main/103/tarantool-entrypoint.lua I> recover from `/var/lib/tarantool/00000000000000000069.xlog'
-2023-11-01 21:44:57 2023-11-01 18:44:57.273 [1] main/103/tarantool-entrypoint.lua recovery.cc:161 W> file `/var/lib/tarantool/00000000000000000069.xlog` wasn't correctly closed
-2023-11-01 21:44:57 2023-11-01 18:44:57.274 [1] main/103/tarantool-entrypoint.lua I> ready to accept requests
-2023-11-01 21:44:57 2023-11-01 18:44:57.274 [1] main/103/tarantool-entrypoint.lua I> leaving orphan mode
-2023-11-01 21:44:57 2023-11-01 18:44:57.274 [1] main/106/gc I> wal/engine cleanup is resumed
-2023-11-01 21:44:57 2023-11-01 18:44:57.274 [1] main/103/tarantool-entrypoint.lua/box.load_cfg I> set 'listen' configuration option to 3301
-2023-11-01 21:44:57 2023-11-01 18:44:57.274 [1] main/107/checkpoint_daemon I> scheduled next checkpoint for Wed Nov  1 20:30:51 2023
-2023-11-01 21:44:57 2023-11-01 18:44:57.276 [1] main/103/tarantool-entrypoint.lua/socket I> tcp_server: remove dead UNIX socket: /var/run/tarantool/tarantool.sock
-2023-11-01 21:44:57 2023-11-01 18:44:57.277 [1] main/118/console/unix/:/var/run/tarantool/tarantool.sock/socket I> started
-2023-11-01 21:44:57 2023-11-01 18:44:57.277 [1] main/103/tarantool-entrypoint.lua space.h:425 E> ER_NO_SUCH_INDEX_ID: No index #0 is defined in space 'productHistory'
-2023-11-01 21:44:57 2023-11-01 18:44:57.277 [1] main space.h:425 E> ER_NO_SUCH_INDEX_ID: No index #0 is defined in space 'productHistory'
-2023-11-01 21:44:57 2023-11-01 18:44:57.277 [1] main F> fatal error, exiting the event loop
-
-After investigation I found that the issue is related to a race condition between space initialization and space writes during Tarantool instance startup.
-So, I have space initialization (create, format, pk create) in one module, which is triggered by box.watch('box.status') when is_ro == false and is_ro_cfg == false and status == 'running'.
-At the same time I run write tests against this newly created space in instance startup script after box.cfg{} and box.ctl.wait_rw().
-The write tests finished without errors, but the WAL became corrupted.
-I got the issue on the next startup, after the instance had been terminated.
-Solution:
-Make sure space initialization is finished completely before any attempt to write to it.
-Unfortunately it is not really possible to put DDL into a transaction - there are exceptions in transaction management (see Rule #3).
-",Tarantool
-"I tried to create a table using the following sql clause but syntax error is reported. I suspect the columns like ""x-axis"" ""y-axis"" have special character hypen in their names, but how can I make TDengine accept special characters in column names?
-taos> create table tt (ts timestamp, x-axis double, y-axis double);
-
-DB error: syntax error near ""-axis double, y-axis double);"" (0.019247s)
-taos> 
-
-","1. maybe you can try it with back quote like this:
-create table tt (ts timestamp, `x-axis` double, `y-axis` double);
-
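-For example (an untested sketch, assuming the table above), the columns can then be referenced with the same backquotes:
-insert into tt values (now, 1.1, 2.2);
-select `x-axis`, `y-axis` from tt;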
-",TDengine
-"I have a SQL statement that counts every second of status in TDengine.
-SELECT 
-  first(ts), 
-  voltage 
-FROM 
-  d1 
-where 
-  ts >= '2017-07-14 10:40:00.000' 
-  AND ts <= '2017-07-14 10:41:00.066' INTERVAL(1s)
-
-
-There are 3 seconds of data that are missing, as shown in the figure.
-(screenshot: query result with the three missing seconds)
-If there is no value, I would like to use a previous non-null value in the query instead. So I tried to add a FILL(PREV);. However, the result doesn't look right.
-(screenshot: query result after adding FILL(PREV))
-How do I modify this statement?
-","1. Maybe you can try this:
-SELECT 
-  _wstart, 
-  first(voltage) 
-FROM 
-  d1 
-where 
-  ts >= '2017-07-14 10:40:00.000' 
-  AND ts <= '2017-07-14 10:41:00.066'
-  INTERVAL(1s) FILL(PREV)
-
-",TDengine
-"My TDengine graph works fine, but the alert rule can not run. I got this detailed error message ""tsdb.HandleRequest() error Could not find executor for data source type: tdengine-datasource"".
-version info:
-system 14.04.1-Ubuntu,
-grafana v7.3.5,
-grafanaplugin 3.1.3
-","1. Use grafnaplugin 3.1.4, you can download it from https://github.com/taosdata/grafanaplugin/releases/tag/v3.1.4 .
-Then follow the installation instructions in README https://github.com/taosdata/grafanaplugin#installation .
-Join the community here in discord for help.
-",TDengine
-"After creating many tables in TDengine, there are a lot of vnodes in my TDengine cluster. However, I find one of the vnode is too busy. So I want to split the vnode into two. Is there a way to do that?
-","1. No, TDengine doesn't provide this functionality for user to split the tables in one vnode to 2 or multiple vnodes. The essential of this request is to move some tables out of the busy vnode so that its load can be downgraded to a reasonable level and let another 2 or more vnodes handle the tables moved out. I understand the requirement. However, in near future we will have a feature that TDengine will automatially does some load balancing work once new vnodes are added in the cluster, this can help you. However, the problem of this new feature is that users can't control which tables are moved out to which vnode.
-
-2. The automatic load balancing that is supposed to happen once new vnodes are added to the cluster does not work. We can use ""BALANCE VGROUP"" instead, but this will affect performance, and it will take a long time to finish on a cluster with a lot of data.
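-As a rough, hedged sketch (availability and exact syntax depend on your TDengine version and edition - check the docs before running this in production):
-BALANCE VGROUP;
-SHOW VGROUPS;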
-",TDengine
-"Are there any windows clients for tdengine with GUI?
-Such as windows sql server or navicat.
-Any suggestions will be appreciated.
-","1. https://github.com/skye0207/TDengineGUI may be a good answer for you.
-It's a simple third-party TDengine desktop manager.
-
-2. qStudio is a free SQL GUI; it allows running SQL scripts, easy browsing of tables, charting, and exporting of results. It works on every operating system, with every database including TDengine.
-https://www.timestored.com/qstudio/database/tdengine
-
-Transparency: I am the main author of qStudio for the last 12+ years.
-",TDengine
-"If I need to perform system maintenance on the underlying servers running a multi node tikv cluster with a lot of data on each node, is it safe to simply shutdown and restart the operating systems for each node (one at a time) and let the tikv cluster self recover?   or is it necessary to fully drain and remove each node from the cluster one at a time and add it back after, which could take many days and cause a lot of disk i/o and network load?
-I have not tried rolling reboots of cluster nodes for os maintenance for fear of data corruption
-",,TiDB
-"According to https://docs.pingcap.com/tidb/stable/deploy-monitoring-services during the deployment of a TiDB cluster using TiUP, the monitoring and alert services are automatically deployed, and no manual deployment is needed.
-This seems to be true as well for the Node-Exporter which can be configured in the topology.yml for cluster deployment as described in https://docs.pingcap.com/tidb/stable/tiup-cluster-topology-reference#monitored
-Essentially, what can be configured is the port at which the node-exporter exposes its metrics.
-On other instances I use the node-exporter as well and use its feature to collect metrics from a local textfile via the --collector.textfile.directory flag. I would like to do the same on the TiDB instances.
-What I would like to configure is either
-
-To add the textfile collector flag with directory to the built-in node-exporter or
-to prevent TiUP from installing or running the node-exporter (in order to install my own node-exporter)
-
-Is this kind of configuration possible? Or is there any other solution?
-","1. It seems that the configuration is limited to changing the port at which the node exporter expose its metrics as the following code from TiUP for starting the node-exporter confirms
-https://github.com/pingcap/tiup/blob/master/embed/templates/scripts/run_node_exporter.sh.tpl
-",TiDB
-"I deployed TiDB in public cloud kubernetes cluster. Everything is okay and running as expected. But when I try to do backup to S3 using BR with log method. docs. After that, I want to terminate the backup, but I forget to step by step turn off the job and directly delete backup. After the backup deleted and running a backup again this error occured:
-It supports single stream log task currently: [BR:Stream:ErrStreamLogTaskExist]stream task already exists
-I have already tried to add logStop: true by editing the config directly. It doesn't work. I tried another approach: deleting the backup and adding logStop: true directly when applying the config. When I run kubectl describe backup -n backup-test the logStop state is already true, but when I check kubectl get backup -n backup-test the status is empty. I have already tried all the suggestions for the same case in the TiDB user group forum, attached:
-https://asktug.com/t/topic/1008335
-https://asktug.com/t/topic/997636
-But it doesn't work at all; the backup is still running and can't be stopped.
-","1. Do you try to use 'br log status' and 'br log stop' commands to stop the task? You can refer to the doc here: https://docs.pingcap.com/tidb/stable/br-pitr-manual
-",TiDB
-"Hi I have a question on SHARD_ROW_ID_BITS in TiDB.
-I understand you can specify a value such as 4 or 5, depending on how many shards you want.
-Question: is it possible to increase this value later in case you want further sharding? And to decrease it?
-https://docs.pingcap.com/tidb/stable/shard-row-id-bits
-","1. yes. You can use alter statement to do that, like:
-ALTER TABLE t SHARD_ROW_ID_BITS = 4;
-
-Increasing and decreasing are both supported.
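-For completeness, a minimal sketch of setting it at table creation time together with pre-split regions (the table name t1 is just a placeholder; as far as I know the option only takes effect for tables that use the hidden _tidb_rowid, i.e. without an integer primary key):
-CREATE TABLE t1 (a INT, b VARCHAR(10)) SHARD_ROW_ID_BITS = 4 PRE_SPLIT_REGIONS = 2;
-ALTER TABLE t1 SHARD_ROW_ID_BITS = 5;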
-",TiDB
-"If I need to perform system maintenance on the underlying servers running a multi node tikv cluster with a lot of data on each node, is it safe to simply shutdown and restart the operating systems for each node (one at a time) and let the tikv cluster self recover?   or is it necessary to fully drain and remove each node from the cluster one at a time and add it back after, which could take many days and cause a lot of disk i/o and network load?
-I have not tried rolling reboots of cluster nodes for os maintenance for fear of data corruption
-",,TiKV
-"I want to split data on multiple tiKV because I have Swiss, Europeans and Americans and I need to store data in citizen country.
-The user’s table has a country code and automatically data are stored in a good zone (tikv --label zone=ch/eu/us).
-How can I do this ?
-","1. As this is a regulatory requirement, you can specify Placement Rules in SQL for the affected table, together with partitioning. See Placement Rules in SQL.
-Example placement rules:
-CREATE PLACEMENT POLICY p1 FOLLOWERS=5;
-CREATE PLACEMENT POLICY europe PRIMARY_REGION=""eu-central-1"" REGIONS=""eu-central-1,eu-west-1"";
-CREATE PLACEMENT POLICY northamerica PRIMARY_REGION=""us-east-1"" REGIONS=""us-east-1"";
-
-Example table:
-SET tidb_enable_list_partition = 1;
-CREATE TABLE user (
-  country VARCHAR(10) NOT NULL,
-  userdata VARCHAR(100) NOT NULL
-) PLACEMENT POLICY=p1 PARTITION BY LIST COLUMNS (country) (
-  PARTITION pEurope VALUES IN ('DE', 'FR', 'GB') PLACEMENT POLICY=europe,
-  PARTITION pNorthAmerica VALUES IN ('US', 'CA', 'MX') PLACEMENT POLICY=northamerica,
-  PARTITION pAsia VALUES IN ('CN', 'KR', 'JP')
-);
-
-The pEurope partition will apply the europe policy.
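-To check how the policies were applied afterwards, a hedged sketch (supported in recent TiDB versions):
-SHOW PLACEMENT;
-SHOW CREATE PLACEMENT POLICY europe;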
-",TiKV
-"I am running a TiKV v7.2.0 cluster, which was deployed with tiup with server_configs.pd.replication.max-replicas: 1 configured in topology.  After storing about 8TB of data in the cluster, I edited the config to increase max-replicas to 2 with  tiup cluster edit-config [clustername] and applied the change.   It went through the nodes and did a rolling re-deploy of the config and restart of services.
-I expected my disk usage to double, as it re-balances all the keys and copies each one to another node to match the new replication factor 2.  In reality, no disk or network activity occurred and no growth in dataset size.
-Perhaps it seems the changed config only affects newly-stored data and not existing? How can I get the cluster to repair or rebalance or whatever is needed to replicate the existing data?
-","1. “max-replicas” can only be prime number(1, 3, or 5). So the setting update has no effect.
-
-2. You can try using pd-ctl and following the instructions for 'scheduler describe balance-region-scheduler' (https://docs.pingcap.com/tidb/stable/pd-control#scheduler-show--add--remove--pause--resume--config--describe) to show the progress of the region balancing.
-",TiKV
-"I am using TiDB version 6.5.1, I wanted to know the behavior of TiKV, let's say I have a table t which contains composite index example (a,b), when I am trying to execute following cases,
-1.select sum(c) from t where a=123 and b='simple' group by b;
-
-A range scan happens as expected.
-2.select sum(c) from t where a>=123 and b='simple' group by b;
-
-Here I am passing the indexed columns in the where clause, so why is a range scan not happening? A full table scan can cause performance issues when the table is big.
-3.select sum(a) from t where a>=123 and b='simple' group by b;
-
-If I use the indexed column in the select, a range scan happens.
-4.select sum(c) from t where a>=123 group by a;
-
-Same behavior as in case 2.
-I have a requirement to pass the whole index or a left prefix of the index with >=, <=, BETWEEN, or LIKE operators to support ad-hoc queries. Will TiKV support this without a full table scan?
-Please suggest table design changes if any are required; I am planning to use TiKV + TiSpark to cover the entire HTAP use case.
-Thanks,
-Ajay Babu Maguluri.
-","1. TiDB like any database has an optimizer that based on limited data (statistics) and in limited time needs to find an acceptable execution plan.
-The table scan might be cheaper than other plans. You can restrict the plans the optimizer can take with hints and see what the cost is for each plan.
-sql> CREATE TABLE t1(id INT PRIMARY KEY, c1 VARCHAR(255), KEY(c1));
-Query OK, 0 rows affected (0.1698 sec)
-
-sql> INSERT INTO t1 VALUES (1,'test'),(2,'another test');
-Query OK, 2 rows affected (0.0154 sec)
-
-Records: 2  Duplicates: 0  Warnings: 0
-
-sql> ANALYZE TABLE t1;
-Query OK, 0 rows affected, 1 warning (0.0950 sec)
-Note (code 1105): Analyze use auto adjusted sample rate 1.000000 for table test.t1
-
-sql> EXPLAIN FORMAT=VERBOSE SELECT * FROM t1 WHERE c1='test';
-+--------------------+---------+---------+-----------+------------------------+-----------------------------------------+
-| id                 | estRows | estCost | task      | access object          | operator info                           |
-+--------------------+---------+---------+-----------+------------------------+-----------------------------------------+
-| IndexReader_6      | 1.00    | 21.17   | root      |                        | index:IndexRangeScan_5                  |
-| └─IndexRangeScan_5 | 1.00    | 199.38  | cop[tikv] | table:t1, index:c1(c1) | range:[""test"",""test""], keep order:false |
-+--------------------+---------+---------+-----------+------------------------+-----------------------------------------+
-2 rows in set, 1 warning (0.0149 sec)
-Note (code 1105): [c1] remain after pruning paths for t1 given Prop{SortItems: [], TaskTp: rootTask}
-
-sql> EXPLAIN FORMAT=VERBOSE SELECT * FROM t1 IGNORE INDEX(c1) WHERE c1='test';
-+---------------------+---------+---------+-----------+---------------+------------------------+
-| id                  | estRows | estCost | task      | access object | operator info          |
-+---------------------+---------+---------+-----------+---------------+------------------------+
-| TableReader_7       | 2.00    | 82.22   | root      |               | data:Selection_6       |
-| └─Selection_6       | 2.00    | 997.11  | cop[tikv] |               | eq(test.t1.c1, ""test"") |
-|   └─TableFullScan_5 | 4.00    | 797.51  | cop[tikv] | table:t1      | keep order:false       |
-+---------------------+---------+---------+-----------+---------------+------------------------+
-3 rows in set (0.0021 sec)
-
-Here the optimizer uses an IndexRangeScan; in the second query we exclude the index and it takes a TableFullScan, which is much more expensive, as you can see in the estCost column.
-For questions like this it might be useful to share the output of SHOW CREATE TABLE... for the tables involved. The data or some description of the data (e.g. how unique the 123 value is) would also be helpful.
-From the images you posted it looks like you have an index called a_b_index on (a, b). This means that TiDB can't use the second column of the index if you don't have an equality (=) match on the first column. Switching the order might be good, as for the queries here you always do an equality match on the b column and a range match on the a column. But I can't see the full range of queries that you run, so other queries might perform worse after this change.
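-A hedged sketch of that reordering (index and column names follow the question; verify the plan with EXPLAIN on your own data before dropping the old index):
-ALTER TABLE t ADD INDEX b_a_index (b, a);
-EXPLAIN SELECT SUM(c) FROM t WHERE a >= 123 AND b = 'simple' GROUP BY b;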
-Here is some good explanation about this: https://use-the-index-luke.com/sql/where-clause/the-equals-operator/concatenated-keys
-",TiKV
-"I have an requirement to maintain big data sets in a single database and it should support both OLTP & OLAP workloads, I saw TiDB it will support HTAP workloads but we need to maintain data in TiKV & TiFlash to achieve full HTAP solution, since two modules data duplication causes more storage utilization, Can you please help,
-
-Is TiKV sufficient for both OLTP and OLAP workloads?
-What compression rate do TiKV & TiFlash support?
-Is there any TiDB benchmark with HTAP workloads?
-Can we keep the data replicated as 3 copies in total across TiKV & TiFlash to get full data HA?
-I saw that TiSpark executes directly on TiKV for OLAP. Can I get a benchmark of TiSpark vs. TiFlash for OLAP workloads?
-
-Thanks,
-Ajay Babu Maguluri.
-","1. 
-You can use only TiKV and run OLAP queries, but then this won't have the same performance as TiFlash would give you.
-TiKV uses RocksDB to store data on disk; this provides efficient use of storage. The actual compression rate depends on the data you're storing.
-There are some benchmarks on the PingCAP website. But I would recommend testing with your specific workload.
-TiKV needs 3 copies to be redundant. On a per-table basis you can add one or more replicas on TiFlash; it is recommended to use two TiFlash replicas to be redundant (see the sketch after this list). This would give you a total of 5 copies for the tables where you need TiFlash and 3 copies for tables that only use TiKV.
-Note that TiSpark is only supported if you deploy TiDB yourself and isn't supported with TiDB Cloud. See https://github.com/pingcap/tispark/wiki/TiSpark-Benchmark for benchmarking info. But here I would also recommend to test/benchmark for your specific workload instead of a generic workload.
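-Regarding point 4, a minimal sketch of adding TiFlash replicas on a per-table basis (the table name t is a placeholder):
-ALTER TABLE t SET TIFLASH REPLICA 2;
-SELECT * FROM information_schema.tiflash_replica WHERE table_name = 't';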
-
-",TiKV
-"Could you help me? There is a vertica cluster (version 12.0). The database has a table for which partitions are configured. The table is large, so I want to delete the oldest partitions, the largest ones. To do this, I need to know the size of each partition. How can I see the size of a partition?
-","1. Dose something like this help?
-SELECT 
-  t.table_schema
-, t.table_name
-, p.partition_key
-, SUM(p.ros_size_bytes) AS ros_size_bytes
-FROM TABLES t
-JOIN projections pj ON t.table_id = pj.anchor_table_id
-JOIN partitions p USING(projection_id)
-GROUP BY 1 , 2 , 3 ORDER BY 4 DESC LIMIT 4;
-table_schema|table_name  |partition_key|ros_size_bytes
-the_schema  |dc_the_table|2021-02-02   |1,556,987,825,392
-the_schema  |dc_the_table|2021-02-08   |1,556,987,825,392
-the_schema  |dc_the_table|2021-02-01   |1,556,987,825,392
-the_schema  |dc_the_table|2021-02-12   |1,556,987,825,392                                                                                                                       
-
-For the partition size, I have this query for you - run it after you have run a SELECT AUDIT() on the table:
-WITH
-srch(table_schema,table_name) AS (
-  SELECT 's_you', 'YOUBORA_STORM_RAW' -- edit for your table and schema
-)
-,
-pj AS (
-  SELECT
-    projection_id
-  FROM projections
-  JOIN srch 
-    ON table_schema = projection_schema
-   AND projection_name ~~* (table_name||'%')
-  WHERE is_super_projection
-  LIMIT 1
-)
-,
-rawsize AS (
-  SELECT 
-    object_schema
-  , object_name
-  , size_bytes 
-  FROM user_audits 
-  JOIN srch
-   ON object_schema = table_schema
-  AND object_name   = table_name
-  WHERE object_type='TABLE' 
-)
-SELECT
-  table_schema
-, projection_name
-, partition_key
-, size_bytes // SUM(ros_row_count) OVER w * ros_row_count AS partition_raw
-FROM partitions 
-JOIN pj USING(projection_id)
-JOIN rawsize
-  ON table_schema = object_schema
- AND projection_name ~~* (object_name||'%')
-WINDOW w AS (PARTITION BY table_schema,projection_name)
-
-
-
-
-table_schema|projection_name        |partition_key|partition_raw
-s_you       |YOUBORA_STORM_RAW_super|2023-12-03   |143,622
-s_you       |YOUBORA_STORM_RAW_super|2023-12-03   |817,650
-s_you       |YOUBORA_STORM_RAW_super|2023-12-03   |860,310
-s_you       |YOUBORA_STORM_RAW_super|2023-12-03   |860,310
-
-
-
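-Once you know which partition keys are the largest or oldest, a hedged sketch of actually dropping them (the schema/table names reuse the first result above and the date range is just an example - check DROP_PARTITIONS in the docs for your version):
-SELECT DROP_PARTITIONS('the_schema.dc_the_table', '2021-01-01', '2021-02-28');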
-",Vertica
-"I am in the process of creating an Oracle to Vertica process!
-We are looking to create a Vertica DB that will run heavy reports. For now all is cool: Vertica is fast, space use is great, and everything is fine until we get to the main part - getting the data from Oracle to Vertica.
-OK, the initial load is fine: dump to CSV from Oracle and load into Vertica; the load times are so short that everybody thinks it's a joke or that there's some magic going on - it is simply fast.
-Now the bad part -> both databases (Oracle/Vertica) are up and running, and I have data getting altered in Oracle, so I need to replicate my data to Vertica. What now:
-From my tests and from what I can understand about Vertica, inserts and updates are not to be used beyond maybe 20 per second at most - so real-time replication is out of the question.
-So I was thinking of reading the archive log from Oracle and ETL-ing it to create CSV data with the new, altered and deleted/changed rows, and then applying it to Vertica - but I cannot apply the changes like this,
-because explicit data changes in Vertica lead to slow performance.
-So I am looking for some ideas about how I can solve this issue, knowing I cannot:
-
-Alter my ORACLE production structure.
-Use ORACLE env resources for filtering the data.
-Use insert, update or delete statements in my Vertica load process.
-
-Things I depend on:
-
-The use of copy command 
-Data consistency 
-A max 60-minute window (every 60 minutes, new/altered data needs to go to Vertica).
-
-I have seen Continuent data replication, but it seems that nobody wants to sell their product - I cannot get in touch with them.
-","1. will loading the whole data to a new table
-and then replacing them be acceptable?
-copy new() ...
--- you can swap tables in one command:
-alter table old,new,swap rename to swap,old,new;
-truncate new;
-
-
-2. Extract data from Oracle(in .csv format) and load it using Vertica COPY command. Write a simple shell script to automate this process.
-I used to use Talend (ETL), but it was very slow, so I moved to the conventional process and it has really worked for me. Currently processing 18M records, my entire process takes less than 2 minutes.
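-For reference, a minimal sketch of such a COPY load (the file path and table name are placeholders):
-COPY my_schema.my_table FROM '/data/oracle_delta.csv' DELIMITER ',' DIRECT;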
-
-3. Recently we implemented a real-time refresh from SQL Server (130 databases) to a centralized data warehouse in Vertica using Debezium and Kafka streaming. In between, we wrote a script to identify inserts/updates/deletes and apply the same in Vertica.
-The answer by Avinash about using a shell script sounds excellent.
-",Vertica
-"For the first time in my ""real"" life I will use a binary data type.
-We need to store some kind of barcode.
-My senior team member told me that I should use varbinary, because it's a recommendation from documentation (we use Vertica).
-I said ok, but my curiosity told me ""Why?""
-I thought varbinary or binary types would print on screen in unreadable text, after select. But it doesn't happen.
-So I tested in Vertica and SQLite and they gave me a proper answer.
-I create a table and insert data.
-create table TEST_VARBINARY_2
-(
-    id int,
-    va_r binary(5)
-);
-
-insert into TEST_VARBINARY_2 (id, va_r)
-values (1, '11111')
-
-And this is the answer.
-
-Apparently the database can store string in the varbinary.
-So my question is: why do we use char/varchar instead of varbinary/binary?
-Varbinary/binary types can store data more efficiently than varchar/char - so why do we need varchar/char?
-Could you give me examples or a link to documentation when this question is discussed?
-UPDATE:
-I believe I found my answer in the comments:
-
-Not all RDBMS have binary type
-Not all RDBMS support string functions for binary types
-
-","1. Basically, because bytes are not the same as characters.
-BINARY/VARBINARY store strings of bytes. But those bytes may correspond to printable ASCII characters
-https://docs.vertica.com/24.1.x/en/sql-reference/data-types/binary-data-types-binary-and-varbinary/ says:
-
-Like the input format, the output format is a hybrid of octal codes and printable ASCII characters. A byte in the range of printable ASCII characters (the range [0x20, 0x7e]) is represented by the corresponding ASCII character, with the exception of the backslash ('\'), which is escaped as '\\'. All other byte values are represented by their corresponding octal values. For example, the bytes {97,92,98,99}, which in ASCII are {a,\,b,c}, are translated to text as 'a\\bc'.
-
-This is why your string '1111' printed normally. Those are printable ASCII characters. They're actually the byte value 49, but when output to a text display they are printable characters.
-These binary string types store only bytes. If you want to store characters that use other encoding besides ASCII, or use a collation to guide sorting and character comparisons, you must use CHAR/VARCHAR and possibly a locale.
-You said you're using Vertica. https://docs.vertica.com/24.1.x/en/admin/about-locale/locale-and-utf-8-support/ says:
-
-Vertica database servers expect to receive all data in UTF-8, and Vertica outputs all data in UTF-8.
-
-
-The following string functions treat VARCHAR arguments as UTF-8 strings (when USING OCTETS is not specified) regardless of locale setting.
-
-(followed by the list of string functions)
-Because UTF-8 characters are variable in length, the length in characters of a string can be different from the length in bytes. The LENGTH() string function reports CHARACTER_LENGTH() when given a CHAR/VARCHAR argument, but reports OCTET_LENGTH() when given a BINARY/VARBINARY argument.
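-A small illustration of that difference (an untested sketch; the accented character is just an example of a multi-byte UTF-8 character):
-SELECT LENGTH('héllo'::VARCHAR)       AS char_len,  -- 5 characters
-       OCTET_LENGTH('héllo'::VARCHAR) AS byte_len,  -- 6 bytes in UTF-8
-       LENGTH('héllo'::VARBINARY)     AS bin_len;   -- 6, counted as bytes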
-Sorting is another important property of strings. When sorting binary data, the byte values are used for the order. Likewise if sorting character data with a binary collation. But if you want accurate sorting for a specific locale, the byte order is not necessarily the correct order for a given locale.
-Read https://docs.vertica.com/24.1.x/en/admin/about-locale/ for more about locale in Vertica.
-",Vertica
-"I need to execute a SQL query, which converts a String column to a Array and then validate the size of that array
-I was able to do it easily with postgresql:
-e.g.
-select
-cardinality(string_to_array('a$b','$')),
-cardinality(string_to_array('a$b$','$')),
-cardinality(string_to_array('a$b$$$$$','$')),
-
-But for some reason, converting a string to an array in Vertica is not that simple. I saw these links:
-https://www.vertica.com/blog/vertica-quick-tip-dynamically-split-string/
-https://forum.vertica.com/discussion/239031/how-to-create-an-array-in-vertica
-and many more, but none of them helped.
-I also tried using:
-select  REGEXP_COUNT('a$b$$$$$','$')
-
-But I get an incorrect value: 1.
-How can I convert a string to an array in Vertica and get its length?
-","1. $ has a special meaning in a regular expression.  It represents the end of the string.
-Try escaping it:
-select REGEXP_COUNT('a$b$$$$$', '[$]')
-
-
-2. You could create a UDx scalar function (UDSF) in Java, C++, R or Python. The input would be a string and the output would be an integer. https://www.vertica.com/docs/9.2.x/HTML/Content/Authoring/ExtendingVertica/UDx/ScalarFunctions/ScalarFunctions.htm
-This will allow you to use language specific array logic on the strings passed in. For example in python, you could include this logic:
-input_list = input.split(""$"")
-filtered_input_list = list(filter(None, input_list))
-list_count = len(filtered_input_list)
-
-These examples are a good starting point for writing UDx's for Vertica. https://github.com/vertica/UDx-Examples
-
-3. I wasn't able to convert to an array, but I am able to get the length of the values.
-What I do is convert to rows and use count - it's not the best performance-wise,
-but this way I am also able to do manipulation like filtering each value between delimiters - and I don't need to use [] for characters like $.
-select (select count(1)   
-        from (select StringTokenizerDelim('a$b$c','$') over ()) t)  
-
-Return 3
-",Vertica
-"Is it possible to have mutliple listagg statements in a query. I know in oracle I can have multiple listagg columns. I am getting an error when attempting to build a single table with the columns.
-The WITHIN GROUP (ORDER BY ID) doesnt work either in Vertica
-    SELECT ID,
-       LISTAGG(DISTINCT serv_yr) AS SRV_YR,
-       LISTAGG(DISTINCT serv_yrmo) AS SRV_YRMO
-    FROM table_x x 
-    GROUP BY 1
-
-SQL Error [5366] [0A000]: [Vertica][VJDBC](5366) 
-ERROR: User defined aggregate cannot be used in query with other distinct aggregates
-
-","1. The error message - at least in Version 23.x - is, in fact:
-ERROR 10752:  User defined distinct aggregate cannot be used in query with other distinct aggregates.
-It's the two DISTINCT expressions in one select list that cause the issue here: you can only group by one set of grouping columns in the same query - and each DISTINCT expression is translated into its own set of grouping columns.
-This is the third solution I found to the problem: avoid DISTINCT by using Vertica's collection type that does not allow duplicate entries: the SET. My favourite, actually.
-I kept solution two and one at the bottom of this.
-Only that it returns a SET literal rather than a string in CSV format. But you could even change it further to CSV format, by applying a TO_JSON() function to the resulting SET - and optionally removing the square brackets of the JSON notation.
-WITH
--- some input ...
-table_x(id,serv_yr,serv_yrmo) AS (
-            SELECT  1, 2022, 202201
-  UNION ALL SELECT  1, 2022, 202204
-  UNION ALL SELECT  1, 2022, 202207
-  UNION ALL SELECT  1, 2022, 202207
-  UNION ALL SELECT  1, 2022, 202210
-  UNION ALL SELECT  1, 2023, 202301
-  UNION ALL SELECT  1, 2023, 202304
-  UNION ALL SELECT  1, 2023, 202307
-  UNION ALL SELECT  1, 2023, 202307
-  UNION ALL SELECT  1, 2023, 202310
-  UNION ALL SELECT  2, 2022, 202201
-  UNION ALL SELECT  2, 2022, 202204
-  UNION ALL SELECT  2, 2022, 202207
-  UNION ALL SELECT  2, 2022, 202207
-  UNION ALL SELECT  2, 2022, 202210
-  UNION ALL SELECT  2, 2023, 202301
-  UNION ALL SELECT  2, 2023, 202304
-  UNION ALL SELECT  2, 2023, 202307
-  UNION ALL SELECT  2, 2023, 202307
-  UNION ALL SELECT  2, 2023, 202310
-)
-SELECT
-  id
-, IMPLODE(serv_yr)  ::SET[INT] AS srv_yr
-, IMPLODE(serv_yrmo)::SET[INT] AS srv_yrmo
-FROM table_x
-GROUP BY 1;
-
-This is the second solution I found to the problem - and I find it better. I kept the old one at the bottom of this.
-Get one of the two DISTINCT-s out of the way in one subquery, and apply the second DISTINCT in the LISTAGG expression in the outermost query, which selects from the mentioned subquery:
--- some input ...
-table_x(id,serv_yr,serv_yrmo) AS (
-            SELECT  1, 2022, 202201
-  UNION ALL SELECT  1, 2022, 202204
-  UNION ALL SELECT  1, 2022, 202207
-  UNION ALL SELECT  1, 2022, 202207
-  UNION ALL SELECT  1, 2022, 202210
-  UNION ALL SELECT  1, 2023, 202301
-  UNION ALL SELECT  1, 2023, 202304
-  UNION ALL SELECT  1, 2023, 202307
-  UNION ALL SELECT  1, 2023, 202307
-  UNION ALL SELECT  1, 2023, 202310
-  UNION ALL SELECT  2, 2022, 202201
-  UNION ALL SELECT  2, 2022, 202204
-  UNION ALL SELECT  2, 2022, 202207
-  UNION ALL SELECT  2, 2022, 202207
-  UNION ALL SELECT  2, 2022, 202210
-  UNION ALL SELECT  2, 2023, 202301
-  UNION ALL SELECT  2, 2023, 202304
-  UNION ALL SELECT  2, 2023, 202307
-  UNION ALL SELECT  2, 2023, 202307
-  UNION ALL SELECT  2, 2023, 202310
-)
--- real query starts here, replace following comma with ""WITH""
-,
-dedupe_ym AS (
-  SELECT DISTINCT
-    id
-  , serv_yr
-  , serv_yrmo 
-  FROM table_x
-)
-SELECT 
-  id
-, LISTAGG(DISTINCT serv_yr) AS srv_yr
-, LISTAGG(serv_yrmo)        AS srv_yrmo
-FROM dedupe_ym
-GROUP BY 1;
-
-The old solution is also - divide and conquer:
-
-get the first LISTAGG() in one subquery
-join the output of that subquery back to the base table, and group again, this time by the id and the LISTAGG() output obtained in the subquery:
-
-WITH
-yr_agg(id,srv_yr) AS (
-  SELECT                 
-    id
-  , LISTAGG(DISTINCT serv_yr)   AS srv_yr
-  FROM table_x
-  GROUP BY 1
-)
-SELECT
-  table_x.id
-, srv_yr
-, LISTAGG(DISTINCT serv_yrmo) AS srv_yrmo
-FROM table_x
-JOIN yr_agg USING(id)
-GROUP BY 1,2;
-
-",Vertica
-"As documentation says, it should be first creating backing table, so it confused what it should be
-","1. The docs describe how sequences are implemented. Vitess needs a one-row table that keeps the state of the sequence. That state table needs to be un-sharded.
-The sequence values themselves can be used in a sharded table.
-Looking at the docs: https://vitess.io/docs/18.0/reference/features/vitess-sequences/
-user_seq is an un-sharded table; user is a sharded table that utilises the sequence for auto-increment.
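-As an illustrative sketch based on that doc page (table and column names follow the docs' example; adjust for your own keyspaces):
--- in the unsharded keyspace: the one-row backing table
-create table user_seq(id int, next_id bigint, cache bigint, primary key(id)) comment 'vitess_sequence';
-insert into user_seq(id, next_id, cache) values(0, 1, 100);
--- in the sharded keyspace's vschema, the user table then references it via an auto_increment entry pointing at user_seq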
-",Vitess
-"I write the following schema in the Drizzle recommended syntax in order to initialise my project's database in PlanetScale (MySql). After completing the migration process and trying to npx drizzle-kit push:mysql, I got the following error:
-No config path provided, using default 'drizzle.config.ts'
-...
-Error: foreign key constraints are not allowed, see https://vitess.io/blog/2021-06-15-online-ddl-why-no-fk/
-    at PromiseConnection.query (/Users/jcbraz/Projects/sound-scout-13/web-app/node_modules/drizzle-kit/index.cjs:34122:26)
-    at Command.<anonymous> (/Users/jcbraz/Projects/sound-scout-13/web-app/node_modules/drizzle-kit/index.cjs:51859:33)
-    at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
-  code: 'ER_UNKNOWN_ERROR',
-  errno: 1105,
-  sql: 'ALTER TABLE `playlists` ADD CONSTRAINT `playlists_user_id_users_id_fk` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE no action ON UPDATE no action;',
-  sqlState: 'HY000',
-  sqlMessage: 'foreign key constraints are not allowed, see https://vitess.io/blog/2021-06-15-online-ddl-why-no-fk/'
-}
-
-Here's the schema according to the Drizzle ORM documentation:
-import { boolean, decimal, int, mysqlTable, text, timestamp, tinyint, uniqueIndex, varchar } from 'drizzle-orm/mysql-core'
-import { type InferModel } from 'drizzle-orm';
-
-
-export const users = mysqlTable('users', {
-    id: varchar('id', { length: 50 }).primaryKey(),
-    email: varchar('email', { length: 320 }).notNull(),
-    first_name: varchar('first_name', { length: 50 }),
-    last_name: varchar('first_name', { length: 50 }),
-    credits: int('credits').notNull().default(5),
-    stripeCustomerId: text('stripeCustomerId')
-});
-
-export const playlists = mysqlTable('playlists', {
-    id: varchar('id', { length: 30 }).primaryKey(),
-    created_at: timestamp('created_at').notNull().defaultNow(),
-    user_id: varchar('user_id', { length: 50 }).references(() => users.id),
-}, (playlists) => ({
-    userIndex: uniqueIndex('user_idx').on(playlists.user_id)
-}));
-
-export const products = mysqlTable('products', {
-    id: tinyint('id').autoincrement().primaryKey(),
-    price: decimal('price', { precision: 3, scale: 2 }).notNull(),
-    active: boolean('active').default(false),
-    name: varchar('name', { length: 30 }),
-    description: varchar('description', { length: 250 })
-});
-
-export type User = InferModel<typeof users>;
-export type Playlist = InferModel<typeof playlists>;
-export type Product = InferModel<typeof products>;
-
-After writing the schema, I ran npx drizzle-kit generate:mysql, which generated the migration and the corresponding .sql file successfully.
--- UPDATE --
-Found this really good explanation on PlanetScale approach on Foreign keys: https://github.com/planetscale/discussion/discussions/74
-","1. PlanetScale automatically shards your database, which means that it creates multiple SQL servers that break up your database tables, and when this happens the autoincremented primary keys are no longer the same, so you can't use them to look up rows anymore. This makes it so you can't use those autoincremented indexes as foreign keys. There is a detailed article from PlanetScale here. For this reason, you will need to use an alternate solution to generate your unique IDs to search for in your SQL tables.
-You need to know a little bit about how data is stored on disk in SQL, which is using a B-Tree.
-
-The way you are going to search for SQL table rows is generally by index. Because it's a B-Tree, it's fastest to do a binary search. For this reason, you need to be able to generate unique IDs for your rows.
-While you might be tempted to use UUID, the problem with UUID is that the values are not sequential. UUID also uses your network card MAC address which may or may not be a security hazard, but I think the MAC addresses are randomly generated now.
-It's going to be better to use a Universally Unique Lexicographically Sortable Identifier (ULID), which you can npm install ulid. The ULID uses a millisecond timestamp with 80 bits of random data.
-0                   1                   2                   3
- 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-|                      32_bit_uint_time_high                    |
-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-|     16_bit_uint_time_low      |       16_bit_uint_random      |
-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-|                       32_bit_uint_random                      |
-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-|                       32_bit_uint_random                      |
-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
-
-Because the timestamp data is in the most significant bits and the random bits are in the least significant bits, values generated within the same millisecond are not guaranteed to come out in order, so you have to use the monotonicFactory:
-import { monotonicFactory } from 'ulid'
-
-const ulid = monotonicFactory()
-
-// Assume that these calls occur within the same millisecond
-ulid() // 01BX5ZZKBKACTAV9WEVGEMMVRZ
-ulid() // 01BX5ZZKBKACTAV9WEVGEMMVS0
-
-Generating random numbers is EXTREMELY slow and there are better hardware-specific solutions. One such option is to read the time off of the network controller, which has picosecond clocks.
-I'm currently unaware of how JavaScript generates its millisecond clock because x86 does not have a built-in millisecond clock, ARM CPUs do, so you have to use a thread to make a spinner timer in a loop where you increment an integer ticker, then check the current time and if it's the next second then reset the ticker and calculate how many clock ticks per second there was. You can divide the clock tick number by the total number of ticks to convert to seconds. But for the purpose of a database index, you can just disregard converting the clock ticks to seconds and just use the clock ticker value. To read the clock you have to use an interprocess pipe. While this may be complicated, it's probably the option that will work on the most number of servers.
-The problem with the ulid npm package is that it only outputs a string and we want an integer. We would want to store the ULID as two different 64-bit integer values.
-You could just skip that step and instead use the millisecond timestamp from JavaScript's new Date().getTime() stored as a 64-bit integer, and then use a 64-bit random number. You can also use the MAC address of the system if it is randomly generated. While not as secure as 80 bits, it's good enough.
-I'm not a fan of having to pay to generate random numbers, so the npm package I created for this problem is called LinearID (LID) (npm install linearid). My solution uses two 64-bit BigInt words, where one word is a millisecond timer shifted up 22 bits, ORed with a 22-bit spin ticker in the low bits, and the other word is a cryptographically secure random number. This gives an upper limit of 4,194,303 calls to LIDNext() per millisecond; when there are 4,194,303 calls to LIDNext() in one millisecond, the system will wait until the next millisecond. The other option was ULID, which generates 80 bits of random numbers for each uid and would not be able to create 4,194,303 ULIDs per second anyway, so it works out.
-import { datetime, mysqlTable, varbinary } from ""drizzle-orm/mysql-core"";
-
-
-export const UserAccounts = mysqlTable('UserAccounts', {
-    uid: varbinary('uid', { length: 16}).primaryKey(),
-    created: datetime('datetime'),
-  //...
-});
-
-const { LID, LIDPrint, LIDParse, LIDSeconds } = require(""linearid"");
-
-[msb, lsb] = LID();
-const Example = LIDPrint(msb, lsb);
-console.log('\nExample LID hex string:0x' + Example);
-[msb2, lsb2] = LIDParse(Example);
-
-const TimeS = LIDSeconds(msb);
-
-SQL table rows are unsorted, so you can't just insert the data sorted. To get around this you're going to want to search for data by the uid/LID and the timestamp. Because LID stores the millisecond timestamp in the most significant bits, it's quick to extract the second timestamp from the LID to pass into the SQL engine. Currently I don't know how to pass in a 128-bit varbinary in Drizzle ORM, so please comment below with the correct code. Thanks.
-await db.select().from(UserAccounts).where(
-  and(
-    eq(users.created, TimeS), 
-    eq(users.uid, LID())
-  ));
-
-",Vitess
-"I don't want duplicate rows to be added to the database from users who have already clicked the like. What should I do?
-I've seen it said to use upsert, but isn't upsert create if it doesn't exist and update if it exists?
-If you update, there will be no duplicates, but doesn't it waste database resources anyway?
-","1. Here's the schema which you would need to define a constraint in which a user can like a post only one time.
-generator client {
-  provider = ""prisma-client-js""
-}
-
-datasource db {
-  provider = ""postgresql""
-  url      = env(""DATABASE_URL"")
-}
-
-model User {
-  id    Int    @id @default(autoincrement())
-  name  String
-  email String @unique
-  posts Post[]
-  Like  Like[]
-}
-
-model Post {
-  id        Int     @id @default(autoincrement())
-  title     String
-  published Boolean @default(true)
-  author    User    @relation(fields: [authorId], references: [id])
-  authorId  Int
-  Like      Like[]
-}
-
-model Like {
-  id     Int  @id @default(autoincrement())
-  post   Post @relation(fields: [postId], references: [id])
-  postId Int
-  user   User @relation(fields: [userId], references: [id])
-  userId Int
-
-  @@unique([postId, userId])
-}
-
-Here a compound unique key is configured on the combination of postId and userId in the Like table.
-If a user tries to like a post a second time, the database will throw a unique constraint failed error.
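-If you prefer to avoid hitting that error at all, a hedged MySQL-level sketch (outside Prisma) that leans on the same unique key; the values 1 and 42 are just placeholders:
-INSERT INTO `Like` (postId, userId) VALUES (1, 42)
-ON DUPLICATE KEY UPDATE postId = postId; -- effectively a no-op when the like already exists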
-",Vitess
-"I've setup a 3 machine VoltDB cluster with more or less default settings. However there seems to be a constant problem with voltdb eating up all of the RAM heap and not freeing it. The heap size is recommended 2GB.
-Things that I think might be bad in my setup:
-
-I've set 1 min async snapshots
-Most of my queries are AdHoc
-
-Even though it might not be ideal, I don't think it should lead to a problem where memory doesn't get freed.
-I've set up my machines according to 2.3. Configure Memory Management.
-On this image you can see sudden drops in memory usage. These are server shutdowns.
-Heap filling warnings
-DB Monitor, current state of leader server
-I would also like to note that this server is not heavily loaded.
-Sadly, I couldn't find anyone with a similar problem. Most of the advice was targeted at optimizing memory use or decreasing the amount of memory allocated to VoltDB. No one seems to have this memory-leak lookalike.
-","1. one thing that breaks voltdb performance more than anything is the ad-hoc queries.
-I relies on stored procedures and precompiled query plans to work at high volume high speeds.
-if you provide with more information about what you are trying to achieve I may be able to assist more.
-",VoltDB
-"I was wondering if I could get an explanation between the differences between In-Memory cache(redis, memcached), In-Memory data grids (gemfire) and In-Memory database (VoltDB). I'm having a hard time distinguishing the key characteristics between the 3. 
-","1. Cache - By definition means it is stored in memory. Any data stored in memory (RAM) for faster access is called cache. Examples:  Ehcache, Memcache  Typically you put an object in cache with String as Key and access the cache using the Key. It is very straight forward. It depends on the application when to access the cahce vs database and no complex processing happens in the Cache. If the cache spans multiple machines, then it is called distributed cache. For example, Netflix uses EVCAche which is built on top of Memcache to store the users movie recommendations that you see on the home screen.
-In Memory Database - It has all the features of a Cache plus come processing/querying capabilities. Redis falls under this category. Redis supports multiple data structures and you can query the data in the Redis ( examples like get last 10 accessed items, get the most used item etc). It can span multiple machine and is usually very high performant and also support persistence to disk if needed. For example, Twitter uses Redis database to store the timeline information.
-
-2. I don't know about gemfire and VoltDB, but even memcached and redis are very different. Memcached is really simple caching, a place to store variables in a very uncomplex fashion, and then retrieve them so you don't have to go to a file or database lookup every time you need that data. The types of variable are very simple. Redis on the other hand is actually an in memory database, with a very interesting selection of data types. It has a wonderful data type for doing sorted lists, which works great for applications such as leader boards. You add your new record to the data, and it gets sorted automagically.
-So I wouldn't get too hung up on the categories. You really need to examine each tool differently to see what it can do for you, and the application you're building. It's kind of like trying to draw comparisons on nosql databases - they are all very different, and do different things well.
-
-3. I would add that things in the ""database"" category tend to have more features to protect and replicate your data than a simple ""cache"". Cache is temporary (usually) whereas database data should be persistent. Many cache solutions I've seen do not persist to disk, so if you lost power to your whole cluster, you'd lose everything in cache.
-But there are some cache solutions that have persistence and replication features too, so the line is blurry.
-",VoltDB
-"I'm trying to read data from a VoltDB database with Java. Now, it can be done using result sets from SQL statements, but there should (I'm told) be another way of doing it, native to VoltDB, similarly to how data is written to a VoltDB database (with client.callProcedure). I can't figure out how to do that; it seems like it should be a pretty simple thing to do, but I don't see any simple way to do it in client.
-","1. Yes, if you are using client.callProcedure for your writes, you can certainly use the same API for your reads.  Here is a simple example:
-ClientResponse cr = client.callProcedure(procname,parameters);
-VoltTable[] tables = cr.getResults();
-VoltTable table_a = tables[0];
-while (table_a.advanceRow()) {
-    System.out.println(""Column 0 is "" + table_a.getString(0));
-}
-
-Here is a shortened example:
-VoltTable table_a = client.callProcedure(procname,parameters).getResults()[0];
-while (table_a.advanceRow()) {
-    System.out.println(""Column 0 is "" + table_a.getString(0));
-}
-
-Rather than procname and parameters, you could also call AdHoc SQL like this:
-VoltTable table_a = client.callProcedure(""@AdHoc"",""SELECT * FROM helloworld;"").getResults()[0];
-
-These examples above are synchronous or blocking calls.  If you want your application to use asynchronous calls, you can also use a Callback object with the call, so the client would continue executing subsequent code.  When the response is received by the client thread that handles callbacks, the result could be passed off to another thread in our application to read the results.
-You can read more about the API in the Java Client API Javadoc. 
-
-2. If you want to use the client.callProcedure function, you have to create that procedure in VoltDB's user interface first. For example,
-CREATE PROCEDURE insertNumber AS
-INSERT INTO NUMBERS (number1) values (1)
-
-this will create a procedure. When you call it with client.callProcedure(""insertNumber""), that will do the work.
-",VoltDB
-"The signature of the weaviate config property is as follows:
-wvc.config.Property(
-    *,
-    name: str,
-    data_type: weaviate.collections.classes.config.DataType,
-    description: Optional[str] = None,
-    index_filterable: Optional[bool] = None,
-    index_searchable: Optional[bool] = None,
-    nested_properties: Union[weaviate.collections.classes.config.Property, List[weaviate.collections.classes.config.Property], NoneType] = None,
-    skip_vectorization: bool = False,
-    tokenization: Optional[weaviate.collections.classes.config.Tokenization] = None,
-    vectorize_property_name: bool = True,
-) -> None
-
-where:
-skip_vectorization:Whether to skip vectorization of the property. Defaults to `False`.
-and
-vectorize_property_name : Whether to vectorize the property name. Defaults to `True`.
-A vector DB is a mapping of vectors to these objects that have multiple properties. What is the use of vectorizing the property and the property name via the arguments skip_vectorization and vectorize_property_name?
-","1. Duda from Weaviate here! Those options will allow you to have properties and/or properties name that will not be part of the resulting vectorization.
-Check here, for example, how the vectorization of an object in Weaviate works:
-https://weaviate.io/developers/weaviate/config-refs/schema#configure-semantic-indexing
-For instance:
-Unless specified otherwise in the schema, the default behavior is to:
-
-Only vectorize properties that use the text data type (unless skipped)
-Sort properties in alphabetical (a-z) order before concatenating values
-If vectorizePropertyName is true (false by default) prepend the property name to each property value
-Join the (prepended) property values with spaces
-Prepend the class name (unless vectorizeClassName is false)
-Convert the produced string to lowercase
-
-For example, this data object,
-Article = {
-  summary: ""Cows lose their jobs as milk prices drop"",
-  text: ""As his 100 diary cows lumbered over for their Monday...""
-}
-
-will be vectorized as:
-
-article cows lose their jobs as milk prices drop as his 100 diary cows
-lumbered over for their monday...
-
-So if you choose to vectorize the property names, in the above case summary and text, you will add that information to the vector as well. If that were the case, your vector payload would be:
-
-article summary cows lose their jobs as milk prices drop text as his 100 diary cows lumbered over for their monday...
-
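-To make that concrete, here is a minimal sketch using the v4 Python client Property arguments from the question (the collection and property names are made up, and it assumes a vectorizer module is configured for the collection):
-import weaviate
-import weaviate.classes as wvc
-
-client = weaviate.connect_to_local()
-
-client.collections.create(
-    name='Article',
-    properties=[
-        # included in the vector, but without prepending the property name
-        wvc.config.Property(
-            name='summary',
-            data_type=wvc.config.DataType.TEXT,
-            vectorize_property_name=False,
-        ),
-        # stored and filterable, but left out of the vector entirely
-        wvc.config.Property(
-            name='internal_notes',
-            data_type=wvc.config.DataType.TEXT,
-            skip_vectorization=True,
-        ),
-    ],
-)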
-Let me know if that helps?
-",Weaviate
-"My yaml is as follows
-version: '3.4'
-services:
-  weaviate:
-    image: cr.weaviate.io/semitechnologies/weaviate:1.25.0
-    restart: on-failure:0
-    ports:
-    - 8080:8080
-    - 50051:50051
-    volumes:
-    - /home/nitin/repo/central/Gen_AI/weaviate/volume:/var/lib/weaviate
-    environment:
-      QUERY_DEFAULTS_LIMIT: 20
-      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'true'
-      PERSISTENCE_DATA_PATH: ""/var/lib/weaviate""
-      DEFAULT_VECTORIZER_MODULE: 'none'
-      ENABLE_MODULES: text2vec-transformers
-      TRANSFORMERS_INFERENCE_API: http://t2v-transformers:8080
-      CLUSTER_HOSTNAME: 'tenant_1'
-
-when I try to spin up a local weaviate server with
-sudo docker compose -f ./docker-compose.yaml up
-I get
-{""action"":""transformer_remote_wait_for_startup"",""error"":""send check ready request: Get \""http://t2v-transformers:8080/.well-known/ready\"": dial tcp: lookup t2v-transformers on 127.0.0.11:53: server misbehaving"",""level"":""warning"",""msg"":""transformer remote inference service not ready"",""time"":""2024-05-20T07:09:47Z""}
-
-I do not know my way around the yaml much. Please help me get the weaviate service running locally
-","1. That's because of ENABLE_MODULES: text2vec-transformers
-If you enable the transformers module (docs), Weaviate expects that it can reach that service (in this specific example: TRANSFORMERS_INFERENCE_API: http://t2v-transformers:8080). In your docker-compose file, the t2v-transformers service isn't defined, hence Weaviate can't find it.
-To solve your issue, you can do two things.
-Run Weaviate without the transformers module
-You want this if you bring your own vector embeddings.
-You simply run it without any modules, as can be found in the docs here.
----
-version: '3.4'
-services:
-  weaviate:
-    command:
-    - --host
-    - 0.0.0.0
-    - --port
-    - '8080'
-    - --scheme
-    - http
-    image: cr.weaviate.io/semitechnologies/weaviate:1.25.1
-    ports:
-    - 8080:8080
-    - 50051:50051
-    volumes:
-    - weaviate_data:/var/lib/weaviate
-    restart: on-failure:0
-    environment:
-      QUERY_DEFAULTS_LIMIT: 25
-      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'true'
-      PERSISTENCE_DATA_PATH: '/var/lib/weaviate'
-      DEFAULT_VECTORIZER_MODULE: 'none'
-      ENABLE_MODULES: ''
-      CLUSTER_HOSTNAME: 'node1'
-volumes:
-  weaviate_data:
-...
-
-Run Weaviate with the transformers module
-You should use this one if you want to locally vectorize your data using the transformers container, as can be found in the docs here.
-version: '3.4'
-services:
-  weaviate:
-    image: cr.weaviate.io/semitechnologies/weaviate:1.25.1
-    restart: on-failure:0
-    ports:
-    - 8080:8080
-    - 50051:50051
-    environment:
-      QUERY_DEFAULTS_LIMIT: 20
-      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'true'
-      PERSISTENCE_DATA_PATH: ""./data""
-      DEFAULT_VECTORIZER_MODULE: text2vec-transformers
-      ENABLE_MODULES: text2vec-transformers
-      TRANSFORMERS_INFERENCE_API: http://t2v-transformers:8080
-      CLUSTER_HOSTNAME: 'node1'
-  t2v-transformers:
-    image: cr.weaviate.io/semitechnologies/transformers-inference:sentence-transformers-multi-qa-MiniLM-L6-cos-v1
-    environment:
-      ENABLE_CUDA: 0 # set to 1 to enable
-      # NVIDIA_VISIBLE_DEVICES: all # enable if running with CUDA
-
-",Weaviate
-"My local client creation yml
-version: '3.4'
-services:
-  weaviate:
-    image: cr.weaviate.io/semitechnologies/weaviate:1.25.0
-    restart: on-failure:0
-    ports:
-    - 8080:8080
-    - 50051:50051
-    environment:
-      QUERY_DEFAULTS_LIMIT: 20
-      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'true'
-      PERSISTENCE_DATA_PATH: ""./data""
-      DEFAULT_VECTORIZER_MODULE: text2vec-transformers
-      ENABLE_MODULES: text2vec-transformers
-      TRANSFORMERS_INFERENCE_API: http://t2v-transformers:8080
-      CLUSTER_HOSTNAME: 'node1'
-  t2v-transformers:
-    image: semitechnologies/transformers-inference:sentence-transformers-multi-qa-MiniLM-L6-cos-v1
-    environment:
-      ENABLE_CUDA: 0
-
-I create a collection:
-client.collections.create(name = ""legal_sections"", 
-                          properties = [wvc.config.Property(name = ""content"",
-                                                           description = ""The actual section chunk that the answer is to be extracted from"",
-                                                           data_type = wvc.config.DataType.TEXT,
-                                                           index_searchable = True,
-                                                           index_filterable = True,
-                                                           skip_vectorization = True,
-                                                           vectorize_property_name = False)])
-
-I create the data to be uploaded and then I upload it:
-upserts = []
-for content, vector in zip(docs, embeddings.encode(docs)):
-    upserts.append(wvc.data.DataObject(
-        properties = {
-            'content':content
-        },
-        vector = vector
-    ))
-
-client.collections.get(""Legal_sections"").data.insert_many(upserts)
-
-My custom vectors are of length 1024
-upserts[0].vector.shape
-output:
-(1024,)
-
-I get a random uuid:
-coll = client.collections.get(""legal_sections"")
-
-for i in coll.iterator():
-    print(i.uuid)
-    break
-output:
-386be699-71de-4bad-9022-31173b9df8d2
-
-I check the length of the vector that this object at this uuid has been stored with
-coll.query.fetch_object_by_id('386be699-71de-4bad-9022-31173b9df8d2', include_vector=True).vector['default'].__len__()
-output:
-384
-
-This should be 1024. What am I doing wrong?
-","1. This is most probably a bug with weaviate (someone from weaviate can confirm). The embeddings output of the embeddings model has each element of dtype np.float32.
-This leads to 2 issues:
-
-collections.data.insert raises error that it cannot json serialize float32
-collections.data.insert_many simply suppresses this bug and simply encodes using the model given in the yml used to create the client
-
-The above code works just fine if I convert the embeddings using
-vector = [float(i) for i in vector]
-
-That is to say:
-upserts = []
-for content, vector in zip(docs, embeddings.encode(docs)):
-    upserts.append(wvc.data.DataObject(
-        properties = {
-            'content':content
-        },
-        vector = vector
-    ))
-
-gets converted to
-upserts = []
-for content, vector in zip(docs, embeddings.encode(docs)):
-    upserts.append(wvc.data.DataObject(
-        properties = {
-            'content':content
-        },
-        vector = [float(i) for i in vector]
-    ))
-
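-Another way to get plain Python floats, assuming the encoder returns numpy arrays (as sentence-transformers does), is numpy's tolist():
-upserts = []
-for content, vector in zip(docs, embeddings.encode(docs)):
-    upserts.append(wvc.data.DataObject(
-        properties={'content': content},
-        vector=vector.tolist()
-    ))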
-code to replicate the issue with np.float32
-The following code works if you don't pass the vector through the np.array with explicitly specifying the np.float32 dtype
-import weaviate
-import numpy as np
-client = weaviate.connect_to_local()
-
-jeopardy = client.collections.get(""JeopardyQuestion"")
-uuid = jeopardy.data.insert(
-    properties={
-        ""question"": ""This vector DB is OSS and supports automatic property type inference on import"",
-        ""answer"": ""Weaviate"",
-    },
-    vector = list(np.array([0.12345] * 1536, dtype = np.float32))
-)
-
-",Weaviate
-"I have a local weaviate server running. I ingest my data in it and can use it for similarity search usecases
-The process is
-
-Define an embeddings model using langchain
-Ingest the data in the weaviate local client. This includes passing the data through the embedding model
-Use it in similarity search
-
-However, whenever I take down the service with docker compose down or restart the VM I have to do all three steps again. Is that entirely necessary? Can I do better? Presently it takes me about  2 hours to get started
-","1. Hi! Duda from Weaviate here.
-You probably don't have the persistence path mapped in your docker-compose file.
-By default, Weaviate will store data in /var/lib/weaviate
-This means that you must map that path to a persistent volume in docker-compose.yaml
-something like this:
-    volumes:
-    - ./weaviate_data/:/var/lib/weaviate
-
-As a database, it is expected that after vectorizing and ingesting your data, it will still be there whenever you restart.
-Check our docker configurator tool for more information:
-https://weaviate.io/developers/weaviate/installation/docker-compose#configurator
-Let me know if this helps :)
-",Weaviate
-"I need to keep YDB handler() function and execute SQL query simultaneously. But standart code from documentation only suggests static data to upsert. My task is to pass dynamic one. Is there a way to skip handle() or add arguments to execute_query()?
-# Create the session pool instance to manage YDB sessions.
-pool = ydb.SessionPool(driver)
-
-def execute_query(session):
-  # Create the transaction and execute query.
-  return session.transaction().execute(
-    'select 1 as cnt;',
-    commit_tx=True,
-    settings=ydb.BaseRequestSettings().with_timeout(3).with_operation_timeout(2)
-  )
-
-def handler(event, context):
-  # Execute query with the retry_operation helper.
-  result = pool.retry_operation_sync(execute_query)
-  return {
-    'statusCode': 200,
-    'body': str(result[0].rows[0].cnt == 1),
-  }
-
-When I pass dynamic_arg like this i get an error: execute_query() missing 1 required positional argument: 'dynamic_arg'
-dynamic_arg = somefunc()
-
-def execute_query(session, dynamic_arg):
-  # Execute the SQL query
-  return session.transaction().execute(
-    f""""""
-        UPSERT INTO tproger (date, engagementRate, reactionsMedian, subscribers, subscriptions, subscriptionsPct, unsubscriptions, unsubscriptionsPct, views, wau) VALUES
-            ({dynamic_arg}, 1, 2, 3, 4, 5, 6, 7, 8, 9);
-    """""",
-    commit_tx=True,
-    settings=ydb.BaseRequestSettings().with_timeout(3).with_operation_timeout(2)
-  )
-
-def handler():
-  # Execute the query with the retry_operation helper
-  result = pool.retry_operation_sync(execute_query(dynamic_arg))
-  return {
-    'statusCode': 200,
-    'body': str(result[0].rows[0].cnt == 1),
-  }
-
-","1. Functions are first-class objects in Python, so you could do the following:
-dynamic_arg = somefunc()
-    
-def prepare_execute_query(dynamic_arg):
-  def execute_query(session):
-    # Execute the SQL query
-    return session.transaction().execute(
-      f""""""
-      UPSERT INTO tproger (date, engagementRate, reactionsMedian, subscribers, subscriptions, subscriptionsPct, unsubscriptions, unsubscriptionsPct, views, wau) VALUES
-      ({dynamic_arg}, 1, 2, 3, 4, 5, 6, 7, 8, 9);
-      """""",
-      commit_tx=True,
-      settings=ydb.BaseRequestSettings().with_timeout(3).with_operation_timeout(2)
-    )
-  return execute_query
-    
-def handler():
-  # Execute the query with the retry_operation helper
-  result = pool.retry_operation_sync(prepare_execute_query(dynamic_arg))
-  return {
-    'statusCode': 200,
-    'body': str(result[0].rows[0].cnt == 1),
-  }
-
-Also, you'd better pass that dynamic_arg via query parameters instead of constructing a query string on the client side as it might be prone to SQL injection. Here's an example: https://ydb.tech/docs/en/dev/example-app/python/#param-prepared-queries
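-For illustration, a parameterized version of the query above could look roughly like this (a sketch; the DECLARE type must match the actual column type, assumed to be Uint64 here):
-import ydb
-
-def prepare_execute_query(dynamic_arg):
-  def execute_query(session):
-    query = '''
-        DECLARE $date AS Uint64;
-        UPSERT INTO tproger (date, engagementRate, reactionsMedian, subscribers,
-            subscriptions, subscriptionsPct, unsubscriptions, unsubscriptionsPct, views, wau)
-        VALUES ($date, 1, 2, 3, 4, 5, 6, 7, 8, 9);
-    '''
-    prepared = session.prepare(query)
-    return session.transaction().execute(
-      prepared,
-      {'$date': dynamic_arg},
-      commit_tx=True,
-      settings=ydb.BaseRequestSettings().with_timeout(3).with_operation_timeout(2)
-    )
-  return execute_query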
-",YDB
-"Does anyone know how to get the string representation of a type IType in ydb-nodejs-sdk? Maybe there is some enum or method, but I don't find anything. Here's a simple example to show what I mean.
-import { Session, TypedData, Types, Ydb, declareType, snakeToCamelCaseConversion, withTypeOptions } from ""ydb-sdk"";
-
-export type TestModelType = {
-  id: string;
-  value: string;
-};
-
-@withTypeOptions({namesConversion: snakeToCamelCaseConversion})
-export class TestModel extends TypedData implements TestModelType {
-  @declareType(Types.UTF8)
-  id: TestModelType['id']
-
-  @declareType(Types.UTF8)
-  value: TestModelType['value']
-
-  constructor(fields: TestModelType) {
-    super(fields);
-    this.id = fields.id;
-    this.value = fields.value;
-  }
-}
-
-export async function insertTest(session: Session, model: TestModel) {
-  const query = `
-    DECLARE $id AS ${doSomethingToGetFieldType(model.getType('id'))};
-    DECLARE $value AS ${doSomethingToGetFieldType(model.getType('value'))};
-
-    INSERT INTO \`tests\` (\`id\`, \`value\`)
-    VALUES ($id, $value);
-  `;
-
-  const preparedQuery = await session.prepareQuery(query);
-  await session.executeQuery(preparedQuery, {
-    '$id': model.getTypedValue('id'),
-    '$value': model.getTypedValue('value')
-  });
-}
-
-function doSomethingToGetFieldType(field: Ydb.IType): string {
-  // How to get the string representation of a type?
-  // ...
-
-  return '';
-}
-
-","1. There isn't a built-in way to get the string representation directly from the Ydb.IType interface. But, you can create a mapping of Ydb.IType to their string representations.
-YDB Node.js SDK comes with the Type class, which contains various static properties representing different types of YDB. Each of these property is an instance of a particular class or subclass of Type. You can create a map where keys are the class names of these types and the values are their string representation:
-const Types = Ydb.Types;
-
-function getYDBType(type: Ydb.IType): string {
-  if (type === Types.Bool) {
-    return 'Bool';
-  }
-  if (type === Types.Uint8) {
-    return 'Uint8';
-  }
-  if (type === Types.Uint32) {
-    return 'Uint32';
-  }
-  if (type === Types.Uint64) {
-    return 'Uint64';
-  }
-  if (type === Types.Int8) {
-    return 'Int8';
-  }
-  if (type === Types.Int32) {
-    return 'Int32';
-  }
-  if (type === Types.Int64) {
-    return 'Int64';
-  }
-  if (type === Types.Float) {
-    return 'Float';
-  }
-  if (type === Types.Double) {
-    return 'Double';
-  }
-  if (type === Types.UTF8) {
-    return 'Utf8';
-  }
-  if (type === Types.JSON) {
-    return 'Json';
-  }
-  if (type === Types.UUID) {
-    return 'Uuid';
-  }
-  //... add other types as per requirements
-
-  throw new Error('Unsupported YDB type ' + type);
-}
-
-
-function doSomethingToGetFieldType(field: Ydb.IType): string {
-  return getYDBType(field);
-}  
-
-It's a bit manual to do this, but if you don't have that many types and don't want to check it all the time it shouldn't be a problem.
-",YDB
-"how do I add a UNIQUE constraint to a table column when creating table using Yandex YDB SDK? I am using JavaScript version of the SDK, however, any slightest insight about how it is done on any version of the SDK would be much appreciated.
-Searching for clues using ""unique"" keyword in https://github.com/ydb-platform/ydb-nodejs-sdk/tree/main/examples has returned no results.
-","1. You don't need unique indexes in YDB. YDB operates at a serializable isolation level according to https://ydb.tech/en/docs/concepts/transactions, so you can simply check if a record with column value K exists, and if not, create a new one. If both operations (check and insert) are performed in one transaction, then it will be guaranteed that only one record with the value K will be inserted into the database.
-
-2. Currently it's impossible to create a UNIQUE constraint in YDB Tables. It is not supported yet.
-But this feature is listed in the roadmap as far as I can see
-",YDB
-"I am working with YDB (Yandex database) and need to select top CPU time spent queries. How can I get them? Thanks in advance.
-I was looking into documentation but failed to find the answer. I am a rookie in cloud systems like that.
-","1. You can select top CPU queries with the code like this:
-SELECT
-    IntervalEnd,
-    CPUTime,
-    QueryText
-FROM `.sys/top_queries_by_cpu_time_one_minute`
-ORDER BY IntervalEnd DESC, CPUTime DESC
-LIMIT 100
-
-",YDB
-"I've created a table in YDB with async index using following YQL statement:
-(sql)
-create table messages 
-(
-    chat_id uint64,
-    user_id uint64,
-    posted_at uint64,
-    modifed_at Datetime,
-    message_text Utf8,
-    message_media Json,
-    views_cnt Uint64,
-    index idx_user_chat_posted global sync on (user_id, chat_id, posted_at),
-    primary key(chat_id, posted_at, user_id)
-)
-
-Is it possible to convert index type from sync to async?
-","1. Currently YDB doesn't support index type change.
-Though it is possible to create an async index on the same set of columns with an ALTER TABLE statement.
-The new async index will have a different name, and all the queries using the sync index would need to be rewritten.
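-For example, something along the lines of ALTER TABLE messages ADD INDEX idx_user_chat_posted_async GLOBAL ASYNC ON (user_id, chat_id, posted_at); should create the async counterpart (double-check the exact syntax against your YDB version).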
-",YDB
-"I'm working in YelloBrick DB. I want to extract only Date value from DateTime value column.
-In this scenario generally we use query: CAST([dateTime_Value] as DATE)
-So, I'm type-casting from DateTime (timestamp) format to Date format, it's showing Date format along with extra time format (12:00:00).
-This is my YelloBrick Code: (check the RED BOX area)
-
-So, I convert dateTime column data to String (Varchar/TEXT) format and then I use Regular-Expression(or String function will work) to Replace/Match the specific area to extract date-only part: YYYY-MM-DD. (Green BOX area of above image).
-I have already worked in MySQL & PostgreSQL and there this method works fine.
-
-My question is,
-Is there any simple way to extract the date-only format?
-I don't want to use a regex or string function here.
-","1. EOD I found a meaningful solution:
-select CAST(CAST(NOW() AS DATE) AS VARCHAR(20));  
-
-This is the best way: type-cast from timestamp to date format.
-Other simple solutions are:
-select
-   CAST(TO_DATE(CAST(NOW() AS timestamp), 'YYYY-MM-DD') AS VARCHAR(20)) as sol_2,
-   TO_CHAR(CAST(NOW() AS timestamp), 'YYYY-MM-DD') as sol_3;
-
-YelloBrick Validation:
-
-",Yellowbrick
-"I can't use the librarie yellowbrick on my jupyter lab. I want to specifically use the KElbowVisualizer,
-but it's just impossible (and I know that it seems a simple problem but I tried everything).
-I always get the same message: ModuleNotFoundError: No module named 'yellowbrick' // as if it was not installed.
-I tried to install it using pip through JupyterLab (and I also installed it from my cmd), but it said everything was already 'satisfied' (installed).
-I also used !pip list in my JupyterLab and there it was: yellowbrick.
-And just to make sure I restarted my kernel more than once, but it just doesn't work!
-what should I do??
-IMPORTANT: I can't use conda, only pip.
-","1. The command you want to run in your running Jupyter .ipynb file is:
-%pip install yellowbrick
-
-Make a new cell and run that and then restart the kernel. You probably should at least refresh the browser, if not shut down all Jupyter and your browser and restart. Some of the more complex installs involving controlling how things display, like it looks like yellowbrick does, need more than just refreshing the kernel.
-The magic pip command variation was added in 2019 to insure the install occurs in the environment where the kernel is running that backs the active notebook.
-The exclamation point doesn't do that and can lead to issues. You should be also using %pip list to see what is installed in the kernel that your notebook is using.
-See more about the modern %pip install command here. The second paragraph here goes into more details about why the exclamation point may lead to issues.
-",Yellowbrick
-"We want to switch from MySQL to yugabytedb but have some columns with blob data stored in them (type LONGBLOB).
-I see that yugabyte supports BLOB but during migration yugabyte voyager told me that it does not support migration of blob columns.
-Can anybody tell me the proper way to migrate those columns to yugabyte?
-","1. Large objects are not supported in YugabyteDB and are usually a bad idea in PostgreSQL (old implementation)
-You can use the BYTEA datatype for medium-size objects (up to 32MB). For large sizes it is recommended to put them on object storage with a reference from the database.
-If you have seen BLOB as supported, that's probably with the YCQL API (with the same limit as YSQL BYTEA).
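-A minimal sketch of the BYTEA route, with a made-up documents table and the standard psycopg2 driver pointed at YSQL (connection defaults assumed):
-import psycopg2
-
-# YSQL is PostgreSQL-compatible, so any PostgreSQL driver works
-conn = psycopg2.connect(host='127.0.0.1', port=5433, dbname='yugabyte', user='yugabyte')
-cur = conn.cursor()
-cur.execute('CREATE TABLE IF NOT EXISTS documents (id BIGINT PRIMARY KEY, payload BYTEA)')
-
-with open('photo.jpg', 'rb') as f:
-    blob = f.read()
-
-# psycopg2.Binary wraps raw bytes for the BYTEA column
-cur.execute('INSERT INTO documents (id, payload) VALUES (%s, %s)', (1, psycopg2.Binary(blob)))
-conn.commit()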
-",YugabyteDB
-"I was looking to get all ip's of my hosts for all masters and tservers, do we have any specific command through which I can list all of the host ip's?
-","1. If using YSQL, the yb_servers() function can be helpful:
-yugabyte=# SELECT host, cloud, region, zone FROM yb_servers() ORDER BY zone;
-    host    | cloud |  region   |   zone
--------------+-------+-----------+------------
- 10.37.1.65  | aws   | us-east-1 | us-east-1a
- 10.37.1.204 | aws   | us-east-1 | us-east-1b
- 10.37.1.249 | aws   | us-east-1 | us-east-1c
-(3 rows)
-
-But this won’t include the yb-masters.
-For the list of yb-master servers you can use yb-admin list_all_masters.
-",YugabyteDB
-"I'm just starting testing yugabyte using the yugabyted script.  Trying to see if it can replace our current postgresql implementation.  The server is running RHEL 8 Linux. I'm getting an error when trying to load a custom function.  Any advice would be appreciated.
-The shared object named in the CREATE FUNCTION statement seems to be found, but the error message complains that a library it depends on (libmcpdb.so.1) cannot be found.  However that library is in the correct directory and is present in the ldconfig cache.  The function loads without a problem in postgresql.
-# ./bin/ysqlsh -p5432 pcm_production
-ysqlsh (11.2-YB-2.19.0.0-b0)
-Type ""help"" for help.
-
-pcm_production=# CREATE OR REPLACE FUNCTION pcm_match_encrypt_str( TEXT, TEXT )
-pcm_production-# RETURNS BOOLEAN
-pcm_production-# AS '/usr/lib/libpcmdbcrypt.so.1.0', 'pcm_match_encrypt_str'
-pcm_production-# LANGUAGE C
-pcm_production-# IMMUTABLE;
-ERROR:  could not load library ""/usr/lib/libpcmdbcrypt.so.1.0"": libpcmdb.so.1: cannot open shared object file: No such file or directory
-pcm_production=# \q
-
-# ldd /usr/lib/libpcmdbcrypt.so.1.0
-        linux-vdso.so.1 (0x00007ffd5bfe0000)
-        libpq.so.5 => /lib64/libpq.so.5 (0x00007fd34abb1000)
-        libpcmdb.so.1 => /lib/libpcmdb.so.1 (0x00007fd34a94e000)
-        libc.so.6 => /lib64/libc.so.6 (0x00007fd34a58b000)
-
-# ls -l /lib/libpcmdb*
--rwxr-xr-x  1 root root  14480 Aug  8 14:22 /lib/libpcmdbcrypt.so.1.0
-lrwxrwxrwx. 1 root root     24 Oct  9  2020 /lib/libpcmdb.so -> /usr/lib/libpcmdb.so.1.0
-lrwxrwxrwx. 1 root root     24 Oct  9  2020 /lib/libpcmdb.so.1 -> /usr/lib/libpcmdb.so.1.0
--r-xr-xr-x. 1 root root 759432 Oct  5  2020 /lib/libpcmdb.so.1.0
-
-# ldconfig -p | grep libpcmdb
-        libpcmdbcrypt.so.1.0 (libc6,x86-64) => /lib/libpcmdbcrypt.so.1.0
-        libpcmdb.so.1 (libc6,x86-64) => /lib/libpcmdb.so.1
-        libpcmdb.so (libc6,x86-64) => /lib/libpcmdb.so
-
-","1. For extensions, PostgreSQL expects the .so files to be under its own directory and not on the system library path.
-For reference, see https://docs.yugabyte.com/preview/explore/ysql-language-features/pg-extensions/#pgsql-postal-example for the steps for setting up a custom plugin.
-In this case, just put the .so file in the directory $(yugabyte/postgres/bin/pg_config --pkglibdir). Additionally, any .control and .sql files should be copied to the directory $(yugabyte/postgres/bin/pg_config --sharedir)/extension.
-",YugabyteDB
-"Can I issue a sql-query to detect if a database supports colocation? I would like to have a check to see if a database or a table has the colocation property set ON or OFF, true/false… I currently have a scirpt creating 3 tables and one of those will normally fail, the result tells me if the database supports coloc (currently, and not sure if it is foolproof even..). Is it possible to “externalize” this property to some SQL-table or view, or is this possibly already the case.
-","1. There is a function called yb_table_properties that should help (see is_colocated column below)
-yugabyte=# CREATE DATABASE c with COLOCATION = true;
-CREATE DATABASE
-
-yugabyte=# \c c
-You are now connected to database ""c"" as user ""yugabyte"".
-
-c=# CREATE TABLE t (c1 INT);
-CREATE TABLE
-
-c=# SELECT * FROM yb_table_properties('t'::regclass);
- num_tablets | num_hash_key_columns | is_colocated | tablegroup_oid | colocation_id
--------------+----------------------+--------------+----------------+---------------
-        1 |                 0 | t           |       16392 | 1639512673
-(1 row)
-
-There is also a function yb_is_database_colocated that you can run to determine if the database you are connected to is colocated:
-c=# SELECT yb_is_database_colocated();
- yb_is_database_colocated
---------------------------
- t
-(1 row)
-
-c=# \c yugabyte
-You are now connected to database ""yugabyte"" as user ""yugabyte"".
-
-yugabyte=# SELECT yb_is_database_colocated();
- yb_is_database_colocated
---------------------------
- f
-(1 row)
-
-",YugabyteDB
-"I’m getting the following error when trying to create an index:
-yugabyte=# CREATE TABLE DEMO(C1 INT PRIMARY KEY, C2 INT, C3 INT);
-yugabyte=# CREATE UNIQUE INDEX ix_demo ON public.demo USING BTREE(c2 ASC, c3 ASC
-) WITH (FILLFACTOR=95);
-NOTICE:  index method ""btree"" was replaced with ""lsm"" in YugabyteDB
-ERROR:  unrecognized parameter ""fillfactor""
-
-How do I resolve this issue when creating indexes?
-","1. In this case, FILLFACTOR doesn't apply to YugabyteDB storage engine.
-You can remove it from the CREATE INDEX statement.
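-For example, the statement above can simply be written without the storage parameter: CREATE UNIQUE INDEX ix_demo ON public.demo (c2 ASC, c3 ASC);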
-",YugabyteDB
-"I'm trying to install dependencies in a dataflow pipeline. First I used requirements_file flag but i get (ModuleNotFoundError: No module named 'unidecode' [while running 'Map(wordcleanfn)-ptransform-54'])
-the unique package added is unidecode.
-trying a second option I configured a Docker image following the google documentation:
-FROM apache/beam_python3.10_sdk:2.52.0
-
-ENV RUN_PYTHON_SDK_IN_DEFAULT_ENVIRONMENT=1
-
-RUN pip install unidecode
-
-RUN apt-get update && apt-get install -y
-
-ENTRYPOINT [""/opt/apache/beam/boot""]
-
-It was built on the GCP project VM and pushed to Artifact Registry.
-Then I generated the template for the pipeline with:
-python -m mytestcode \
-    --project myprojectid \
-    --region us-central1 \
-    --temp_location gs://mybucket/beam_test/tmp/ \
-    --runner DataflowRunner \
-    --staging_location gs://mybucket/beam_test/stage_output/ \
-    --template_name mytestcode_template \
-    --customvariable 500 \
-    --experiments use_runner_v2 \
-    --sdk_container_image us-central1-docker.pkg.dev/myprojectid/myimagerepo/dataflowtest-image:0.0.1 \
-    --sdk_location container
-
-After all that, I created the job from the template with the UI, but the error is the same. Can someone please help me?
-I understand that the workers are using the default Beam SDK image. Is that correct? How can I fix it?
-","1. You will get this error if you declare it globally at the top of the code. For example, let's consider you are performing unidecode library operation inside a ParDo Function. If that is the case, use the import statement inside the ParDo Function instead of importing it in the top line of code.
-In my case, I imported the datetime library inside my ParDo Function:
-class sample_function(beam.DoFn):
-    def process(self, element):
-        from datetime import datetime
-
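-Applied to the unidecode package from the question, the same pattern would look roughly like this (a sketch with a made-up DoFn name):
-import apache_beam as beam
-
-class CleanTextFn(beam.DoFn):
-    def process(self, element):
-        # importing inside process() makes the module resolve on the Dataflow worker
-        from unidecode import unidecode
-        yield unidecode(element)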
-",Beam
-"I am working with Apache Beam on Google Dataflow and I'm calling 3 functions
-| ""Unnest 1"" >> beam.Map(lambda record: dict_level1(record))
-| ""Unnest 2"" >> beam.Map(lambda record: unnest_dict(record))
-| ""Unnest 3"" >> beam.Map(lambda record: dict_level0(record))
-
-But when I run the job in dataflow I get the error that the name is not defined.
-here is my code
-import apache_beam as beam
-import os
-from apache_beam.options.pipeline_options import PipelineOptions
-
-# this creates the output and generates the template for me
-pipeline_options = {
-    'project': 'c3t-tango-dev',
-    'runner': 'DataflowRunner',
-    'region': 'us-central1',  # Make sure to specify the region correctly
-    'staging_location': 'gs://dario-dev-gcs/dataflow-course/staging',
-    'template_location': 'gs://dario-dev-gcs/dataflow-course/templates/batch_job_df_gcs_flights4'
-}
-
-
-pipeline_options = PipelineOptions.from_dictionary(pipeline_options)
-
-table_schema = 'airport:STRING, list_delayed_num:INTEGER, list_delayed_time:INTEGER'
-table = 'c3t-tango-dev:dataflow.flights_aggr'
-
-class Filter(beam.DoFn):
-    def process(self, record):
-        if int(record[8]) > 0:
-            return [record]
-
-def dict_level1(record):
-    dict_ = {}
-    dict_['airport'] = record[0]
-    dict_['list'] = record[1]
-    return (dict_)
-
-def unnest_dict(record):
-    def expand(key, value):
-        if isinstance(value, dict):
-            return [(key + '_' + k, v) for k, v in unnest_dict(value).items()]
-        else:
-            return [(key, value)]
-
-    items = [item for k, v in record.items() for item in expand(k, v)]
-    return dict(items)
-
-def dict_level0(record):
-    #print(""Record in dict_level0:"", record)
-    dict_ = {}
-    dict_['airport'] = record['airport']
-    dict_['list_Delayed_num'] = record['list_Delayed_num'][0]
-    dict_['list_Delayed_time'] = record['list_Delayed_time'][0]
-    return (dict_)
-
-with beam.Pipeline(options=pipeline_options) as p1:
-    serviceAccount = ""./composer/dags/c3t-tango-dev-591728f351ee.json""
-    os.environ[""GOOGLE_APPLICATION_CREDENTIALS""] = serviceAccount
-
-    Delayed_time = (
-        p1
-        | ""Import Data time"" >> beam.io.ReadFromText(""gs://dario-dev-gcs/dataflow-course/input/voos_sample.csv"",
-                                                     skip_header_lines=1)
-        | ""Split by comma time"" >> beam.Map(lambda record: record.split(','))
-        | ""Filter Delays time"" >> beam.ParDo(Filter())
-        | ""Create a key-value time"" >> beam.Map(lambda record: (record[4], int(record[8])))
-        | ""Sum by key time"" >> beam.CombinePerKey(sum)
-    )
-
-    Delayed_num = (
-        p1
-        | ""Import Data"" >> beam.io.ReadFromText(""gs://dario-dev-gcs/dataflow-course/input/voos_sample.csv"",
-                                                 skip_header_lines=1)
-        | ""Split by comma"" >> beam.Map(lambda record: record.split(','))
-        | ""Filter Delays"" >> beam.ParDo(Filter())
-        | ""Create a key-value"" >> beam.Map(lambda record: (record[4], int(record[8])))
-        | ""Count by key"" >> beam.combiners.Count.PerKey()
-    )
-
-    Delay_table = (
-      {'Delayed_num': Delayed_num, 'Delayed_time': Delayed_time}
-      | ""Group By"" >> beam.CoGroupByKey()
-      | ""Unnest 1"" >> beam.Map(lambda record: dict_level1(record))
-      | ""Unnest 2"" >> beam.Map(lambda record: unnest_dict(record))
-      | ""Unnest 3"" >> beam.Map(lambda record: dict_level0(record))
-      #| beam.Map(print)
-      | ""Write to BQ"" >> beam.io.WriteToBigQuery(
-        table,
-        schema=table_schema,
-        write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
-        create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
-        custom_gcs_temp_location=""gs://dario-dev-gcs/dataflow-course/staging"")
-    )
-
-p1.run()
-
-I ran this code, which generates a template in GCS, and then I uploaded the template to Dataflow as a custom template, pointing to the template, but when it runs I got this error:
-File ""/Users/dario/Repo-c3tech/c3t-tango/./composer/dags/gcp_to_bq_table.py"", line 76, in 
-NameError: name 'dict_level1' is not defined
-
-","1. To solve the above error message  set the --save_main_session pipeline option to True.
-Error:
-  File ""/Users/dario/Repo-c3tech/c3t-tango/./composer/dags/gcp_to_bq_table.py"", line 76, in NameError: name 'dict_level1' is not defined
-
-When you execute locally, such as with DirectRunner, this error may not occur. This error occurs if your DoFns are using values in the global namespace that are not available on the Dataflow worker. For more information refer to this link.
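-With the code from the question, one way to set that option while keeping the existing from_dictionary setup is:
-from apache_beam.options.pipeline_options import PipelineOptions, SetupOptions
-
-pipeline_options = PipelineOptions.from_dictionary(pipeline_options)
-# ship the main session (including module-level functions like dict_level1) to the workers
-pipeline_options.view_as(SetupOptions).save_main_session = True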
-",Beam
-"I have an erlang application that has been compiled already so I do not have access to the source code.
-The folder structure looks like this
-base_app
-  lib
-    package_1
-      ebin
-        - package_1.app
-        - package_1.beam
-      src
-        - package_1.app.src
-    package_2
-      ebin
-        - package_2.app
-        - package_2.beam
-      src
-        - package_1.app.src
-
-I want to be able to call functions from those packages in my elixir code, for example
-:package_1.do_this()
-I have added those folders under the lib folder of my mix project, but that doesn't work for me. I'm unsure how else to go about this.
-",,Beam
-"I have few large monolithic databases and applications in my organisation. Any new feature requests  mainly get built within those monoliths. I am exploring options to change this trend , by making data and changes in data available as events in topics for new microservices to consume.
-This  diagram Event streaming design is one part of the design Change data capture(CDC)  for each of tables in source database  send changes from each table to a topic , use a stream processor to wrangle this data and make meaningful events for microservices to consume. In streaming layer , I am considering options such as Apache Kafka stream , Apache Flink , Apache Beam / Google Cloud  dataflow  or Akka streams.
-My questions are ( I have not worked on  complex stream processing )
-
-For the stream to work on complex joins and look-ups I need to have a snapshot persisted, at least for static data. How do I do this? What options do I have with each of the choices above?
-2. Is this pattern common? What are some of the challenges?
-Kafka looks like a good choice; will you be able to share your view?
-
-I have tried some basic stream processing. It works great, but complexity increases when dealing with multiple streams. Thanks for your help.
-","1. For context, I've worked with nearly all of the streaming technologies that you've mentioned in your post and I've found Apache Flink to be far and away the easiest to work with (although it's worth mentioning just about all of these can accomplish/suit your use-case).
-
-For stream to work on complex joins and look-ups i need to have a snapshot persisted , at least for static data. How do I do this ? What options do I have with each of the choices above? 2 . Is this pattern common ? What are some of the challenges?
-
-Most change-data-capture (CDC) streams that you might use to populate your join values against support the notion of a snapshot. Generally when you make major changes to the data you are syncing or add new sources, etc. It can be useful to perform a one time snapshot that handles loading all of that data into something like a Kafka topic.
-Once it's there, you can use a consumer to determine how you want to read it (i.e. from the beginning of the topic, etc.) and you'll also be able to listen to new changes as they come in to ensure they are reflected in your stream as well.
-One of the key differences here in something like Apache Flink is its use of ""state"" for performing these types of operations. In Flink, you'd read this ""lookup"" stream of snapshots/changes and, somewhere in your streaming job, you could store those values into state, which wouldn't require re-reading them each time. As the values changed, you could update the state such that each message coming through your pipeline would simply look up the corresponding value and use it to perform the ""join"". State itself is fault-tolerant and would be persisted through restarts of the streaming job, failures, etc.
-
-Looks like Kafka is a good choice, will you be able to share your view?
-
-Generally speaking, Kafka is probably the most ubiquitous technology for handling message processing in the streaming world. There are plenty of other options out there, but you really can't go wrong with using Kafka.
-It's a good, and very likely, the best choice (although your mileage may vary)
-",Beam
-"I'm using Azure event grid trigger output binding python v2 model, the trigger works fine, the events type is cloudSchema. For the output i keep getting
-
-message"": ""This resource is configured to receive event in 'CloudEventV10' schema. The JSON received does not conform to the expected schema. Token Expected: StartObject, Actual Token Received: StartArray
-
-import logging
-import azure.functions as func
-import datetime
-
-@app.function_name(name=""eventgrid_output"")
-@app.route(route=""eventgrid_output"")
-@app.event_grid_output(
-    arg_name=""outputEvent"",
-    topic_endpoint_uri=""MyEventGridTopicUriSetting"",
-    topic_key_setting=""MyEventGridTopicKeySetting"")
-def eventgrid_output(eventGridEvent: func.EventGridEvent, 
-         outputEvent: func.Out[func.EventGridOutputEvent]) -> None:
-
-    logging.log(""eventGridEvent: "", eventGridEvent)
-
-    outputEvent.set(
-        func.EventGridOutputEvent(
-            id=""test-id"",
-            data={""tag1"": ""value1"", ""tag2"": ""value2""},
-            subject=""test-subject"",
-            event_type=""test-event-1"",
-            event_time=datetime.datetime.utcnow(),
-            data_version=""1.0""))
-
-How do I set the output type to CloudEventV10?
-Successfully publishing the event triggered the function
-","1. This worked for me:
-You are not using the Event Grid trigger but you are using its arg name in the function, which is eventGridEvent: func.EventGridEvent, and that results in the error
-
-Exception: FunctionLoadError: cannot load the test function: the following parameters are declared in Python but not in function.json: {'eventGridEvent'}
-
-I have used the Event Grid trigger and the Event Grid output binding in the same function to get output.
-For reference check this document
-My Code:
-import azure.functions as func
-import datetime
-import logging
-
-app = func.FunctionApp()
-
-@app.function_name(name=""eventgrid_out"")
-@app.event_grid_trigger(arg_name=""eventGridEvent"")
-@app.event_grid_output(
-    arg_name=""outputEvent"",
-    topic_endpoint_uri=""MyEventGridTopicUriSetting"",
-    topic_key_setting=""MyEventGridTopicKeySetting"")
-def eventgrid_output(eventGridEvent: func.EventGridEvent, 
-         outputEvent: func.Out[func.EventGridOutputEvent]) -> None:
-
-    logging.info(""eventGridEvent: %s"", eventGridEvent)
-
-    outputEvent.set(
-        func.EventGridOutputEvent(
-            id=""test-id"",
-            data={""tag1"": ""value1"", ""tag2"": ""value2""},
-            subject=""test-subject"",
-            event_type=""test-event-1"",
-            event_time=datetime.datetime.utcnow(),
-            data_version=""1.0""))
-
-OUTPUT:
-To check the output binding data sent to  event grid, I used a service bus queue.
-
-
-I think it is not mentioned properly in the documentation, so I have created a documentation correction issue. Follow the issue for further updates.
-",CloudEvents
-"Is it possible to trigger cloud event from pubsub.
-import { cloudEvent } from ""@google-cloud/functions-framework""
-
-export const myCloudEvent = cloudEvent<GoogleDrivePageMessage>(""myTopic"", cloudEvent => {
-  const data = cloudEvent.data;
-  logger.log(""Called pub sub"")
-});
-
-and trigger it by calling publishMessage
-const pubsub = new PubSub(config);
-const topic = pubsub.topic(""myTopic"");
-topic.publishMessage({ data: Buffer.from(messageJson) }, (error, messageId) => {
-      if (error) {
-        logger.log(`There was an error trying to send pubsub message: ${messageId}`, error)
-      }
-    });
-
-Also, I am trying to test this locally but the emulator doesn't seem to even load the cloudEvent function. How would I test this locally without deploying first?
-","1. PubSub does not support CloudEvent format natively (it's close but not similar). When you read data from pubSub, the library translate the PubSub format in Cloud Event format.
-However, to publish messages, you must publish them in PubSub format (attribute and payload, instead of headers and body)
-
-2. I used onMessagePublished to be able to handle the message
-import { onMessagePublished } from ""firebase-functions/v2/pubsub"";
-
-export const handleMessage = onMessagePublished<MyMessage>(""MYTOPIC"", async (event) => {})
-
-
-So it is possible, but not with a cloud event; I still don't understand the difference.
-Also, I would like to add that the documentation is not good and there is no support for local development, so avoid Firebase if you want to build something serious.
-",CloudEvents
-"I'm new to AWS. I'm writing a lambda function in C# to monitor the failed delivery of SNS messages. I encountered an issue where the first parameter passed to the FunctionHandler is empty (not null, but all its fields, like DetailType, Region, Source, Id, etc. are null).
-Following is my code:
-namespace SnsLogProcessor 
-{
-
-    public class EventConfig
-    {
-        public string TopicArn { get; set; }
-        public string status { get; set; }
-    }
-
-    public void FunctionHandler(CloudWatchEvent<EventConfig> evnt, ILambdaContext context)
-    {
-        if (evnt != null)
-        {
-            if (context != null)
-            {
-                context.Logger.LogInformation($""Lambda triggered!"");
-                context.Logger.LogInformation(evnt.DetailType);
-                context.Logger.LogInformation(evnt.Region);
-                context.Logger.LogInformation(evnt.Source);
-                context.Logger.LogInformation(evnt.Id);
-                context.Logger.LogInformation(evnt.Version);
-                context.Logger.LogInformation(evnt.Account);
-
-                if (evnt.Detail != null)
-                {
-                    string status = evnt.Detail.status;
-
-                    if (!string.IsNullOrEmpty(status))
-                        context.Logger.LogInformation(status);
-                    else context.Logger.LogInformation($""Not found."");
-                }
-                else context.Logger.LogInformation(evnt.ToString());
-            }
-        }
-    }
-}
-
-And following is the output from the function after triggered:
-2024-01-24T13:18:01.725-06:00 START RequestId: f24f25e6-10b3-4b63-be75-ae3174bdab70 Version: $LATEST
-2024-01-24T13:18:02.106-06:00   2024-01-24T19:18:02.083Z f24f25e6-10b3-4b63-be75-ae3174bdab70 info Lambda triggered!
-2024-01-24T13:18:02.107-06:00   2024-01-24T19:18:02.107Z f24f25e6-10b3-4b63-be75-ae3174bdab70 info
-2024-01-24T13:18:02.107-06:00   2024-01-24T19:18:02.107Z f24f25e6-10b3-4b63-be75-ae3174bdab70 info
-2024-01-24T13:18:02.107-06:00   2024-01-24T19:18:02.107Z f24f25e6-10b3-4b63-be75-ae3174bdab70 info
-2024-01-24T13:18:02.107-06:00   2024-01-24T19:18:02.107Z f24f25e6-10b3-4b63-be75-ae3174bdab70 info
-2024-01-24T13:18:02.107-06:00   2024-01-24T19:18:02.107Z f24f25e6-10b3-4b63-be75-ae3174bdab70 info
-2024-01-24T13:18:02.107-06:00   2024-01-24T19:18:02.107Z f24f25e6-10b3-4b63-be75-ae3174bdab70 info
-2024-01-24T13:18:02.107-06:00   2024-01-24T19:18:02.107Z f24f25e6-10b3-4b63-be75-ae3174bdab70 info Amazon.Lambda.CloudWatchEvents.CloudWatchEvent`1[SnsLogProcessor.EventConfig]
-2024-01-24T13:18:02.164-06:00   END RequestId: f24f25e6-10b3-4b63-be75-ae3174bdab70 
-
-I tried the code on a live AWS environment and got the above results. I verified that my account, and the function I created, have all the permissions for logs. As you can see, all the fields of the event object, including DetailType, Region, etc., are null.
-Thank you so very much for reading this! Any help would be greatly appreciated!
-","1. Your lambda is not listening to CloudWatch Events, it's listening to a CloudWatch log group. They are completely different things. The former is known as EventBridge Events these days.
-CloudWatch log stream events have this shape:
-{
-    ""awslogs"": {
-        ""data"": ""base64-encoded gzipped JSON with log message""
-    }
-}
-
-You should use the type CloudWatchLogsEvent to deserialize it:
-public async Task FunctionHandler(CloudWatchLogsEvent evnt, ILambdaContext context)
-{
-    await Console.Out.WriteLineAsync(evnt.Awslogs.DecodeData());
-}
-
-",CloudEvents
-"I'm deploying Pachyderm on GKE but when I deploy the pipeline (following the https://docs.pachyderm.com/latest/getting_started/beginner_tutorial/) the Pod fails in ImagePullCrashLoopBack giving this error ""no such image"".
-Here, the output of the command ""kubectl get pods"":
-screenshot
-How can I fix the deployment procedure?
-","1. As mentioned in the Slack channel of Pachyderm community, adding the flag --no-expose-docker-socket to the deploy call should solve the issue.
-pachctl deploy google ${BUCKET_NAME} ${STORAGE_SIZE} --dynamic-etcd-nodes=1 --no-expose-docker-socket
-",Pachyderm
-"I have more than one Kubernetes context. When I change contexts, I have been using kill -9  to kill the port-forward in order to redo the pachtctl port-forward & command. I wonder if this is the right way of doing it.
-In more detail:
-I start off being in a Kubernetes context, we'll call it context_x. I then want to change context to my local context, called minikube. I also want to see my repos for this minikube context, but when I use pachctl list-repo, it still shows context_x's Pachyderm repos. When I do pachctl port-forward, I then get an error message about the address being already in use. So I have to ps -a, then kill -9 on those port forward processes, and then do pachctl port-forward command again.
-An example of what I've been doing:
-$ kubectl config use-context minikube
-$ pachctl list-repo #doesn't show minikube context's repos
-$ pachctl port-forward &
-...several error messages along the lines of:
-Unable to create listener: Error listen tcp4 127.0.0.1:30650: bind: address already in use
-$ ps -a | grep forward
-33964 ttys002    0:00.51 kubectl port-forward dash-12345678-abcde 38080:8080
-33965 ttys002    0:00.51 kubectl port-forward dash-12345679-abcde 38081:8081
-37245 ttys002    0:00.12 pachctl port-forward &
-37260 ttys002    0:00.20 kubectl port-forward pachd-4212312322-abcde 30650:650
-$ kill -9 37260
-$ pachctl port-forward & #works as expected now
-
-Also, kill -9 on the pachctl port-forward process 37245 doesn't work, it seems like I have to kill -9 on the kubectl port-forward
-","1. You can specify the port if you want, as a different one using -p flag as mentioned in docs Is there a reason of not doing it?
-Also starting processes in background and then sending it a SIGKILL causes the resources to be unallocated properly so when you try to join again you might see it giving errors since it cannot allocate the same port again. So try running it without & at the end.
-So whenever you change the context all you need to do is CTRL + C and start it again, this will release the resources properly and gain thema gain.
-
-2. Just wanted to update this answer for anyone who finds it—pachctl now supports contexts, and a Pachyderm context includes a reference to its associated kubectl context. When you switch to a new pachctl context, pachctl will now use the associated kubectl context automatically (you'll still need to switch contexts in kubectl)
-",Pachyderm
-"I have a JSON configuration for my pipeline in Pachyderm:
-{
-    ""pipeline"": {
-        ""name"": ""mopng-beneficiary-v2""
-    },
-    ""input"": {
-        ""pfs"": {
-            ""repo"": ""mopng_beneficiary_v2"",
-            ""glob"": ""/*""
-        }
-    },
-    ""transform"": {
-        ""cmd"": [""python3"", ""/pclean_phlc9h6grzqdhm6sc0zrxjne_UdOgg.py /pfs/mopng_beneficiary_v2/euoEQHIwIQTe1wXtg46fFYok.csv /pfs/mopng_beneficiary_v2//Users/aviralsrivastava/Downloads/5Feb18_master_ujjwala_latlong_dist_dno_so_v7.csv /pfs/mopng_beneficiary_v2//Users/aviralsrivastava/Downloads/ppac_master_v3_mmi_enriched_with_sanity_check.csv /pfs/mopng_beneficiary_v2/Qc.csv""],
-        ""image"": ""mopng-beneficiary-v2-image""
-    }
-}
-
-And my docker file is as follows:
-FROM ubuntu:14.04
-
-# Install opencv and matplotlib.
-RUN apt-get update \
-    && apt-get upgrade -y \
-    && apt-get install -y unzip wget build-essential \
-        cmake git pkg-config libswscale-dev \
-        python3-dev python3-numpy python3-tk \
-        libtbb2 libtbb-dev libjpeg-dev \
-        libpng-dev libtiff-dev libjasper-dev \
-        bpython python3-pip libfreetype6-dev \
-    && apt-get clean \
-    && rm -rf /var/lib/apt
-
-RUN sudo pip3 install matplotlib
-RUN sudo pip3 install pandas
-
-# Add our own code.
-ADD pclean.py /pclean.py
-
-However, when I run my command to create the pipeline:
-pachctl create-pipeline -f https://raw.githubusercontent.com/avisrivastava254084/learning-pachyderm/master/pipeline.json
-
-The files are existing in the pfs:
-pachctl put-file mopng_beneficiary_v2 master -f /Users/aviralsrivastava/Downloads/pclean_phlc9h6grzqdhm6sc0zrxjne_UdOgg.py
-➜  ~ pachctl put-file mopng_beneficiary_v2 master -f /Users/aviralsrivastava/Downloads/5Feb18_master_ujjwala_latlong_dist_dno_so_v7.csv
-➜  ~ pachctl put-file mopng_beneficiary_v2 master -f /Users/aviralsrivastava/Downloads/ppac_master_v3_mmi_enriched_with_sanity_check.csv
-➜  ~ pachctl put-file mopng_beneficiary_v2 master -f /Users/aviralsrivastava/Downloads/euoEQHIwIQTe1wXtg46fFYok.csv
-
-It is worth noting that I am getting this from the logs command (pachctl get-logs --pipeline=mopng-beneficiary-v2):
-container ""user"" in pod ""pipeline-mopng-beneficiary-v2-v1-lnbjh"" is waiting to start: trying and failing to pull image
-
-","1. As Matthew L Daniel commented, the image name looks funny because it has no prefix. By default, Pachyderm pulls Docker images from Dockerhub, and Dockerhub prefixes images with the user that owns them (e.g. maths/mopng-beneficiary-v2-image)
-Also, I think you might need to change the name of your input repo to be more distinct from the name of the pipeline. Pachyderm canonicalizes repo names to meet Kubernetes naming requirements, and mopng-beneficiary-v2 and mopng_beneficiary_v2 might canonicalize to the same repo name (you might be getting an error like repo already exists). Try renaming the input repo to mopng_beneficiary_input or some such.
-",Pachyderm
-"I am new to Pachyderm.
-I have a pipeline to extract, transform and then save in the db.
-Everything is already written in nodejs and dockerized.
-Now, I would like to move and use pachyderm.
-I tried following the python examples they provided, but creating this new pipeline always fails and the job never starts.
-All my code does is take the /pfs/data and copy it to /pfs/out. 
-Here is my pipeline definition
-{
-    ""pipeline"": {
-        ""name"": ""copy""
-    },
-    ""transform"": {
-        ""cmd"": [""npm"", ""start""],
-        ""image"": ""simple-node-docker""
-    },
-    ""input"": {
-        ""pfs"": {
-            ""repo"": ""data"",
-            ""glob"": ""/*""
-        }
-    }
-}
-
-All that happens is that the pipeline fails and the job never starts.
-Is there a way to debug on why the pipeline is failing?
-Is there something special about my docker image that needs to happen?
-","1. Offhand I see two possible issues:
-
-The image name doesn't have a prefix. By default, images are pulled from dockerhub, and dockerhub images are prefixed with the user who owns the image (e.g. maths/simple-node-docker)
-The cmd doesn't seem to include a command for copying anything. I'm not familiar with node, but it looks like this starts npm and then does nothing else. Perhaps npm loads and runs your script by default? If so, it might help to post your script as well (a sketch of an explicit copy command is shown below).
-
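-As a minimal sketch of the second point (assuming the image has a shell available; the myuser/ prefix is a placeholder for your Dockerhub account), an explicit copy command in the transform could look like this:
-""transform"": {
-    ""image"": ""myuser/simple-node-docker"",
-    ""cmd"": [""sh"", ""-c"", ""cp -r /pfs/data/* /pfs/out/""]
-}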
-",Pachyderm
-"I changed the configuration in pulsar's configuration file broker.conf to
-brokerDeleteInactiveTopicsEnabled: true
-brokerDeleteInactiveTopicsMode: delete_when_no_subscriptions
-brokerDeleteInactiveTopicsMaxInactiveDurationSeconds: 120
-
-Then I restarted pulsar's docker container and sent a message to the topic persistent://t_test/proj_test/web_hook_test, but when time passed, the topic was not removed and the admin rest API could query the topic information.
-
-This topic does not have subscribers and the data returned by queries through the interface ""/admin/v2/persistent/t_test/proj_test/web_hook_test/subscriptions"" is empty
-
-I want to clean up inactive topics with pulsar's auto-delete policy
-","1. First, the topic must be created dynamically. Not by pulsar-admin api.
-Second, make sure you also set the
-allowAutoTopicCreation=true
-
-Lastly, make sure you use = in conf/broker.conf instead of :, since the file uses key=value properties syntax rather than YAML.
-Also, try to use the newest Pulsar version.
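-Putting those points together, the relevant lines of conf/broker.conf would look something like this (a sketch using the values from the question, not a complete configuration):
-brokerDeleteInactiveTopicsEnabled=true
-brokerDeleteInactiveTopicsMode=delete_when_no_subscriptions
-brokerDeleteInactiveTopicsMaxInactiveDurationSeconds=120
-allowAutoTopicCreation=true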
-",Pulsar
-"I am working on a project to stream an H.264 video file using RabbitMQ (AMQP protocol) and display it in a web application. The setup involves capturing video frames, encoding them, sending them to RabbitMQ, and then consuming and decoding them on the web application side using Flask and Flask-SocketIO.
-However, I am encountering performance issues with the publishing and subscribing rates in RabbitMQ. I cannot seem to achieve more than 10 messages per second. This is not sufficient for smooth video streaming.
-I need help to diagnose and resolve these performance bottlenecks.
-Here is my code:
-
-Video Capture and Publishing Script:
-
-# RabbitMQ setup
-RABBITMQ_HOST = 'localhost'
-EXCHANGE = 'DRONE'
-CAM_LOCATION = 'Out_Front'
-KEY = f'DRONE_{CAM_LOCATION}'
-QUEUE_NAME = f'DRONE_{CAM_LOCATION}_video_queue'
-
-# Path to the H.264 video file
-VIDEO_FILE_PATH = 'videos/FPV.h264'
-
-# Configure logging
-logging.basicConfig(level=logging.INFO)
-
-@contextmanager
-def rabbitmq_channel(host):
-    """"""Context manager to handle RabbitMQ channel setup and teardown.""""""
-    connection = pika.BlockingConnection(pika.ConnectionParameters(host))
-    channel = connection.channel()
-    try:
-        yield channel
-    finally:
-        connection.close()
-
-def initialize_rabbitmq(channel):
-    """"""Initialize RabbitMQ exchange and queue, and bind them together.""""""
-    channel.exchange_declare(exchange=EXCHANGE, exchange_type='direct')
-    channel.queue_declare(queue=QUEUE_NAME)
-    channel.queue_bind(exchange=EXCHANGE, queue=QUEUE_NAME, routing_key=KEY)
-
-def send_frame(channel, frame):
-    """"""Encode the video frame using FFmpeg and send it to RabbitMQ.""""""
-    ffmpeg_path = 'ffmpeg/bin/ffmpeg.exe'
-    cmd = [
-        ffmpeg_path,
-        '-f', 'rawvideo',
-        '-pix_fmt', 'rgb24',
-        '-s', '{}x{}'.format(frame.shape[1], frame.shape[0]),
-        '-i', 'pipe:0',
-        '-f', 'h264',
-        '-vcodec', 'libx264',
-        '-pix_fmt', 'yuv420p',
-        '-preset', 'ultrafast',
-        'pipe:1'
-    ]
-    
-    start_time = time.time()
-    process = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
-    out, err = process.communicate(input=frame.tobytes())
-    encoding_time = time.time() - start_time
-    
-    if process.returncode != 0:
-        logging.error(""ffmpeg error: %s"", err.decode())
-        raise RuntimeError(""ffmpeg error"")
-    
-    frame_size = len(out)
-    logging.info(""Sending frame with shape: %s, size: %d bytes"", frame.shape, frame_size)
-    timestamp = time.time()
-    formatted_timestamp = datetime.fromtimestamp(timestamp).strftime('%H:%M:%S.%f')
-    logging.info(f""Timestamp: {timestamp}"") 
-    logging.info(f""Formatted Timestamp: {formatted_timestamp[:-3]}"")
-    timestamp_bytes = struct.pack('d', timestamp)
-    message_body = timestamp_bytes + out
-    channel.basic_publish(exchange=EXCHANGE, routing_key=KEY, body=message_body)
-    logging.info(f""Encoding time: {encoding_time:.4f} seconds"")
-
-def capture_video(channel):
-    """"""Read video from the file, encode frames, and send them to RabbitMQ.""""""
-    if not os.path.exists(VIDEO_FILE_PATH):
-        logging.error(""Error: Video file does not exist."")
-        return
-    cap = cv2.VideoCapture(VIDEO_FILE_PATH)
-    if not cap.isOpened():
-        logging.error(""Error: Could not open video file."")
-        return
-    try:
-        while True:
-            start_time = time.time()
-            ret, frame = cap.read()
-            read_time = time.time() - start_time
-            if not ret:
-                break
-            frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
-            frame_rgb = np.ascontiguousarray(frame_rgb) # Ensure the frame is contiguous
-            send_frame(channel, frame_rgb)
-            cv2.imshow('Video', frame)
-            if cv2.waitKey(1) & 0xFF == ord('q'):
-                break
-            logging.info(f""Read time: {read_time:.4f} seconds"")
-    finally:
-        cap.release()
-        cv2.destroyAllWindows()
-
-
-the backend (flask):
-
-app = Flask(__name__)
-CORS(app)
-socketio = SocketIO(app, cors_allowed_origins=""*"")
-
-RABBITMQ_HOST = 'localhost'
-EXCHANGE = 'DRONE'
-CAM_LOCATION = 'Out_Front'
-QUEUE_NAME = f'DRONE_{CAM_LOCATION}_video_queue'
-
-def initialize_rabbitmq():
-    connection = pika.BlockingConnection(pika.ConnectionParameters(RABBITMQ_HOST))
-    channel = connection.channel()
-    channel.exchange_declare(exchange=EXCHANGE, exchange_type='direct')
-    channel.queue_declare(queue=QUEUE_NAME)
-    channel.queue_bind(exchange=EXCHANGE, queue=QUEUE_NAME, routing_key=f'DRONE_{CAM_LOCATION}')
-    return connection, channel
-
-def decode_frame(frame_data):
-    # FFmpeg command to decode H.264 frame data
-    ffmpeg_path = 'ffmpeg/bin/ffmpeg.exe'
-    cmd = [
-        ffmpeg_path,
-        '-f', 'h264',
-        '-i', 'pipe:0',
-        '-pix_fmt', 'bgr24',
-        '-vcodec', 'rawvideo',
-        '-an', '-sn',
-        '-f', 'rawvideo',
-        'pipe:1'
-    ]
-    process = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
-    start_time = time.time()  # Start timing the decoding process
-    out, err = process.communicate(input=frame_data)
-    decoding_time = time.time() - start_time  # Calculate decoding time
-    
-    if process.returncode != 0:
-        print(""ffmpeg error: "", err.decode())
-        return None
-    frame_size = (960, 1280, 3)  # frame dimensions expected by the frontend
-    frame = np.frombuffer(out, np.uint8).reshape(frame_size)
-    print(f""Decoding time: {decoding_time:.4f} seconds"")
-    return frame
-
-def format_timestamp(ts):
-    dt = datetime.fromtimestamp(ts)
-    return dt.strftime('%H:%M:%S.%f')[:-3]
-
-def rabbitmq_consumer():
-    connection, channel = initialize_rabbitmq()
-    for method_frame, properties, body in channel.consume(QUEUE_NAME):
-        message_receive_time = time.time()  # Time when the message is received
-
-        # Extract the timestamp from the message body
-        timestamp_bytes = body[:8]
-        frame_data = body[8:]
-        publish_timestamp = struct.unpack('d', timestamp_bytes)[0]
-
-        print(f""Message Receive Time: {message_receive_time:.4f} ({format_timestamp(message_receive_time)})"")
-        print(f""Publish Time: {publish_timestamp:.4f} ({format_timestamp(publish_timestamp)})"")
-
-        frame = decode_frame(frame_data)
-        decode_time = time.time() - message_receive_time  # Calculate decode time
-
-        if frame is not None:
-            _, buffer = cv2.imencode('.jpg', frame)
-            frame_data = buffer.tobytes()
-            socketio.emit('video_frame', {'frame': frame_data, 'timestamp': publish_timestamp}, namespace='/')
-            emit_time = time.time()  # Time after emitting the frame
-
-            # Log the time taken to emit the frame and its size
-            rtt = emit_time - publish_timestamp  # Calculate RTT from publish to emit
-            print(f""Current Time: {emit_time:.4f} ({format_timestamp(emit_time)})"")
-            print(f""RTT: {rtt:.4f} seconds"")
-            print(f""Emit time: {emit_time - message_receive_time:.4f} seconds, Frame size: {len(frame_data)} bytes"")
-        channel.basic_ack(method_frame.delivery_tag)
-
-@app.route('/')
-def index():
-    return render_template('index.html')
-
-@socketio.on('connect')
-def handle_connect():
-    print('Client connected')
-
-@socketio.on('disconnect')
-def handle_disconnect():
-    print('Client disconnected')
-
-if __name__ == '__main__':
-    consumer_thread = threading.Thread(target=rabbitmq_consumer)
-    consumer_thread.daemon = True
-    consumer_thread.start()
-    socketio.run(app, host='0.0.0.0', port=5000)
-
-
-How can I optimize the publishing and subscribing rates to handle a higher number of messages per second?
-Any help or suggestions would be greatly appreciated!
-I attempted to use threading and multiprocessing to handle multiple frames concurrently and  I tried to optimize the frame decoding function to make it faster but with no success.
-","1. First i dont know so much about rabbitmq but i think it would be handle more then 10 Messages per Seconds.
-You have some Design issues,
-
-you Read the video file to rgb via cv2 and reencode it to h264. The file is already h264 encoded. Its just overhead. Use pyav to read Packet wise the file so you dont need reencode step when Sending.
-
-you execute for each frame the whole ffmpeg proc for decoding as again in encoding step, use pyav to Feed the packages to the Decoder as an stream like Thingy.
-
-
-Following this you remove the singel proc execution per frame. If you want to go with the Procs start it once an work with the Pipes.
-But pyav is way more Developer friendly and give you more cool things as just Work with Pipes
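-To make the first suggestion concrete, here is a minimal sketch (not the asker's code) of reading H.264 packets with PyAV and publishing them to RabbitMQ without re-encoding; it assumes the pyav and pika packages are installed and reuses the exchange, routing key and timestamp framing from the question:
-import struct
-import time
-
-import av
-import pika
-
-connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
-channel = connection.channel()
-
-container = av.open('videos/FPV.h264')
-stream = container.streams.video[0]
-
-for packet in container.demux(stream):
-    if packet.dts is None:
-        continue  # skip flush packets that carry no payload
-    # same framing as the original code: 8-byte timestamp followed by the H.264 data
-    body = struct.pack('d', time.time()) + bytes(packet)
-    channel.basic_publish(exchange='DRONE', routing_key='DRONE_Out_Front', body=body)
-
-connection.close()
-The consumer side can follow the same idea: feed the received packet bytes into a single long-lived PyAV decoder (for example av.CodecContext.create('h264', 'r')) instead of spawning an ffmpeg process per frame.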
-",RabbitMQ
-"I’m using GKE multi-cluster service and have configured two clusters.
-On one cluster I have an endpoint I want to consume and it's hard-coded on address:
-redpanda-0.redpanda.processing.svc.cluster.local.
-Does anyone know how I can reach this from the other cluster?
-EDIT:
-I have exported the service, which is then automatically imported into the other cluster. Previously, I have been able to connect to the other cluster using SERVICE_EXPORT_NAME.NAMESPACE.svc.clusterset.local, but then I had to change the endpoint address manually to this exact address. In my new case, the endpoint address is not configurable.
-","1. Do you know if your clusters are connected to the same VPC? If so, you might as well just create a LoadBalancer service that is exposed only to internal network, which would also allow you to create proper DNS hostname and static internal IP. Something like this:
-apiVersion: v1
-kind: Service
-metadata:
-  name: my-app
-  annotations:
-    networking.gke.io/load-balancer-type: ""Internal""
-spec:
-  type: LoadBalancer
-  ports:
-    - port: 8080
-      name: my-service
-      targetPort: 8080
-  selector:
-    app: my-app
-
-Doing so would be more straightforward than creating service exports and needing an additional API on the GKE side.
-",Redpanda
-"I am wrestling with getting a job in GitLab to spin-up the RedPanda Kafka container so I can run my integration tests.
-Locally, using this image vectorized/redpanda it works fine running my tests from my host (external) to the container.
-My GitLab job looks like this below (the template is just invoking Maven)
-I can connect to the broker but it is when I connect to the nodes that I have the issue.
-I believe I need to change this --advertise-kafka-addr kafka:9092 but no matter what I do it does not connect.
-kafka_tests:
-  extends: .java_e2e_template
-  services:
-  - name: vectorized/redpanda
-    alias: kafka
-    command: [
-      ""redpanda"",
-      ""start"",
-      ""--advertise-kafka-addr kafka:9092"",
-      ""--overprovisioned""
-    ]
-  variables:
-    MVN_PROPERTIES: ""-Dkafka.bootstrap.servers=kafka:9092""
-
-","1. First the container should be from redpandadata instead of vectorized
-docker pull `redpandadata/redpanda:latest
-
-I think gitlab ci runs docker underneath the hood, so you probably have to look at how you expose the host:port. Meaning the advertise address needs to be reachable by your clients.
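-As a hedged sketch (not verified against your runner), the service block with the corrected image could look like the following; note that the flag and its value are split into separate array entries so they reach the CLI as two arguments rather than one string containing a space:
-kafka_tests:
-  extends: .java_e2e_template
-  services:
-  - name: redpandadata/redpanda:latest
-    alias: kafka
-    command: [
-      ""redpanda"",
-      ""start"",
-      ""--advertise-kafka-addr"", ""kafka:9092"",
-      ""--overprovisioned""
-    ]
-  variables:
-    MVN_PROPERTIES: ""-Dkafka.bootstrap.servers=kafka:9092""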
-",Redpanda
-"I have a redpanda container which I want to use as a sink using vector when I am running this nomad job it is giving this error
-Feb 27 08:57:01.931 ERROR rdkafka::client: librdkafka: Global error: Resolve (Local: Host resolution failure): redpanda-0:9092/bootstrap: Failed to resolve 'redpanda-0:9092': Name does not resolve (after 1ms in state CONNECT, 15 identical error(s) suppressed)    
-
-this is my nomad job configuration
-job ""vector-redpanda"" {
-  datacenters = [""dc1""]
-  # system job, runs on all nodes
-  type = ""system""
-  update {
-    min_healthy_time = ""10s""
-    healthy_deadline = ""5m""
-    progress_deadline = ""10m""
-    auto_revert = true
-  }
-  group ""vector"" {
-    count = 1
-    restart {
-      attempts = 3
-      interval = ""10m""
-      delay = ""30s""
-      mode = ""fail""
-    }
-    network {
-      port ""api"" {
-        to = 8787
-      }
-      port ""redpanda_network"" {}
-    }
-    # docker socket volume
-    volume ""docker-sock-ro"" {
-      type = ""host""
-      source = ""docker-sock-ro""
-      read_only = true
-    }
-    ephemeral_disk {
-      size    = 500
-      sticky  = true
-    }
-    task ""vector"" {
-      driver = ""docker""
-      config {
-        image = ""timberio/vector:0.14.X-alpine""
-        ports = [""api"", ""redpanda_network""]
-      }
-      # docker socket volume mount
-      volume_mount {
-        volume = ""docker-sock-ro""
-        destination = ""/var/run/docker.sock""
-        read_only = true
-      }
-      # Vector won't start unless the sinks(backends) configured are healthy
-      env {
-        VECTOR_CONFIG = ""local/vector.toml""
-        VECTOR_REQUIRE_HEALTHY = ""true""
-      }
-      # resource limits are a good idea because you don't want your log collection to consume all resources available
-      resources {
-        cpu    = 500 # 500 MHz
-        memory = 256 # 256MB
-      }
-      # template with Vector's configuration
-      template {
-        destination = ""local/vector.toml""
-        change_mode   = ""signal""
-        change_signal = ""SIGHUP""
-        # overriding the delimiters to [[ ]] to avoid conflicts with Vector's native templating, which also uses {{ }}
-        left_delimiter = ""[[""
-        right_delimiter = ""]]""
-        data=<<EOH
-                data_dir = ""alloc/data/vector/""
-              [api]
-                enabled = true
-                address = ""0.0.0.0:8787""
-                playground = true
-              [sources.logs]
-                type = ""docker_logs""
-              [transforms.records_for_redpanda]
-                type = ""lua""
-                inputs = [ ""logs"" ]
-                version = ""2""
-              [transforms.records_for_redpanda.hooks]
-                process = """"""
-                  function (event, emit)
-                    event.log.job_id = event.log.label.job_id
-                    event.log.task_id = event.log.label.task_id
-                    event.log.run_id = event.log.label.run_id
-                    event.log.created_at = event.log.timestamp
-                    if event.log.run_id ~= """" then
-                      emit(event)
-                    end
-                  end""""""
-              [sinks.my_kafka_sink]
-                type = ""kafka""
-                inputs = [ ""records_for_redpanda"" ]
-                bootstrap_servers = ""redpanda-0:9092""
-                topic = ""logs_aggregation""
-                acknowledgements.enabled = true
-                compression = ""gzip""
-                healthcheck.enabled = false
-                encoding.only_fields = [""job_id"", ""task_id"", ""run_id"", ""message"", ""created_at""]
-                        encoding.timestamp_format = ""unix""
-                        encoding.codec = ""json""
-                message_timeout_ms = 450000
-        EOH
-      }
-      service {
-        check {
-          port     = ""api""
-          type     = ""http""
-          path     = ""/health""
-          interval = ""30s""
-          timeout  = ""5s""
-        }
-      }
-      kill_timeout = ""30s""
-    }
-  }
-}    
-
-I tried the same thing by creating a vector container in the same network as redpanda container network and it worked. I am not sure how I can do it for docker task in nomad.
-","1. You have to change
-bootstrap_servers = ""redpanda-0:9092""
-
-Change it to the IP and port actually allocated for Redpanda. You might be interested in templating it via the Consul integration: https://developer.hashicorp.com/nomad/docs/job-specification/template#consul-integration
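-If the Redpanda job registers a Consul service (an assumption; the service name redpanda below is hypothetical), the sink address can be rendered with the job's [[ ]] template delimiters instead of being hard-coded, roughly like this:
-bootstrap_servers = ""[[ range $i, $s := service ""redpanda"" ]][[ if $i ]],[[ end ]][[ $s.Address ]]:[[ $s.Port ]][[ end ]]""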
-",Redpanda
-"I've deployed the redpanda in 2 EKS cluster and redpanda broker pods are up & running. One EKS cluster is in us-east-1 region and one is in us-east-2 region.
-Now I need to replicate the data between 2 redpanda cluster and for that I need to setup the kafka mirrormaker on the one of the EKS clusters from that cluster the data will be replicated to the another cluster.
-I am facing an issue while establishing connection between the 2 pods of redpanda broker.
-I have the mirrormaker properties file like below:
-`#Primary and Secondary Kafka Cluster Names
-clusters=<primary-cluster-name>, <secondary-cluster-name>
-#Comma-separated list of bootstrap servers for the source kafka cluster
-<primary-cluster-name>.bootstrap.servers=<source-bootstrap-address>:9092
-#Comma-separated list of bootstrap servers for the target kafka cluster
-<secondary-cluster-name>.bootstrap.servers=<target-bootstrap-address>:9092`
-I am confused about what value I need to provide for the bootstrap servers: the Redpanda broker pod IPs or the Redpanda broker node IPs?
-","1. I assume you will have the MM2 deployed on the source cluster correct? So for the source, as long as your MM2 instance is in the same source cluster, you can use the pod ip, (recognized within the k8s cluster), but for your destination, if you are using nodeport, use the node IP.
-I wrote a post explaining a little about the networking, see the ""Using Kafka address and advertised Kafka address in K8s"" part.
-https://redpanda.com/blog/advertised-kafka-address-explanation
-Hope that helps.
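-As a hedged sketch of the MirrorMaker 2 properties (the cluster aliases and addresses are placeholders, and it assumes MM2 runs inside the source EKS cluster):
-clusters = source, target
-source.bootstrap.servers = <source-redpanda-pod-ip>:9092
-target.bootstrap.servers = <target-node-ip>:<nodeport>
-source->target.enabled = true
-source->target.topics = .*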
-",Redpanda
-"I am trying to aggregate sensor data based on time windows and write it to Cassandra once it has reached 30 seconds window (roll-up).
-For example, a sensor named ""temp"" sends 3 readings for 30 seconds. I like to get the average value for this sensor for the last 30 seconds and write the avg value to Cassandra when window completes.
-This is my code
-BasicConfigurator.configure();
-
-        
-        // Create Siddhi Application
-        String siddhiApp = ""define stream SensorEventStream (sensorid string, value double); "" +
-                "" "" +
-                ""@info(name = 'query1') "" +
-                ""from SensorEventStream#window.time(30 sec)  "" +
-                ""select sensorid, avg(value) as value "" +
-                ""group by sensorid "" +
-                ""insert into AggregateSensorEventStream ;"";
-
-        // Creating Siddhi Manager
-        SiddhiManager siddhiManager = new SiddhiManager();
-
-        //Generating runtime
-        SiddhiAppRuntime siddhiAppRuntime = siddhiManager.createSiddhiAppRuntime(siddhiApp);
-
-        //Adding callback to retrieve output events from query
-        siddhiAppRuntime.addCallback(""AggregateSensorEventStream"", new StreamCallback() {
-             
-
-            @Override
-            public void receive(org.wso2.siddhi.core.event.Event[] events) {
-                 EventPrinter.print(events);
-            }
-        });
-
-        //Retrieving input handler to push events into Siddhi
-        InputHandler inputHandler = siddhiAppRuntime.getInputHandler(""SensorEventStream"");
-
-        //Starting event processing
-        siddhiAppRuntime.start();
-
-        //Sending events to Siddhi
-        inputHandler.send(new Object[]{""Temp"", 26d});
-        Thread.sleep(1000);
-        inputHandler.send(new Object[]{""Temp"", 25d});
-        Thread.sleep(1000);
-        inputHandler.send(new Object[]{""Temp"", 24d});
-        Thread.sleep(60000);
-        inputHandler.send(new Object[]{""Temp"", 23d});
-         
-        //Shutting down the runtime
-        siddhiAppRuntime.shutdown();
-
-        //Shutting down Siddhi
-        siddhiManager.shutdown();
-
-And the output is like this
-0 [main] INFO org.wso2.siddhi.core.util.EventPrinter  - [Event{timestamp=1552281656960, data=[Temp, 26.0], isExpired=false}]
-1002 [main] INFO org.wso2.siddhi.core.util.EventPrinter  - [Event{timestamp=1552281657971, data=[Temp, 25.5], isExpired=false}]
-2003 [main] INFO org.wso2.siddhi.core.util.EventPrinter  - [Event{timestamp=1552281658972, data=[Temp, 25.0], isExpired=false}]
-62004 [main] INFO org.wso2.siddhi.core.util.EventPrinter  - [Event{timestamp=1552281718972, data=[Temp, 23.0], isExpired=false}]
-
-From this demo code I see that it emits a running average immediately for each of the first 3 events, then does nothing when the 30-second window elapses, and finally prints 23.
-How can I get a notification when the window rolls up after 30 seconds? I thought that's what the receive function does.
-I am not sure whether I have misunderstood the functionality here. Is this possible with Siddhi at all?
-","1. This is the expected behaviour, The window is a sliding window. Here, when the first event comes, 1st second, the window holds only the first event so the average was 26. Then when the second event arrives, the window has both 26d as well as 25d, then the average in 25.5. Likewise, 3rd second average 25d. Then at 31, 32 and 33rd seconds these events would expire from the window. So when your 4th event comes(63rd second), there is only the latest event in the window, so average will be the value itself. This window calculates average as soon as an event arrives depending on the events received in the last 30 seconds before it. 
-From your question, you seem to want timeBatch window. Here, the average is calculated only at the end of the batch. For instance, in this case, 30th, 60th, 90th second so on. Please see timeBatch doc for samples.
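-As a minimal sketch, keeping the stream definitions from the question, only the window clause of the query changes:
-@info(name = 'query1')
-from SensorEventStream#window.timeBatch(30 sec)
-select sensorid, avg(value) as value
-group by sensorid
-insert into AggregateSensorEventStream;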
-",Siddhi
-"I'm currently in the process of integrating WSO2's Siddhi CEP and Kafka. I want to produce a Siddhi stream by receiving events from Kafka. The Kafka data being received is in JSON format, where each event looks something like this:
-{  
-   ""event"":{  
-      ""orderID"":""1532538588320"",
-      ""timestamps"":[  
-         15325,
-         153
-      ],
-      ""earliestTime"":1532538
-   }
-}
-
-The SiddhiApp that I'm trying to run in the WSO2 stream processor looks like this:
-@App:name('KafkaSiddhi')
-@App:description('Consume events from a Kafka Topic and print the output.')
-
--- Streams
-@source(type='kafka', 
-topic.list = 'order-aggregates',
-partition.no.list = '0',
-threading.option = 'single.thread',
-group.id = 'time-aggregates',
-bootstrap.servers = 'localhost:9092, localhost:2181',
-@map(type='json'))
-define stream TimeAggregateStream (orderID string,timestamps 
-object,earliestTime long);
-
-@sink(type=""log"")
-define stream TimeAggregateResultStream (orderID string, timestamps 
-object, earliestTime long);
-
--- Queries
-from TimeAggregateStream 
-select orderID, timestamps, earliestTime
-insert into TimeAggregateResultStream;
-
-Running this app should log all of the data being updated in the order-aggregates Kafka topic that I'm listening to, but I see no output whatsoever when I click run.
-I can tell that there is some type of interaction between the WSO2 stream processor and the order-aggregates topic, because error messages are outputted in real-time whenever I run the application with inconsistent data types for my stream schema. The error messages look like this:
-[2018-07-25_10-14-37_224] ERROR 
-{org.wso2.extension.siddhi.map.json.sourcemapper.JsonSourceMapper} - 
-Json message {""event"":{""orderID"":""210000000016183"",""timestamps"": 
-[1532538627000],""earliestTime"":1532538627000}} contains incompatible 
-attribute types and values. Value 210000000016183 is not compatible with 
-type LONG. Hence dropping the message. (Encoded) 
-
-However, when I have the schema setup correctly, I receive no output at all when I run the application. I really don't know how to make sense of this. When I try to debug this by putting a breakpoint into the line including 'insert into', the debugger never stops at that line.
-Can anyone offer some insight on how to approach this issue?  
-","1. We have added the object support for json mapper extension in the latest release of the extension. Please download the extension[1] and replace the siddhi-map-json jar in /lib.
-[1] https://store.wso2.com/store/assets/analyticsextension/details/0e6a6b38-f1d1-49f5-a685-e8c16741494d
-",Siddhi
-"I have used following code to insert data in a RDBMS using siddhi.
- @App:name(""CustomerInfoCreator"")
-  @App:description(""Consume events from HTTP and write to TEST_DB"")
-
-@source(type = 'http', receiver.url = ""http://0.0.0.0:8006/production_cust_information"",
-    @map(type = 'json'))
-define stream CustomerStream (id string, customerName string, cibil float, outsandingLoanAmt float, salary float, phoneNumber string, location string, status string, loanType string, loanAmt float, approvalDecision string);
-
-@store(type='rdbms', jdbc.url=""jdbc:sqlserver://localhost:1433;databaseName=samplesiddhi;sendStringParametersAsUnicode=false;encrypt=false"", username=""sa"", password=""pwd"", jdbc.driver.name=""com.microsoft.sqlserver.jdbc.SQLServerDriver"")
-define table CustomerLoanApplication  (id string, customerName string, cibil float, outsandingLoanAmt float, salary float, phoneNumber string, location string, status string, loanType string, loanAmt float, approvalDecision string);
-
--- Store all events to the table
-@info(name = 'query1')
-from CustomerStream
-insert into CustomerLoanApplication
-
-@App:description(""Consume events from HTTP and write to TEST_DB"")
-
-@source(type = 'http', receiver.url = ""http://0.0.0.0:8006/production_cust_information"",
-    @map(type = 'json'))
-define stream CustomerStream (id string, customerName string, cibil float, outsandingLoanAmt float, salary float, phoneNumber string, location string, status string, loanType string, loanAmt float, approvalDecision string);
-
-@store(type='rdbms', jdbc.url=""jdbc:sqlserver://localhost:1433;databaseName=samplesiddhi;sendStringParametersAsUnicode=false;encrypt=false"", username=""sa"", password=""$9Lserver"", jdbc.driver.name=""com.microsoft.sqlserver.jdbc.SQLServerDriver"")
-define table CustomerLoanApplication  (id string, customerName string, cibil float, outsandingLoanAmt float, salary float, phoneNumber string, location string, status string, loanType string, loanAmt float, approvalDecision string);
-
--- Store all events to the table
-@info(name = 'query1')
-from CustomerStream
-insert into CustomerLoanApplication
-
-Inserted data in the table.
-Now I want to retrieve the data via a REST API. I am able to fetch all records using the following payload:
-POST: https://localhost:9743/query
-
-{
-   ""appName"":""CustomerInfoCreator"",
-   ""query"":""from  CustomerLoanApplication select * limit 10 ""
-}
-
-But if I want to fetch the information with a where clause, using the following payload, it throws an exception:
-POST : https://localhost:9743/query
-
-{
-   ""appName"":""CustomerInfoCreator"",
-   ""query"":""from  CustomerLoanApplication(phoneNumber = '587488848484') select * limit 10 ""
-}
-
-Response :
-{
-    ""code"": 1,
-    ""type"": ""error"",
-    ""message"": ""Cannot query: Error between @ Line: 1. Position: 0 and @ Line: 1. Position: 29. Syntax error in SiddhiQL, mismatched input '(' expecting <EOF>.""
-}
-
-I did not find any documentation that provides an example for my need.
-","1. The query should be corrected as
-
-POST : https://localhost:9743/query
-
-{
-   ""appName"":""CustomerInfoCreator"",
-   ""query"":""from CustomerLoanApplication on phoneNumber = '587488848484' select * limit 10""
-}
-
-",Siddhi
-"I'm using StreaMSets Data Collector to download from a Microsoft dataverse API, which uses pagination, supplying a next-page link in the record.
-I'm using an HTTP Processor stage with Pagination = Link in Response Field.
-It works fine when there are more pages, but when there are no more data the field is simply not there and I get ""Link field '/EntityData/'@odata.nextLink'' does not exist in record""; nothing I've tried will get around this.
-${record:exists(""/EntityData/'@odata.nextLink'"")} in the Stop Condition should work.
-StreaSets Data Collector v3.22.0
-It seems like a bug in StreamSets but nothing like that's been fixed according to the Release Notes up to the latest version.
-Can anyone advise of a solution?
-","1. I checked ${record:exists} function in Stop Condition in the later version of SDC, and it works fine. Must've been fixed since SDC 3.22 (which is quite old).
-",StreamSets
-"I have a MSSQL database whose structure is replicated over a Postgres database.
-I've enabled CDC in MSSQL and I've used the SQL Server CDC Client in StreamSets Data Collector to listen for changes in that db's tables.
-But I can't find a way to write to the same tables in Postgres.
-For example I have 3 tables in MSSQL:
-tableA, tableB, tableC. Same tables I have in Postgres.
-I insert data into tableA and tableC. I want those changes to be replicated over Postgres.
-In StreamSets DC, in order to write to Postgres, I'm using JDBC Producer and in the Table Name field I've specified: ${record:attributes('jdbc.tables')}.
-Doing this, the data will be read from tableA_CT, tableB_CT, tableC_CT. Tables created by MSSQL when you enable the CDC option. So I'll end up with those table names in the ${record:attribute('jdbc.tables')}.
-Is there a way to write to Postgres in the same tables as in MSSQL ?
-","1. You can cut the _CT suffix off the jdbc.tables attribute by using an Expression Evaluator with a Header Attribute Expression of:
-${str:isNullOrEmpty(record:attribute('jdbc.tables')) ? '' : 
-  str:substring(record:attribute('jdbc.tables'), 0, 
-    str:length(record:attribute('jdbc.tables')) - 3)}
-
-Note - the str:isNullOrEmpty test is a workaround for SDC-9269.
-
-2. The following expression provides the original table name
-${record:attribute('jdbc.cdc.source_name')}
-
-If you are looking for the original table schema name then you can use
-${record:attribute('jdbc.cdc.source_schema_name')} 
-
-",StreamSets
-"I use Streamsets Data Collector to Load data from Stage 2 database tables (JDBC Query Consumer) using a query and Write loaded data to another Stage 2 Database table (JDBC Producer). I use Init Query as below to delete the previous records before loading data. But this does not delete any record from the table. It would be great if someone can help me.
-
-","1. I was using Init Query in the JDBC Query Consumer to delete previous records; it did not work. I removed the init query from the JDBC Query Consumer and added a Start Event. It deletes previous records, and the pipeline works as I expected. 
-My question in the community: https://community.streamsets.com/show-us-your-pipelines-71/init-query-in-the-jdbc-query-consumer-not-working-for-delete-previous-data-2028?postid=4733#post4733
-
-",StreamSets
-"I have a file that I am consuming into StreamSets and in that I have the following sample:
-Source_id: {String} ""1234""
-Partition_id: {String} ""ABC""
-Key: {String} ""W3E""
-
-(the field names are dynamic; sometimes they change, so we can't hardcode those field names).
-I want to be able to somehow get these to two separate fields so that I can send the entire to a stored procedure that uses dynamic SQL to insert into various tables. For this purpose I need to have two fields with in this format.
-ColumnName: {string} "" 'Source_id', 'Partition_id', 'Key' "" 
-ValueName: {String} ""'1234', 'ABC', 'W3E' ""
-
-I've tried field mappers and other processors but unable to get it working.
-I don't know Java/Groovy well enough to make it work. Any help would be appreciated.
-Thanks
-Regards, NonClever human.
-","1. here's a Groovy option that should do the trick:
-// Sample Groovy code
-records = sdc.records
-String keys = """"
-String values = """"
-for (record in records) {
-    try {
-        keys = """"
-        values = """"
-
-        for (String key : record.value.keySet()) {
-            keys = keys + (keys == """" ? """" : "", "") + key
-            values = values + (values == """" ? """" : "", "") + record.value[key]
-        }
-        record.value = [""keys"": keys, ""values"": values]
-
-        // Write a record to the processor output
-        sdc.output.write(record)
-    } catch (e) {
-        // Write a record to the error pipeline
-        sdc.log.error(e.toString(), e)
-        sdc.error.write(record, e.toString())
-    }
-}
-
-I strongly suggest that you learn the basics of either Groovy or Javascript or Python. That should help a lot with corner cases like this.
-",StreamSets
-"After upgrading strimzi kafka cluster from 0.36.1 to 0.40.0. plain listeners not able to connect to kafka from port 9092. it throws
-in consumers, producers:
-""Can't connect Not authorized to access topics: [Topic authorization failed]""
-in Kafka brokers:
-INFO Principal = User:ANONYMOUS is Denied Operation = Describe from host = xx.xx.xx.xx on resource = Topic:LITERAL:topic-name
-but with TLS via port 9094 both producers and consumers can connect to the cluster. Below is my listener config:
-listeners:
-      - name: plain
-        port: 9092
-        type: nodeport
-        tls: false      
-      - name: tlsnp
-        port: 9094
-        type: nodeport
-        tls: true
-        authentication:
-          type: tls
-authorization:
-      type: simple
-      superUsers:
-        - CN=xxx-user
-
-","1. The user is ANONYMOUS. So that will be a user connecting through the 9092 listener where you don't have any authentication configured. That would likely be also why it is denied the operation because even if you have some ALC configured for some user, you likely don't have them for the ANONYMOUS user. This also doesn't seem like anything to do with Strimzi upgrade, just with the client and where it connects.
-",Strimzi
-"We are running strimzi with kafka in a openshift cluster.
-We have multiple topics, all with the same retention.ms settings: 259200000 which is 72 hours.
-We observed that kafka disk space have been decreasing over time.
-To check where the space is used, we exec'ed into one of the kafka pods, and ran:
-du -sh /var/lib/kafka/data/kafka-log0/* | sort -hr
-This produced the following output
-bash-4.4$ du -sh /var/lib/kafka/data/kafka-log0/* | sort -hr
-149G    /var/lib/kafka/data/kafka-log0/mytopic1-0
-29G     /var/lib/kafka/data/kafka-log0/mytopic2-0
-3.6G    /var/lib/kafka/data/kafka-log0/mytopic3-0
-681M    /var/lib/kafka/data/kafka-log0/mytopic4-security-0
-
-Checking /var/lib/kafka/data/kafka-log0/mytopic1-0 revealed that there is data as far back as the 28th of March, which is over a month old.
-This is with retention set to 72 hours.
-total 155550304
--rw-rw-r--. 1 1000760000 1000760000       6328 Mar 28 10:17 00000000001211051718.index
--rw-rw-r--. 1 1000760000 1000760000   12339043 Mar 28 10:17 00000000001211051718.log
--rw-rw-r--. 1 1000760000 1000760000       1164 Mar 28 10:17 00000000001211051718.timeindex
-
-For the other topics that have the same retention settings, the data in the folders is in line with the retention settings.
-Having checked the oldest file earlier this morning, no change has been made to the old files. So it seems the removal of old files is not taking place for this specific topic.
-Kubernetes: OpenShift
-Kafka image: kafka:0.35.1-kafka-3.4.0
-Kafka version: 3.4.0
-Strimzi version: 0.35.1
-
-KafkaTopic - the one that has the issue
-apiVersion: kafka.strimzi.io/v1beta2
-kind: KafkaTopic
-metadata:
-  annotations:
-  labels:
-    app.kubernetes.io/instance: kafka
-    strimzi.io/cluster: kafka
-  name: mytopic1
-  namespace: kafka-ns
-  resourceVersion: ""162114882""
-  uid: 266caad6-c73d-4a23-9913-6c9a64f505ca
-spec:
-  config:
-    retention.ms: 259200000
-    segment.bytes: 1073741824
-  partitions: 1
-  replicas: 3
-status:
-  conditions:
-  - lastTransitionTime: ""2023-10-27T09:03:58.698836682Z""
-    status: ""True""
-    type: Ready
-  observedGeneration: 2
-  topicName: mytopic1
-
-Another kafka topic without issues
-apiVersion: kafka.strimzi.io/v1beta2
-kind: KafkaTopic
-metadata:
-  labels:
-    app.kubernetes.io/instance: kafka
-    strimzi.io/cluster: kafka
-  name: mytopic2
-  namespace: kafka-ns
-  resourceVersion: ""162114874""
-  uid: 871b5fab-7996-4674-a0b4-d3476cbe9c6c
-spec:
-  config:
-    retention.ms: 259200000
-    segment.bytes: 1073741824
-  partitions: 1
-  replicas: 3
-status:
-  conditions:
-  - lastTransitionTime: ""2023-10-27T09:03:58.539808920Z""
-    status: ""True""
-    type: Ready
-  observedGeneration: 2
-  topicName: mytopic2
-
-Any ideas on where to check to try to get to the root cause?
-","1. The reason for this was that the clients sending messages had an incorrect timestamp, set in the future, 2024-12-06.
-Kafka used this client timestamp as the timestamp for all messages, so the segments only become eligible for deletion 3 days after 2024-12-06.
-We also added retention.bytes as a mitigation until the people managing the client can address the issue.
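-For reference, a sketch of that mitigation in the KafkaTopic spec (the 100 GiB value is only an example; pick a limit that fits your disks):
-spec:
-  config:
-    retention.ms: 259200000
-    retention.bytes: 107374182400
-    segment.bytes: 1073741824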
-",Strimzi
-"I'm using Strimzi, Kafka, Kafka Connect and a custom connector plugin, following this docs.
-The deploy works fine, Kafka Connect is working, I can consume its RESTFUL API.
-But the connector is not created. This is the error message:
-
-Failed to find any class that implements Connector and which name
-matches org.company.MySourceConnector
-
-I know the cause: it doesn't find the plugin (a jar file). But if I exec into the kafka-connect pod, I can see the jar file in the right (I suppose) place: /opt/kafka/plugins/my-source-connector/my-source-connector.jar.
-Furthermore, I run cat /tmp/strimzi-connect.properties and I see the plugin path: plugin.path=/opt/kafka/plugins/. (the file is created by strimzi during deploy)
-apiVersion: kafka.strimzi.io/v1beta2
-kind: KafkaConnect
-metadata:
-  name: kafka-connect
-  annotations:
-    strimzi.io/use-connector-resources: ""true""
-spec:
-  replicas: 1
-  bootstrapServers: kafka-kafka-bootstrap:9092
-  image: ""{{ .Values.image.repository }}:{{ .Values.image.tag }}""
-  config:
-    group.id: connect-cluster
-    ...
-
-
-apiVersion: kafka.strimzi.io/v1beta2
-kind: KafkaConnector
-metadata:
-  name: my-connector
-  labels:
-    strimzi.io/cluster: kafka-connect
-spec:
-  class: org.company.MySourceConnector
-  tasksMax: 1
-  config:
-    topic: my-topic
-    name: my-connector
-
-How do I configure Strimzi or Kafka Connect to find my plugin?
-I exhausted all my resources. If someone could give some light on this, I would really appreciate it.
-","1. I found out that the jar file is corrupted.
-
-2. In my case, the problem was that the plugin JAR was owned by root:root, and only had user rw permission.
-[kafka@bf1f56801cfd plugins]$ ls -al 
-total 384
-drwxr-xr-x 2 kafka root   4096 Apr  8 22:52 .
-drwxr-xr-x 1 root  root   4096 Apr  8 22:52 ..
--rw------- 1 root  root 385010 Feb  7 16:10 mongo-kafka-connect-1.11.2.jar
-
-But the KafkaConnect container runs as kafka:root
-[kafka@bf1f56801cfd plugins]$ whoami
-kafka
-[kafka@bf1f56801cfd plugins]$ id -gn
-root
-
-So I added a chmod to my Containerfile, which fixed it.
-ARG STRIMZI_HELM_VERSION='0.33.0'
-ARG KAFKA_VERSION='3.2.0'
-ARG MONGO_CONNECTOR_VERSION='1.11.2'
-
-FROM ""quay.io/strimzi/kafka:${STRIMZI_HELM_VERSION}-kafka-${KAFKA_VERSION}""
-ARG MONGO_CONNECTOR_VERSION
-ADD --chown=kafka:root ""https://repo1.maven.org/maven2/org/mongodb/kafka/mongo-kafka-connect/${MONGO_CONNECTOR_VERSION}/mongo-kafka-connect-${MONGO_CONNECTOR_VERSION}-all.jar"" /opt/kafka/plugins/
-
-
-",Strimzi
-"I have setup my app-config.yaml to use environment variable substitution (EVS) like bellow:
-auth:
-  environment: development
-  providers:
-    gitlab:
-      development:
-        clientId: ${AUTH_GITLAB_CLIENT_ID}
-        clientSecret: ${AUTH_GITLAB_CLIENT_SECRET}
-
-And this works fine when deployed through my CI/CD pipeline, but for local development I have not found a good way to inject environment variables.
-I do have an app-config.local.yaml, but I would like to avoid essentially having two copies of app-config.yaml, one with hardcoded values and the other with the EVS.
-What is the best way to load environment variables so that app-config.yaml can read them.
-This is a very similar question to How to edit environment variables in Backstage.io? But the question asker never gave an example of how they are actually adding it into their app-config.local.yaml file
-","1. tl;dr: I used a .env.yarn file
-https://stackoverflow.com/a/77950324/6010125
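-Assuming the file is a standard dotenv-style KEY=VALUE file (the values here are placeholders), it would just contain the variables referenced in app-config.yaml:
-# .env.yarn
-AUTH_GITLAB_CLIENT_ID=my-client-id
-AUTH_GITLAB_CLIENT_SECRET=my-client-secret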
-
-2. The environment variable substitution syntax can be useful for supplying local default values:
-app:
-  baseUrl: https://${HOST:-localhost:3000}
-
-Unlike some other techniques, this works well with backstage-cli config:print
-",Backstage
-"I'm trying to add backstage to an organisation - I was building something similar myself - so I have a bunch of metadata on hundreds of projects already in my own database.
-I want to bulk add them to backstage - had a look at the catalog/entities API endpoint, and I can read the list of endpoints - but I can't figure out how to write one
-Is the any way to programmatically create a component?
-","1. Yes, you are looking for custom processor:
-https://backstage.io/docs/features/software-catalog/external-integrations/#custom-processors
-It is possible to process data from APIs or DBs.
-
-2. I would rather go to https://backstage.io/docs/features/software-catalog/external-integrations#custom-entity-providers
-EntityProvider is the main class to ingest data.
-Processor is just an extra optional step to add extra data, or emit extra entities.
-
-3. It turns out it was really easy: you define all your entities in a YAML file (either one or multiple) and just add it as a ""location"" by posting to the /locations endpoint as documented here:
-https://backstage.io/docs/features/software-catalog/software-catalog-api/
-What's interesting is that if that location is in GitHub, Backstage will notice changes from then on.
-It was not obvious to me at all that to create a component you have to call the locations endpoint, but now I get that ""locations"" are basically ""files that Backstage knows about"", and it wants stuff to live in files because it's designed to let you click edit and maintain something.
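-For example, here is a hedged sketch of registering one such file via the catalog API (port 7007 is the default backend port and the target URL is a placeholder; depending on your auth setup you may also need an Authorization header):
-curl -X POST http://localhost:7007/api/catalog/locations \
-  -H 'Content-Type: application/json' \
-  -d '{""type"": ""url"", ""target"": ""https://github.com/my-org/my-repo/blob/main/catalog-info.yaml""}'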
-",Backstage
-"Please let me know how to create the dapr custom middleware in c#
-how to register it.
-And how to configure it in a microservice to use it.
-Please let me know in detailed steps as I have been struggling a lot on this.
-Please pass the sample application from top to bottom steps with yaml configuration file and component.
-Not even sure how to register it and how can i use it in other microservice.
-Created the middle ware like this
-public class DaprCustomAuthMiddleware : IMiddleware
-{
-    public async Task InvokeAsync(HttpContext context, RequestDelegate next)
-    {
-        Console.ForegroundColor = ConsoleColor.Green;
-        Console.WriteLine($""******************Incoming request: {context.Request.Path}**********************************"");
-
-        // Call the next middleware in the pipeline
-        await next(context);
-
-        // Log outgoing response
-        Console.WriteLine($""\""******************Outgoing response: {context.Response.StatusCode}\""******************"");
-    }
-}
-
-In Startup I did this:
-public class Startup
-{
-  
-    public IConfiguration Configuration { get; }
-
-    /// <summary>
-    /// Configures Services.
-    /// </summary>
-    /// <param name=""services"">Service Collection.</param>
-    public void ConfigureServices(IServiceCollection services)
-    {
-        //services.AddDaprClient();
-        services.AddTransient<DaprCustomAuthMiddleware>();
-        services.AddSingleton(new JsonSerializerOptions()
-        {
-            PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
-            PropertyNameCaseInsensitive = true,
-        });
-    }
-
-    /// <summary>
-    /// Configures Application Builder and WebHost environment.
-    /// </summary>
-    /// <param name=""app"">Application builder.</param>
-    /// <param name=""env"">Webhost environment.</param>
-    /// <param name=""serializerOptions"">Options for JSON serialization.</param>
-    public void Configure(IApplicationBuilder app, IWebHostEnvironment env, JsonSerializerOptions serializerOptions,
-        ILogger<Startup> logger)
-    {
-        if (env.IsDevelopment())
-        {
-            app.UseDeveloperExceptionPage();
-        }
-
-        app.UseRouting();
-        app.UseMiddleware<DaprCustomAuthMiddleware>();
-        app.UseCloudEvents();
-
-        app.UseEndpoints(endpoints =>
-        {
-          //  endpoints.MapSubscribeHandler();
-            endpoints.MapGet(""dapr/subscribe"", async context =>
-            {
-                // Handle subscription request
-                // For example: return the subscription response
-                await context.Response.WriteAsync(""{\""topics\"": [\""topic1\"", \""topic2\""]}"");
-            });
-
-            endpoints.MapGet(""dapr/config"", async context =>
-            {
-                // Handle subscription request
-                // For example: return the subscription response
-                await context.Response.WriteAsync(""Config reached successfully"");
-            });
-
-            endpoints.MapGet(""/"", async context =>
-            {
-                // Handle subscription request
-                // For example: return the subscription response
-                await context.Response.WriteAsync(""Get reached successfully"");
-            });
-        });
-
-    }
-}
-
-Now I want to register this middleware with Dapr so that I can use it across microservice-to-microservice communication.
-I tried something like this:
-apiVersion: dapr.io/v1alpha1
-kind: Component
-metadata:
-  name: DaprConsoleApp
-spec:
-  type: middleware.http
-  version: v1
-  metadata:
-    - name: not sure what to enter 
-      value: not sure what to enter
-
-
-So I need help with end-to-end steps.
-","1. Below are the steps to accomplish this, including the C# code for the middleware, the ASP.NET Core startup configuration, and the Dapr YAML configuration for the middleware component.
-
-I used this Git repo: Microservices with Dapr in DotNet.
-
-Define Custom Middleware:
-
-public class DaprCustomAuthMiddleware
-{
-    private readonly RequestDelegate _next;
-
-    public DaprCustomAuthMiddleware(RequestDelegate next)
-    {
-        _next = next;
-    }
-
-    public async Task InvokeAsync(HttpContext context)
-    {
-        Console.ForegroundColor = ConsoleColor.Green;
-        Console.WriteLine($""******************Incoming request: {context.Request.Path}**********************************"");
-
-        // Call the next middleware in the pipeline
-        await _next(context);
-
-        // Log outgoing response
-        Console.WriteLine($""******************Outgoing response: {context.Response.StatusCode}******************"");
-    }
-}
-
-Custom Middleware in Startup:
-
-public void Configure(IApplicationBuilder app)
-{
-    // Other middleware registrations...
-
-    app.UseMiddleware<DaprCustomAuthMiddleware>();
-
-    // Other middleware registrations...
-}
-
-I followed this link to configure middleware components in Dapr:
-
-Dapr Middleware Configuration:
-apiVersion: dapr.io/v1alpha1
-kind: Configuration
-metadata:
-  name: dapr-custom-auth-middleware
-spec:
-  httpPipeline:
-    handlers:
-      - name: custom-auth
-        type: middleware.http.custom
-
-(or)
-apiVersion: dapr.io/v1alpha1
-kind: Component
-metadata:
-  name: dapr-custom-auth-middleware
-spec:
-  type: middleware.http
-  version: v1
-  metadata:
-    - name: handlerType
-      value: middleware
-
-Output: the console shows the middleware's incoming-request and outgoing-response log lines (screenshot omitted).
-
-For the Azure sample middleware-oauth-microsoft, follow this Git repo.
-
-",Dapr
-"I have uploaded my 2 Builds on Steam, one for macOS and one for Windows, but the Depots section looks like this:
-
-Here's my Builds settings:
-
-And here's my Depots settings:
-
-I have no idea what to do to complete my Depot checklist in my Dashboard. I've read the official documentation but found no help in it.
-","1. This is very late, but it might help others.
-From your app dashboard, go to:
-All Associated Packages, DLC, Demos and Tools.
-
-Under:
-Promotional or special-use packages
-
-select your package title and add/remove depots. Also change package name if needed.
-
-2. You've probably already solved this by this point, but here's my answer for anyone facing this in the future.
-To ""fix"" this issue, you need to understand something that Steam's tooltip for that entry does not tell you: the depots in your Store and Devcomp packages must match exactly, as in, they need to have the same depots.
-This means that you either need to add the development depots to your Store package or, like I did, (temporarily) remove the development depots from your Devcomp package while your build is under review.
-This will give you the green checkmark that you're looking for.
-
-3. I think you should use separate depots for each OS build's binary executables, place common content files in a third shared content depot, and include (add) that depot in both builds.
-",Depot
-"Put simply:
-There are files in my EC2 instance that I need to remove. They were added there by p4 but p4 doesn't seem to be aware of them on the depot. As a result, I'm not sure how to reconcile the local work against the depot's content.
-How it happened:
-
-I used Helix Core's template in AWS EC2 to set up a fresh server, realizing I needed to get p4 up and running for a project that had outgrown local/manual backups.
-Once the depot was up and running, I connected to it via p4 and created a workspace.
-Since this was a preexisting project with many files, I needed to do my initial submit to the depot after reconciling local work.
-This understandably took a long time and I continued to use the computer for unrelated work in the meantime. During this time, the computer crashed. (This unrelated work involved troubleshooting an unrelated issue that ultimately caused a blue-screen crash when I unplugged a peripheral.)
-This interrupted the submit abruptly, instead of a proper cancelation.
-When I was back online, the changelist was still available in p4. I reconciled to be safe, nothing was found, and I went to restart the long submit.
-When I tried to submit, I was warned there wasn't adequate space on the drive anymore. The drive was double the necessary space when I made it, so I used EC2's Session Manager to connect and look at the disc space. Sure enough, there was a huge chunk of storage being consumed, as if many of the files I was submitting had actually uploaded and were now taking up space.
-But p4 doesn't see this and reconciling isn't helping.
-
-My skill level:
-While I've worked in games for years and am familiar with using p4 for a project, most of my experience is in the design/art side of things, with engineering being new to me. And even then, most of my engineering experience is in game logic, not server infrastructure. I had to follow tutorials to get set up in AWS.
-My thoughts from here:
-
-Using ""p4 sizes -s //depot/directory/..."", I'm told there are 0 files and 0 bytes. But again, if I check on EC2 itself using Session Manager, I can see well over 100gb in the depot that were not there prior to the submit effort.
-If I can't figure out a better approach, I may have to just grab the first snapshot that was taken after setup, before the attempt, and figure out how to restore the depot to that state. But the tutorials and info I've seen about how to do this seems to be more about completely replacing the instance/volume with the snapshot, rather than resetting to it, and that makes me nervous about breaking something from the Helix Core template I got and breaking how p4 is talking to the server. Again, this is all very new to me and there's so many common and shared terms that googling is only getting me so far.
-
-","1. As Samwise mentioned, p4 storage is the safest way to fix that.
-A couple extra pieces of info that might help:
-
-On the AWS template, the depot files are stored in /p4/1/depots/DEPOTNAME so you can go in there to check on those orphaned files.
-p4 storage -d by default only deletes orphaned files that were scanned at least 24 hours ago (86400 seconds) but this can be overridden (see below)
-
-Check th efp4 storage options here: https://www.perforce.com/manuals/cmdref/Content/CmdRef/p4_storage.html#Options
-Try these steps and see if it helps:
-
-Start the scan: p4 storage -l start //DEPOTNAME/...
-Check the status until it's done: p4 storage -l status //DEPOTNAME/...
-Run -d with the -D flag to change the 24-hour limit: p4 storage -d -D 60 //DEPOTNAME/...
--D sets the time limit since scanning to 60 seconds. You could use a different number of seconds if needed.
-This displays the files that would be deleted but doesn't delete them until you run again with -y.
-Finally, run with -y: p4 storage -d -D 60 -y //DEPOTNAME/...
-
-You should see an output with all the files being deleted and a message confirming that they were deleted.
-",Depot
-"I have an existing workspace that I have been using and everything has been working as expected.
-Now I am beginning a new project and would like to change my workspace root so that the files will be located in a different directory, for example, C:/NewProject.
-I have made the /NewProject folder and added files to it, which I can see in my workspace view.
-When I try to Mark for Add... I get a warning c:\NewProject\FileName - file(s) not in client view
-How can I add these files to my depot?  Or to the client view so that I may successfully add them?
-","1. You can edit your client view through Connection -> Edit Current Workspace in the View tab (or something similar; I'm on a p4 client from 2011).  If you're working in a relatively small depot, you might as well just include //depot/... in your view.
-
-2. Usually after installing Perforce to a new computer, when you try to sync Depot files the system gives sync error message ""File(s) not in client view"". Here is the solution:
-
-Go to Connection > Edit Current Workspace > Expand the Workspace Mappings field to display the Depot Tree.
-Right-click on the name of the Depot Tree that you want to ""Include"" in mappings. 
-Click Apply, click Get Latest to sync the files.
-
-
-3. After struggling with it for hours, I finally sorted it out.
-It's very simple: just add your folder name to the mapping.
-In my situation, the folder name is not the same as the default ""depot"" name
-(auto-generated for you in the Workspace Mappings):
-//depot/... //alice_1545/depot/...
-
-So all you need to do is add your folder name into the Workspace Mappings.
-//depot/... //alice_1545/depot/...
-//depot/... //alice_1545/{your folder name}/...
-
-",Depot
-"I am new to AnyLogic and I don't know how to solve the problem. Please help me.
-
-","1. The error simply means that there is no variable called depot in main.
-I would suggest using the code auto-complete feature, alt+space on Mac and ctrl+space on Windows.
-Then you will be able to see whether the variable you are trying to access is available.
-
-The first question would be: do you have a variable or population on Main called depots?
-Or maybe you called it something else. It is hard to know without seeing your Main, or having the Main agent expanded fully in the project explorer window (see my screenshot).
-If you do, it will be available for you to use in the vehicle agent.
-",Depot
-"I have many microservices running locally in a kubernetes/devspace environment.
-Based on this and this, IntelliJ and PhpStorm don't support running PHPUnit tests from within devspace or kubectl.
-So in the meantime, I am just running a bash script manually via Run | Edit Configurations...:
-#!/usr/bin/env bash
-
-devspace enter -c foo php artisan config:clear;
-
-kubectl exec -it deploy/foo -- \
-  vendor/bin/phpunit \
-  tests/MyExamples/ExampleTest.php -v \
-  --log-junit reports/unitreport0.xml \
-  -d memory_limit=-1
-
-Is there a better way to do this using devspace?  I'd like to be able to at least integrate it with the test runner instead of running a script.
-If not, is there any way to extract the current test name or class in IntelliJ/PhpStorm so that I can pass it into the script as a parameter?
-Also, is there any way to make the lines clickable in IntelliJ/PhpStorm?
-/bar/foo/tests/MyExamples/ExampleTest.php:123
-","1. I think the best way to do this in IntelliJ/PHPStorm is to use sshd.  This is well supported in devspace with a tiny amount of configuration.
-First, add this line to the devspace.yaml file to set up sshd on your instance:
-ssh: {}
-
-Then you should be able to log in via ssh.  Note that you will have appended some lines to your .ssh/config file, so you may want to verify these settings:
-ssh foo.bar.devspace
-
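-(For reference, the block devspace appends to ~/.ssh/config usually looks roughly like the sketch below. The port, user, and key path match the values mentioned further down; HostName localhost is an assumption, since devspace forwards the SSH port locally, so double-check it against what was actually written to your config.)
-Host foo.bar.devspace
-  HostName localhost
-  Port 10659
-  User devspace
-  IdentityFile ~/.devspace/ssh/id_devspace_ecdsa
-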
-Then in IntelliJ, go to File → Settings → Tools → SSH Configurations, add a new entry:
-Host: foo.bar.devspace
-Username: devspace
-Port: 10659 (NOTE: confirm this value in your ~/.ssh/config file.  The default of 22 is probably incorrect)
-Authentication Type: Key pair
-Private key file: /Users/<myusername>/.devspace/ssh/id_devspace_ecdsa (replacing myusername with your own user name)
-ensure that Parse config file ~/.ssh/config is checked
-Then, in File → Remote Development, click on SSH Connection → New Connection, then choose the connection we just created from the Connection dropdown (all the other fields should be greyed out), and click Check Connection and Continue.  (If this doesn’t work, your shell may be set incorrectly---make sure the $SHELL is not set to something like /sbin/nologin inside devspace).
-In the ""Choose IDE and Project"" dialogue, choose an IDE and version and set the Project directory to some value---it doesn't look like you can set it to be empty, so maybe your home directory will work.  You may get some confirmation dialogues from Docker Desktop as it installs IntelliJ/PhpStorm on the devspace client.
-Run IntelliJ---it will connect to your devspace environment.  You can load your project from the container. It will likely prompt you to install the PHP plugin and restart.
-Now you should be able to Run a unit test---click on the Green arrow beside a test to make sure it works.  You should also be able to Debug a unit test with a breakpoint.
-",DevSpace
-"Simple: I want to use an online environment to work on some simple PHP coding, such that I can work from anywhere with my work computers (no admin rights) and maybe even with an iPad.
-I thought it would be quite straightforward to set this up on Gitpod. However, it all seems very complex for a beginner like me. I am sure it's not that complicated, I am just a little lost. Thanks for the help! Any blog articles or step-by-step guides are greatly appreciated.
-","1. For just PHP:
-
-https://github.com/gitpod-io/apache-example
-
-Including XDebug:
-
-https://github.com/Eetezadi/Gitpod-Apache-PHP-Xdebug
-
-Wordpress Developing Environment:
-
-https://github.com/Eetezadi/Gitpod-Wordpress-Development
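-
-If you just want something minimal to start from, a .gitpod.yml along these lines is usually enough (a sketch: the port and PHP's built-in server are assumptions, adjust them to your project):
-tasks:
-  - init: composer install || true    # skip or adjust if the project has no composer.json
-    command: php -S 0.0.0.0:8000
-ports:
-  - port: 8000
-    onOpen: open-preview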
-
-",Gitpod
-"I am in the process of working on a project with a React frontend, a Django backend, developing in gitpod.  I believe that gitpod may be complicating this more than I'd expect.
-Currently, I can confirm that I am able to run python manage.py runserver, then browse the Django Rest Framework via the api root.
-I also have a Create-React_app frontend that is able to make requests to another API, but requests to my API returns only the error:
-""Access to fetch at 'https://8000-anthonydeva-sylvanlibra-8b5cu5lyhdl.ws-us107.gitpod.io/lineitem/' from origin 'https://3000-anthonydeva-sylvanlibra-8b5cu5lyhdl.ws-us107.gitpod.io' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.""
-I know that django-cors-headers is the recommended solution for this based on what I have seen so far, but even after installing and setting this up in settings.py, the same error is showing up and my django server doesn't seem to be showing any updates for the failed requests.
-My apologies if any needed information is missing.  I'm pretty new to this and having a bit of trouble identifying what is unnecessary and what is helpful.  I'm happy to share any additional information needed.
-INSTALLED_APPS = [
-    'corsheaders',
-    # 'django_filters',
-    'library.apps.LibraryConfig',
-    'django.contrib.admin',
-    'django.contrib.auth',
-    'django.contrib.contenttypes',
-    'django.contrib.sessions',
-    'django.contrib.messages',
-    'django.contrib.staticfiles',
-    'rest_framework',
-]
-
-MIDDLEWARE = [
-    ""corsheaders.middleware.CorsMiddleware"",
-    'django.middleware.security.SecurityMiddleware',
-    'django.contrib.sessions.middleware.SessionMiddleware',
-    ""django.middleware.common.CommonMiddleware"",
-    'django.middleware.csrf.CsrfViewMiddleware',
-    'django.contrib.auth.middleware.AuthenticationMiddleware',
-    'django.contrib.messages.middleware.MessageMiddleware',
-    'django.middleware.clickjacking.XFrameOptionsMiddleware',
-]
-
-# CORS_ALLOWED_ORIGINS = [
-#     'https://3000-anthonydeva-sylvanlibra-37g1pm5kjx9.ws-us107.gitpod.io',
-# ]
-
-CORS_ALLOW_ALL_ORIGINS = True 
-
-CORS_ORIGIN_ALLOW_ALL = True #I have seen both of these, so I tried both
-
-# CORS_ORIGIN_WHITELIST = [
-#     'https://3000-anthonydeva-sylvanlibra-8b5cu5lyhdl.ws-us107.gitpod.io/'
-# ]
-
-ALLOWED_HOSTS = [ '*' ]
-
-CORS_ALLOW_HEADERS = [ '*' ]
-
-CSRF_TRUSTED_ORIGINS = [ 'https://***.gitpod.io' ] 
-
-ROOT_URLCONF = 'sylvan.urls'
-
-CORS_ALLOW_CREDENTIALS = False
-
-
-TEMPLATES = [
-    {
-        'BACKEND': 'django.template.backends.django.DjangoTemplates',
-        'DIRS': [],
-        'APP_DIRS': True,
-        'OPTIONS': {
-            'context_processors': [
-                'django.template.context_processors.debug',
-                'django.template.context_processors.request',
-                'django.contrib.auth.context_processors.auth',
-                'django.contrib.messages.context_processors.messages',
-            ],
-        },
-    },
-]
-
-WSGI_APPLICATION = 'sylvan.wsgi.application'
-
-
-","1. Try this in your settings.py
-CORS_ALLOW_ALL_ORIGINS = False
-CORS_ALLOWED_ORIGINS = [
-      ""http://localhost:3000"",
-]
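-
-Since your frontend here is served from a Gitpod URL rather than localhost, the allow-list above won't match its origin. One option (the pattern below is an assumption about your workspace URLs, so adjust it) is django-cors-headers' regex setting:
-CORS_ALLOWED_ORIGIN_REGEXES = [
-    r""^https://3000-.*\.gitpod\.io$"",
-]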
-
-",Gitpod
-"I'm trying to get a Spring Boot app running on Gitpod that I can log in to with OpenID Connect (OIDC). I'm using @oktadev/auth0-spring-boot-passkeys-demo from GitHub. Everything works fine when I run it locally.
-I have it working so it redirects back to my app after logging in to Auth0. However, the code-for-token exchange after that fails. The error in my Auth0 Dashboard says ""Unauthorized"":
-{
-  ""date"": ""2024-01-12T19:43:09.157Z"",
-  ""type"": ""feacft"",
-  ""description"": ""Unauthorized"",
-  ""connection_id"": """",
-  ""client_id"": null,
-  ""client_name"": null,
-  ""ip"": ""34.105.96.106"",
-  ""user_agent"": ""Other 0.0.0 / Linux 6.1.66"",
-  ""details"": {
-    ""code"": ""******************************************N29""
-  },
-  ""hostname"": ""dev-06bzs1cu.us.auth0.com"",
-  ""user_id"": """",
-  ""user_name"": """",
-  ""auth0_client"": {
-    ""name"": ""okta-spring-security"",
-    ""env"": {
-      ""spring"": ""6.1.2"",
-      ""java"": ""21.0.1"",
-      ""spring-boot"": ""3.2.1"",
-      ""spring-security"": ""6.2.1""
-    },
-    ""version"": ""3.0.6""
-  },
-  ""log_id"": ""90020240112194309196948000000000000001223372061311523769"",
-  ""_id"": ""90020240112194309196948000000000000001223372061311523769"",
-  ""isMobile"": false,
-  ""id"": ""90020240112194309196948000000000000001223372061311523769""
-}
-
-In my browser, it says:
-
-[invalid_token_response] An error occurred while attempting to retrieve the OAuth 2.0 Access Token Response: 401 Unauthorized: [no body]
-
-
-I enabled trace logging for Spring Security in application.properties:
-logging.level.org.springframework.security=trace
-
-It shows the following error:
-2024-01-13T18:57:37.442Z DEBUG 3391 --- [nio-8080-exec-7] o.s.security.web.FilterChainProxy        : Securing GET /oauth2/authorization/okta
-2024-01-13T18:57:37.442Z TRACE 3391 --- [nio-8080-exec-7] o.s.security.web.FilterChainProxy        : Invoking DisableEncodeUrlFilter (1/16)
-2024-01-13T18:57:37.442Z TRACE 3391 --- [nio-8080-exec-7] o.s.security.web.FilterChainProxy        : Invoking WebAsyncManagerIntegrationFilter (2/16)
-2024-01-13T18:57:37.442Z TRACE 3391 --- [nio-8080-exec-7] o.s.security.web.FilterChainProxy        : Invoking SecurityContextHolderFilter (3/16)
-2024-01-13T18:57:37.442Z TRACE 3391 --- [nio-8080-exec-7] o.s.security.web.FilterChainProxy        : Invoking HeaderWriterFilter (4/16)
-2024-01-13T18:57:37.442Z TRACE 3391 --- [nio-8080-exec-7] o.s.security.web.FilterChainProxy        : Invoking CorsFilter (5/16)
-2024-01-13T18:57:37.442Z TRACE 3391 --- [nio-8080-exec-7] o.s.security.web.FilterChainProxy        : Invoking CsrfFilter (6/16)
-2024-01-13T18:57:37.442Z TRACE 3391 --- [nio-8080-exec-7] o.s.security.web.csrf.CsrfFilter         : Did not protect against CSRF since request did not match CsrfNotRequired [TRACE, HEAD, GET, OPTIONS]
-2024-01-13T18:57:37.442Z TRACE 3391 --- [nio-8080-exec-7] o.s.security.web.FilterChainProxy        : Invoking LogoutFilter (7/16)
-2024-01-13T18:57:37.442Z TRACE 3391 --- [nio-8080-exec-7] o.s.s.w.a.logout.LogoutFilter            : Did not match request to Ant [pattern='/logout']
-2024-01-13T18:57:37.442Z TRACE 3391 --- [nio-8080-exec-7] o.s.security.web.FilterChainProxy        : Invoking OAuth2AuthorizationRequestRedirectFilter (8/16)
-2024-01-13T18:57:37.443Z DEBUG 3391 --- [nio-8080-exec-7] o.s.s.web.DefaultRedirectStrategy        : Redirecting to https://dev-06bzs1cu.us.auth0.com/authorize?response_type=code&client_id=r6jm3HVTz12YmxRCdZ1rWTZNQST7gEvz&scope=profile%20email%20openid&state=x86P_R-kX3LczSA-n_gDDgY8sFPOijhJHb6QMsf8E5E%3D&redirect_uri=http://8080-oktadev-auth0springboot-j691oeruapd.ws-us107.gitpod.io/login/oauth2/code/okta&nonce=t3KqIkXDRcY8RUDab4GtMSN-EZJrqyJJOJinXhyhAk8
-2024-01-13T18:57:37.443Z TRACE 3391 --- [nio-8080-exec-7] o.s.s.w.header.writers.HstsHeaderWriter  : Not injecting HSTS header since it did not match request to [Is Secure]
-2024-01-13T18:57:57.562Z TRACE 3391 --- [nio-8080-exec-8] o.s.security.web.FilterChainProxy        : Trying to match request against DefaultSecurityFilterChain [RequestMatcher=any request, Filters=[org.springframework.security.web.session.DisableEncodeUrlFilter@2e4eda17, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@7b5021d1, org.springframework.security.web.context.SecurityContextHolderFilter@6fbf5db2, org.springframework.security.web.header.HeaderWriterFilter@50cdfafa, org.springframework.web.filter.CorsFilter@6befbb12, org.springframework.security.web.csrf.CsrfFilter@794240e2, org.springframework.security.web.authentication.logout.LogoutFilter@37d3e140, org.springframework.security.oauth2.client.web.OAuth2AuthorizationRequestRedirectFilter@2b441e56, org.springframework.security.oauth2.client.web.OAuth2LoginAuthenticationFilter@4662752a, org.springframework.security.web.authentication.ui.DefaultLoginPageGeneratingFilter@3ab595c8, org.springframework.security.web.authentication.ui.DefaultLogoutPageGeneratingFilter@21d9cd04, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@57cabdc3, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@75bd28d, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@799f354a, org.springframework.security.web.access.ExceptionTranslationFilter@70d4f672, org.springframework.security.web.access.intercept.AuthorizationFilter@760f1081]] (1/1)
-2024-01-13T18:57:57.562Z DEBUG 3391 --- [nio-8080-exec-8] o.s.security.web.FilterChainProxy        : Securing GET /login/oauth2/code/okta?code=8t5psmw2cbb3OMfxTmyEwt5L343UvUGCQOgoEVP6h6FLu&state=x86P_R-kX3LczSA-n_gDDgY8sFPOijhJHb6QMsf8E5E%3D
-2024-01-13T18:57:57.562Z TRACE 3391 --- [nio-8080-exec-8] o.s.security.web.FilterChainProxy        : Invoking DisableEncodeUrlFilter (1/16)
-2024-01-13T18:57:57.562Z TRACE 3391 --- [nio-8080-exec-8] o.s.security.web.FilterChainProxy        : Invoking WebAsyncManagerIntegrationFilter (2/16)
-2024-01-13T18:57:57.562Z TRACE 3391 --- [nio-8080-exec-8] o.s.security.web.FilterChainProxy        : Invoking SecurityContextHolderFilter (3/16)
-2024-01-13T18:57:57.562Z TRACE 3391 --- [nio-8080-exec-8] o.s.security.web.FilterChainProxy        : Invoking HeaderWriterFilter (4/16)
-2024-01-13T18:57:57.562Z TRACE 3391 --- [nio-8080-exec-8] o.s.security.web.FilterChainProxy        : Invoking CorsFilter (5/16)
-2024-01-13T18:57:57.562Z TRACE 3391 --- [nio-8080-exec-8] o.s.security.web.FilterChainProxy        : Invoking CsrfFilter (6/16)
-2024-01-13T18:57:57.562Z TRACE 3391 --- [nio-8080-exec-8] o.s.security.web.csrf.CsrfFilter         : Did not protect against CSRF since request did not match CsrfNotRequired [TRACE, HEAD, GET, OPTIONS]
-2024-01-13T18:57:57.562Z TRACE 3391 --- [nio-8080-exec-8] o.s.security.web.FilterChainProxy        : Invoking LogoutFilter (7/16)
-2024-01-13T18:57:57.562Z TRACE 3391 --- [nio-8080-exec-8] o.s.s.w.a.logout.LogoutFilter            : Did not match request to Ant [pattern='/logout']
-2024-01-13T18:57:57.562Z TRACE 3391 --- [nio-8080-exec-8] o.s.security.web.FilterChainProxy        : Invoking OAuth2AuthorizationRequestRedirectFilter (8/16)
-2024-01-13T18:57:57.563Z TRACE 3391 --- [nio-8080-exec-8] o.s.security.web.FilterChainProxy        : Invoking OAuth2LoginAuthenticationFilter (9/16)
-2024-01-13T18:57:57.563Z TRACE 3391 --- [nio-8080-exec-8] o.s.s.authentication.ProviderManager     : Authenticating request with OAuth2LoginAuthenticationProvider (1/3)
-2024-01-13T18:57:57.563Z TRACE 3391 --- [nio-8080-exec-8] o.s.s.authentication.ProviderManager     : Authenticating request with OidcAuthorizationCodeAuthenticationProvider (2/3)
-2024-01-13T18:57:57.815Z DEBUG 3391 --- [nio-8080-exec-8] .s.a.DefaultAuthenticationEventPublisher : No event was found for the exception org.springframework.security.oauth2.core.OAuth2AuthenticationException
-2024-01-13T18:57:57.815Z TRACE 3391 --- [nio-8080-exec-8] .s.o.c.w.OAuth2LoginAuthenticationFilter : Failed to process authentication request
-
-org.springframework.security.oauth2.core.OAuth2AuthenticationException: [invalid_token_response] An error occurred while attempting to retrieve the OAuth 2.0 Access Token Response: 401 Unauthorized: [no body]
-        at org.springframework.security.oauth2.client.oidc.authentication.OidcAuthorizationCodeAuthenticationProvider.getResponse(OidcAuthorizationCodeAuthenticationProvider.java:178) ~[spring-security-oauth2-client-6.2.1.jar:6.2.1]
-        at org.springframework.security.oauth2.client.oidc.authentication.OidcAuthorizationCodeAuthenticationProvider.authenticate(OidcAuthorizationCodeAuthenticationProvider.java:146) ~[spring-security-oauth2-client-6.2.1.jar:6.2.1]
-        at org.springframework.security.authentication.ProviderManager.authenticate(ProviderManager.java:182) ~[spring-security-core-6.2.1.jar:6.2.1]
-        at org.springframework.security.oauth2.client.web.OAuth2LoginAuthenticationFilter.attemptAuthentication(OAuth2LoginAuthenticationFilter.java:196) ~[spring-security-oauth2-client-6.2.1.jar:6.2.1]
-        at org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter.doFilter(AbstractAuthenticationProcessingFilter.java:231) ~[spring-security-web-6.2.1.jar:6.2.1]
-        at org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter.doFilter(AbstractAuthenticationProcessingFilter.java:221) ~[spring-security-web-6.2.1.jar:6.2.1]
-        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.2.1.jar:6.2.1]
-        at org.springframework.security.oauth2.client.web.OAuth2AuthorizationRequestRedirectFilter.doFilterInternal(OAuth2AuthorizationRequestRedirectFilter.java:181) ~[spring-security-oauth2-client-6.2.1.jar:6.2.1]
-        at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.1.2.jar:6.1.2]
-        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.2.1.jar:6.2.1]
-        at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:107) ~[spring-security-web-6.2.1.jar:6.2.1]
-        at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:93) ~[spring-security-web-6.2.1.jar:6.2.1]
-        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.2.1.jar:6.2.1]
-        at org.springframework.security.web.csrf.CsrfFilter.doFilterInternal(CsrfFilter.java:117) ~[spring-security-web-6.2.1.jar:6.2.1]
-        at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.1.2.jar:6.1.2]
-        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.2.1.jar:6.2.1]
-        at org.springframework.web.filter.CorsFilter.doFilterInternal(CorsFilter.java:91) ~[spring-web-6.1.2.jar:6.1.2]
-        at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.1.2.jar:6.1.2]
-        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.2.1.jar:6.2.1]
-        at org.springframework.security.web.header.HeaderWriterFilter.doHeadersAfter(HeaderWriterFilter.java:90) ~[spring-security-web-6.2.1.jar:6.2.1]
-        at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:75) ~[spring-security-web-6.2.1.jar:6.2.1]
-        at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.1.2.jar:6.1.2]
-        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.2.1.jar:6.2.1]
-        at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:82) ~[spring-security-web-6.2.1.jar:6.2.1]
-        at org.springframework.security.web.context.SecurityContextHolderFilter.doFilter(SecurityContextHolderFilter.java:69) ~[spring-security-web-6.2.1.jar:6.2.1]
-        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.2.1.jar:6.2.1]
-        at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:62) ~[spring-security-web-6.2.1.jar:6.2.1]
-        at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.1.2.jar:6.1.2]
-        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.2.1.jar:6.2.1]
-        at org.springframework.security.web.session.DisableEncodeUrlFilter.doFilterInternal(DisableEncodeUrlFilter.java:42) ~[spring-security-web-6.2.1.jar:6.2.1]
-        at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.1.2.jar:6.1.2]
-        at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:374) ~[spring-security-web-6.2.1.jar:6.2.1]
-        at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:233) ~[spring-security-web-6.2.1.jar:6.2.1]
-        at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:191) ~[spring-security-web-6.2.1.jar:6.2.1]
-        at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) ~[spring-web-6.1.2.jar:6.1.2]
-        at org.springframework.web.servlet.handler.HandlerMappingIntrospector.lambda$createCacheFilter$3(HandlerMappingIntrospector.java:195) ~[spring-webmvc-6.1.2.jar:6.1.2]
-        at org.springframework.web.filter.CompositeFilter$VirtualFilterChain.doFilter(CompositeFilter.java:113) ~[spring-web-6.1.2.jar:6.1.2]
-        at org.springframework.web.filter.CompositeFilter.doFilter(CompositeFilter.java:74) ~[spring-web-6.1.2.jar:6.1.2]
-        at org.springframework.security.config.annotation.web.configuration.WebMvcSecurityConfiguration$CompositeFilterChainProxy.doFilter(WebMvcSecurityConfiguration.java:225) ~[spring-security-config-6.2.1.jar:6.2.1]
-        at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:352) ~[spring-web-6.1.2.jar:6.1.2]
-        at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:268) ~[spring-web-6.1.2.jar:6.1.2]
-        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:174) ~[tomcat-embed-core-10.1.17.jar:10.1.17]
-        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:149) ~[tomcat-embed-core-10.1.17.jar:10.1.17]
-        at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) ~[spring-web-6.1.2.jar:6.1.2]
-        at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.1.2.jar:6.1.2]
-        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:174) ~[tomcat-embed-core-10.1.17.jar:10.1.17]
-        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:149) ~[tomcat-embed-core-10.1.17.jar:10.1.17]
-        at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) ~[spring-web-6.1.2.jar:6.1.2]
-        at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.1.2.jar:6.1.2]
-        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:174) ~[tomcat-embed-core-10.1.17.jar:10.1.17]
-        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:149) ~[tomcat-embed-core-10.1.17.jar:10.1.17]
-        at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) ~[spring-web-6.1.2.jar:6.1.2]
-        at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:116) ~[spring-web-6.1.2.jar:6.1.2]
-        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:174) ~[tomcat-embed-core-10.1.17.jar:10.1.17]
-        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:149) ~[tomcat-embed-core-10.1.17.jar:10.1.17]
-        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167) ~[tomcat-embed-core-10.1.17.jar:10.1.17]
-        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90) ~[tomcat-embed-core-10.1.17.jar:10.1.17]
-        at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:482) ~[tomcat-embed-core-10.1.17.jar:10.1.17]
-        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:115) ~[tomcat-embed-core-10.1.17.jar:10.1.17]
-        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93) ~[tomcat-embed-core-10.1.17.jar:10.1.17]
-        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) ~[tomcat-embed-core-10.1.17.jar:10.1.17]
-        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:340) ~[tomcat-embed-core-10.1.17.jar:10.1.17]
-        at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:391) ~[tomcat-embed-core-10.1.17.jar:10.1.17]
-        at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63) ~[tomcat-embed-core-10.1.17.jar:10.1.17]
-        at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:896) ~[tomcat-embed-core-10.1.17.jar:10.1.17]
-        at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1744) ~[tomcat-embed-core-10.1.17.jar:10.1.17]
-        at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:52) ~[tomcat-embed-core-10.1.17.jar:10.1.17]
-        at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191) ~[tomcat-embed-core-10.1.17.jar:10.1.17]
-        at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659) ~[tomcat-embed-core-10.1.17.jar:10.1.17]
-        at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) ~[tomcat-embed-core-10.1.17.jar:10.1.17]
-        at java.base/java.lang.Thread.run(Thread.java:1583) ~[na:na]
-Caused by: org.springframework.security.oauth2.core.OAuth2AuthorizationException: [invalid_token_response] An error occurred while attempting to retrieve the OAuth 2.0 Access Token Response: 401 Unauthorized: [no body]
-        at org.springframework.security.oauth2.client.endpoint.DefaultAuthorizationCodeTokenResponseClient.getResponse(DefaultAuthorizationCodeTokenResponseClient.java:99) ~[spring-security-oauth2-client-6.2.1.jar:6.2.1]
-        at org.springframework.security.oauth2.client.endpoint.DefaultAuthorizationCodeTokenResponseClient.getTokenResponse(DefaultAuthorizationCodeTokenResponseClient.java:78) ~[spring-security-oauth2-client-6.2.1.jar:6.2.1]
-        at org.springframework.security.oauth2.client.endpoint.DefaultAuthorizationCodeTokenResponseClient.getTokenResponse(DefaultAuthorizationCodeTokenResponseClient.java:56) ~[spring-security-oauth2-client-6.2.1.jar:6.2.1]
-        at org.springframework.security.oauth2.client.oidc.authentication.OidcAuthorizationCodeAuthenticationProvider.getResponse(OidcAuthorizationCodeAuthenticationProvider.java:172) ~[spring-security-oauth2-client-6.2.1.jar:6.2.1]
-        ... 70 common frames omitted
-Caused by: org.springframework.web.client.HttpClientErrorException$Unauthorized: 401 Unauthorized: [no body]
-        at org.springframework.web.client.HttpClientErrorException.create(HttpClientErrorException.java:106) ~[spring-web-6.1.2.jar:6.1.2]
-        at org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:183) ~[spring-web-6.1.2.jar:6.1.2]
-        at org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:137) ~[spring-web-6.1.2.jar:6.1.2]
-        at org.springframework.web.client.ResponseErrorHandler.handleError(ResponseErrorHandler.java:63) ~[spring-web-6.1.2.jar:6.1.2]
-        at org.springframework.web.client.RestTemplate.handleResponse(RestTemplate.java:932) ~[spring-web-6.1.2.jar:6.1.2]
-        at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:881) ~[spring-web-6.1.2.jar:6.1.2]
-        at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:721) ~[spring-web-6.1.2.jar:6.1.2]
-        at org.springframework.security.oauth2.client.endpoint.DefaultAuthorizationCodeTokenResponseClient.getResponse(DefaultAuthorizationCodeTokenResponseClient.java:92) ~[spring-security-oauth2-client-6.2.1.jar:6.2.1]
-        ... 73 common frames omitted
-
-I've tried changing from using the Okta Spring Boot starter to spring-boot-starter-oauth2-client (with Spring Security properties). The same error happens, so I'm pretty sure it's related to Gitpod. It is able to connect to Auth0 on startup. I know this because I fat-fingered the issuer and it fails to start when it's invalid.
-","1. I was curious about Gitpod and created this repo to try it.
-It worked like a charm. As mentioned in the comment to your question, you probably forgot to configure the spring.security.oauth2.client.* properties correctly.
-In the case of the repo above, I hardcoded the client-id in properties.yaml but used a VS Code launch configuration to avoid persisting the client-secret in the GitHub repo. When using this launch configuration in Gitpod (and after adding the valid redirect URI with the container name to Auth0), the user login works and the template displays the user subject.
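-As a rough sketch, with spring-boot-starter-oauth2-client the registration boils down to standard properties like these (the property names are standard Spring Security ones; the placeholder values are yours to fill in, and I'm assuming a registration id of okta to match the /login/oauth2/code/okta redirect URI in your logs):
-spring:
-  security:
-    oauth2:
-      client:
-        registration:
-          okta:
-            client-id: <your-client-id>
-            client-secret: <your-client-secret>   # keep this out of the repo (env var, launch config, etc.)
-            scope: openid,profile,email
-        provider:
-          okta:
-            issuer-uri: https://<your-auth0-domain>/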
-",Gitpod
-"I have the following config:
-jreleaser {
-    gitRootSearch = true
-    signing {
-        active.set(Active.ALWAYS)
-        armored.set(true)
-    }
-    deploy {
-        maven {
-            nexus2 {
-                create(""mavenCentral"") {
-                    active.set(Active.ALWAYS)
-                    url.set(""https://s01.oss.sonatype.org/service/local"")
-                    snapshotUrl.set(""https://s01.oss.sonatype.org/content/repositories/snapshots/"")
-                    closeRepository.set(true)
-                    releaseRepository.set(true)
-                    stagingRepository(""target/staging-deploy"")
-                }
-            }
-        }
-    }
-}
-
-I get this:
-FAILURE: Build failed with an exception.
-
-* What went wrong:
-Execution failed for task ':lib:jreleaserFullRelease'.
-> maven.nexus2.mavenCentral.stagingRepository does not exist: target/staging-deploy
-
-(There are barely any tutorials on this, especially for Gradle with the Kotlin DSL. I only found ones for Maven, which were still helpful.)
-What is stagingRepository supposed to mean?
-(I can also post the publishing part if needed.)
-","1. The stagingRepository is a local directory that contains artifacts to be deployed. Its use is explained at the official guide.
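-In practice this means you publish your artifacts into that directory first (for example via a local Maven repository in your publishing block) and only then run jreleaserFullRelease. A rough Kotlin DSL sketch, assuming you stage under build/staging-deploy (the path must match what you pass to stagingRepository):
-publishing {
-    repositories {
-        maven {
-            // local staging directory; point stagingRepository(...) at this same path
-            url = uri(layout.buildDirectory.dir(""staging-deploy""))
-        }
-    }
-}
-
-Then something like ./gradlew clean publish jreleaserFullRelease should find the staged artifacts instead of failing on a missing directory.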
-",JReleaser
-"Action point: I have to build the image and push it to a Docker registry with the help of the Kaniko executor.
-I am using a self-hosted GitHub runner.
-Steps: First I tried to set up the Docker configuration:
-
-- name: Set up Docker Configuration
-  env:
-    DOCKER_CONFIG_JSON: ${{ secrets.DOCKER_CONFIG_JSON }}
-  run: |
-    mkdir -p ${{ github.workspace }}/.docker
-    echo -n ""$DOCKER_CONFIG_JSON"" > ${{ github.workspace }}/.docker/config.json
-
-- name: Build and push alertmanager image to aws registry
-  run: |
-    pwd
-    ls -ltr
-    docker run --rm \
-      -v ${{ github.workspace }}:/workspace \
-      -v `pwd`/config.json:/kaniko/.docker/config.json:ro \
-      gcr.io/kaniko-project/executor:latest \
-      --context=/workspace \
-      --dockerfile=/workspace/alertmanager/Dockerfile \
-      --destination=xyz/dsc/alertmanager:1.0
-
-
-
-So when I try to build the image, I get this error: error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: checking push permission for ""xyz/abc/alertmanager:1.0"": creating push check transport for xyz failed: Get ""https://xyz/v2/"": tls: failed to verify certificate: x509: certificate signed by unknown authority
-
-","1. Just add the --skip-tls-verify option.
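-Based on your existing step, that looks roughly like this (your command as-is, with the flag appended; note that --skip-tls-verify disables certificate verification against the registry, so only use it if you accept that risk):
-docker run --rm \
-  -v ${{ github.workspace }}:/workspace \
-  -v `pwd`/config.json:/kaniko/.docker/config.json:ro \
-  gcr.io/kaniko-project/executor:latest \
-  --context=/workspace \
-  --dockerfile=/workspace/alertmanager/Dockerfile \
-  --destination=xyz/dsc/alertmanager:1.0 \
-  --skip-tls-verify
-
-If you would rather keep TLS verification, kaniko also has a --registry-certificate flag that lets you point it at your registry's CA certificate.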
-",kaniko
-"I am trying to use the --cache-repo option of the kaniko executor, but I see that it does not use the cache that I saved in ECR/AWS, and the GitLab log returns this:
-Checking for cached layer [MASKED]/dev-cache:627d56ef7c151b98c02c0de3d3d0d9a5bc8d538b1b1d58632ef977f4501b48f4...
-INFO[0521] No cached layer found for cmd COPY --from=build /../../../../..............
-
-I have rebuilt the image with the same tag and the code has not changed and it is still taking the same time...
-The version of kaniko I am using is the following gcr.io/kaniko-project/executor:v1.9.1
-These are the flags I use in kaniko:
-  /kaniko/executor --cache=true \
-    --cache-repo ""${URL_ECR}/dev-cache"" \
-    --cache-copy-layers \
-    --single-snapshot \
-    --context ""${CI_PROJECT_DIR}"" ${BUILD_IMAGE_EXTRA_ARGS} \
-    --dockerfile ""${CI_PROJECT_DIR}/Dockerfile"" \
-    --destination ""${IMAGE_NAME}:${IMAGE_TAG}"" \
-    --destination ""${IMAGE_NAME}:latest"" \
-    --skip-unused-stages \
-    --snapshotMode=redo \
-    --use-new-run
-
-Do you have any ideas?
-","1. Successfully resolved issue by removing the flags: --cache-copy-layers and --single-snapshot, and adding the flag: --cleanup
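-For reference, the resulting command would look roughly like this (the same flags as above, minus the two removed ones, plus --cleanup):
-  /kaniko/executor --cache=true \
-    --cache-repo ""${URL_ECR}/dev-cache"" \
-    --cleanup \
-    --context ""${CI_PROJECT_DIR}"" ${BUILD_IMAGE_EXTRA_ARGS} \
-    --dockerfile ""${CI_PROJECT_DIR}/Dockerfile"" \
-    --destination ""${IMAGE_NAME}:${IMAGE_TAG}"" \
-    --destination ""${IMAGE_NAME}:latest"" \
-    --skip-unused-stages \
-    --snapshotMode=redo \
-    --use-new-run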
-
-2. When we set --cache-repo to an ECR repo URL, Kaniko pushes all layers to the ECR repo as cache; if the Dockerfile has many multi-step instructions, this increases the ECR repo storage size, and each build pushes the cache to the ECR repo again.
-Can we have an option such that if a layer cache is already present in the ECR repo, it is not pushed again?
-@Tom Saleeba @Andres Cabrera
-",kaniko
-"I want to create a CI/CD pipeline where
-
-The developer pushes his code to the Github repo.
-Github action runs the unit tests, and upon the passing of all the tests, it lets you merge the PR.
-On K8s cluster an argocd deployment listens for changes on this repo. Upon a new merge, it triggers a build process. Here is my confusion: I want Kaniko to build a docker image from this repo and push this image to GCR where K8s is running.
-Once the Kaniko build is successful, I want to update the helm chart image version to the latest tag and then deploy the actual application via argocd.
-
-I am not sure how to transition from step 3 to step 4. I will be able to build and push the image to GCR, but how do I update the new image tag in the deployment spec of the application, and how do I trigger the deployment?
-I am also not sure about the directory structure.
-For now, I am thinking of putting the Kaniko spec here.
-But then that means I'll have to create a new repo just for the specs of this application deployment. Does that make sense, or am I approaching this the wrong way?
-","1. Your question includes different topics which makes it hard to cover all of them here. I'll try to address most of them.
-Argo CD, as the name suggests, only handles the CD part.
-Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes.
-Argo CD follows the GitOps pattern of using Git repositories as the source of truth for defining the desired application state.
-Since you are using GitHub, you need to define your pipeline within a GitHub workflow (Action), plus the modification of the manifest files where your application's K8s files are located -- that covers all of the steps (1 to 4) you mentioned.
-For the build and push step, I believe you need an Actions Runner Controller (ARC) as a self-hosted runner, so you can trigger your own runner to build and push your image.
-# for e.g. in your Github workflow, you can use:
-jobs:
-  build:
-    runs-on: 
-        - self-hosted   # required label to tigger your runner.
-
-In the official Kaniko docs, I could only find an example for GitLab CI, but you can use it to get the idea for GitHub. [LINK]
-So now let's assume you have a GitHub workflow that triggers on a specific branch; within the pipeline you build and push your image, and now it's time to update the K8s manifest with the new image tag.
-You also need to define the IMAGE_TAG variable in your GitHub workflow.
-    - name: Deploy to staging
-      run: |
-        sudo wget -qO /usr/local/bin/yq https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64
-        sudo chmod a+x /usr/local/bin/yq
-        sudo git config --global url.""https://${{ secrets.GIT_CLONE_SECRET }}@github.com"".insteadOf ""https://github.com""
-        sudo git clone https://<REPO_ADDRESS_OF_YOUR_HELM_CHARTS>
-        cd helm-charts    # this part, is related to your repo structure
-        sudo yq -i eval '.image.tag = ""${{ env.IMAGE_TAG }}""' <YOUR_PATH_TO_HELM_VALUES>/values.yaml
-        sudo git config user.email <YOUR_GIT_EMAIL_ADDRESS_HERE>
-        sudo git config user.name <YOUR_GIT_USERNAME_HERE>
-        sudo git add .
-        sudo git commit -m ""image updated with ${{ env.IMAGE_TAG }}""
-        sudo git push origin master  # CHANGE IT TO THE BRANCH YOUR K8S MANIFEST ARE.
-
-As for your directory and deployment structure, the official Argo CD docs can help you: LINK
-",kaniko
-"While going over the Getting Started on Okteto Cloud with PHP tutorial, I am getting the “certificate signed by unknown authority” error when running okteto init. I believe it’s related to the custom Zscaler CA that our company defines.
-How can I get the okteto CLI to trust a custom CA? As far as I understand it's developed in Go, but setting SSL_CERT_FILE and SSL_CERT_DIR to the location of the certificates didn't help.
-➜ php-getting-started git:(main) okteto init
-i Using … @ cloud.okteto.com as context
-✓ Okteto manifest (okteto.yml) deploy and build configured successfully
-? Do you want to launch your development environment? [Y/n]: y
-i Building ‘Dockerfile’ in tcp://buildkit.cloud.okteto.net:443…
-[+] Building 0.0s (0/0)
-x Error building service ‘hello-world’: error building image ‘registry.cloud.okteto.net/.../php-hello-world:1.0.0’: build failed: failed to dial gRPC: rpc error: code = Unavailable desc = connection error: desc = “transport: authentication handshake failed: x509: certificate signed by unknown authority”
-
-","1. This is not supported on the latest build (2.15.3), but is scheduled to be released on the next.
-The fix is already merged, and available on the dev channel:
-export OKTETO_CHANNEL=dev
-curl https://get.okteto.com -sSfL | sh
-
-https://community.okteto.com/t/allowing-custom-certificates-in-okteto-cli/828 has more information on this.
-",Okteto
-"currently, I am trying to create an Okteto development environment with a NodeJS project + Mongo database. I already created an Okteto account and authenticated Okteto with my CLI.
-I am using Okteto CLI version 2.3.3 and would like to use the Docker compose file configured for my application. My docker-compose.yml file is as following:
-version: '3'
-
-services: 
-  app:
-    container_name: docker-node-mongo
-    restart: always
-    build: .
-    ports:
-      - 3000:3000
-    environment:
-      DB_HOST: mongo
-      DB_PORT: 27017
-      DB_USER: kevin
-      DB_PASS: Ab12345!
-      DB_NAME: devops_week1
-    volumes:
-     - .:/user/src/app
-
-  mongo:
-    container_name: mongo
-    image: mongo
-    restart: always
-    ports:
-      - 27017:27017
-    environment:
-      MONGO_INITDB_ROOT_USERNAME: kevin
-      MONGO_INITDB_ROOT_PASSWORD: Ab12345!
-    volumes:
-      - ./mongo-data:/data/db
-
-Steps I took to create Okteto development environment (inside local repo folder):
-
-okteto context use https://cloud.okteto.com
-okteto up
-
-Unfortunately, I get the following error during okteto up command:
- i  Building the image 'okteto.dev/devops-2223-kevinnl1999-mongo:okteto-with-volume-mounts' in tcp://buildkit.cloud.okteto.net:443...
-[+] Building 2.9s (7/7) FINISHED
- => [internal] load .dockerignore                                                                                                                                                                                                   0.5s 
- => => transferring context: 67B                                                                                                                                                                                                    0.4s 
- => [internal] load build definition from buildkit-3009021795                                                                                                                                                                       0.6s 
- => => transferring dockerfile: 82B                                                                                                                                                                                                 0.4s 
- => [internal] load metadata for docker.io/library/mongo:latest                                                                                                                                                                     0.9s 
- => [internal] load build context                                                                                                                                                                                                   0.4s 
- => => transferring context: 32B                                                                                                                                                                                                    0.3s 
- => => resolve docker.io/library/mongo@sha256:2374c2525c598566cc4e62145ba65aecfe1bd3bf090cccce1ca44f3e2b60f861                                                                                                                      0.1s 
- => CACHED [2/2] COPY mongo-data /data/db                                                                                                                                                                                           0.0s 
- => ERROR exporting to image                                                                                                                                                                                                        0.7s 
- => => exporting layers                                                                                                                                                                                                             0.0s 
- => => exporting manifest sha256:1aa8d8d6b1baf52e15dbb33e3578e5b2b4d827b20f279f4eb7577b7ee1404c60                                                                                                                                   0.1s 
- => => exporting config sha256:21be98f915c4523d85e7d89832742648b9fd2ea33d240ba6cd8c6b94d111c406                                                                                                                                     0.0s 
- => => pushing layers                                                                                                                                                                                                               0.6s 
-------
- > exporting to image:
-------
- x  Error building image 'registry.cloud.okteto.net/kevinnl1999/devops-2223-kevinnl1999-mongo:okteto-with-volume-mounts': build failed: failed to solve: content digest sha256:10ac4908093d4325f2c94b2c9a571fa1071a17a72dd9c21c1ffb2c86f68ca028: not found
-
-It looks like Okteto is looking for an old volume. I already tried the steps above in a new Okteto namespace and tried the okteto up --reset command hoping that it would clear some cache. What could be the solution to this error?
-","1. I figured out a solution to my problem. The problem was about cache as I suspected. What I did to solve the problem:
-
-Removed old container, image and volumes
-docker builder prune --all
-docker build with --no-cache flag
-
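-In command form, that cleanup was roughly the following (a sketch; the docker compose down flags are my shorthand for removing the old containers, images and volumes):
-docker compose down --volumes --rmi all
-docker builder prune --all
-docker compose build --no-cache
-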
-After that, okteto up was successful.
-",Okteto
-"I've deployed the Duende IdentityServer to Okteto Cloud: https://id6-jeff-tian.cloud.okteto.net/.
-Although the endpoint is https from the outside, the inside pods still think they are behind HTTP protocol. You can check the discovery endpoint to find out: https://id6-jeff-tian.cloud.okteto.net/.well-known/openid-configuration
-
-That causes issues during some redirecting. So how to let the inner pods know that they are hosted in https scheme?
-Can we pass some headers to the IdP to tell it the original https schema?
-These headers should be forwarded to the inner pods:
-X-Forwarded-For: Holds information about the client that initiated the request and subsequent proxies in a chain of proxies. This parameter may contain IP addresses and, optionally, port numbers.
-X-Forwarded-Proto: The value of the original scheme, should be https in this case.
-X-Forwarded-Host: The original value of the Host header field.
-I searched through some ASP.NET documentation and found this: https://docs.microsoft.com/en-us/aspnet/core/host-and-deploy/proxy-load-balancer?source=recommendations&view=aspnetcore-6.0; however, I don't know how to configure these headers in Okteto, or in any k8s cluster.
-Is there anyone who can shed some light here?
-My ingress configurations is as follows (https://github.com/Jeff-Tian/IdentityServer/blob/main/k8s/app/ingress.yaml):
-apiVersion: networking.k8s.io/v1
-kind: Ingress
-metadata:
-  name: id6
-  annotations:
-    dev.okteto.com/generate-host: id6
-spec:
-  rules:
-    - http:
-        paths:
-          - backend:
-              service:
-                name: id6
-                port:
-                  number: 80
-            path: /
-            pathType: ImplementationSpecific
-
-","1. The headers that you mention are being added to the request when it’s forwarded to your pods.
-Could you dump the headers on the receiving end?
-Not familiar with Duende, but does it have a setting to specify the “public URL”? That’s typically what I’ve done in the past for similar setups.
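-If you end up handling it in the app itself, the standard ASP.NET Core approach from the doc you linked is the forwarded-headers middleware. A minimal sketch (KnownNetworks/KnownProxies are cleared here because the ingress address isn't known in advance -- tighten that for production):
-using Microsoft.AspNetCore.HttpOverrides;
-
-var builder = WebApplication.CreateBuilder(args);
-builder.Services.Configure<ForwardedHeadersOptions>(options =>
-{
-    // honour X-Forwarded-For and X-Forwarded-Proto set by the ingress
-    options.ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;
-    options.KnownNetworks.Clear();
-    options.KnownProxies.Clear();
-});
-
-var app = builder.Build();
-// must run before authentication / IdentityServer middleware so the https scheme is restored
-app.UseForwardedHeaders();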
-",Okteto
-"I am trying to upload my Rasa chatbot with Okteto via Docker. So I have implemented a ""Dockerfile"", a ""docker-compose.yaml"" and an ""okteto.yaml"". For the last few weeks the code worked fine. Today it won't work anymore because Okteto gives the error: Invalid compose name: must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric characterexit status 1.
-I really don't understand what I should change. Thanks.
-docker-compose.yaml:
-version: '3.4'
-services:
-
-  rasa-server:
-    image: rasa-bot:latest
-    working_dir: /app
-    build: ""./""
-    restart: always
-    volumes:
-    - ./actions:/app/actions
-    - ./data:/app/data
-    command: bash -c ""rm -rf .rasa/* && rasa train && rasa run --enable-api --cors \""*\"" -p 5006""
-    ports:
-    - '5006:5006'
-    networks:
-    - all
-
-  rasa-actions-server:
-    image: rasa-bot:latest
-    working_dir: /app
-    build: ""./""
-    restart: always
-    volumes:
-    - ./actions:/app/actions
-    command: bash -c ""rasa run actions""
-    ports:
-    - '5055:5055'
-    networks:
-    - all
-
-networks:
-  all:
-    driver: bridge
-    driver_opts:
-      com.docker.network.enable_ipv6: ""true""
-
-Dockerfile:
-FROM python:3.7.13 AS BASE
-
-
-WORKDIR /app
-
-COPY requirements.txt .
-RUN pip install -r requirements.txt
-COPY . .
-CMD [""./bot.py""]
-
-RUN pip install --no-cache-dir --upgrade pip
-RUN pip install rasa==3.3.0
-
-
-ADD config.yml config.yaml
-ADD domain.yml domain.yaml
-ADD credentials.yml credentials.yaml
-ADD endpoints.yml endpoints.yaml
-
-okteto.yml:
-name: stubu4ewi
-autocreate: true
-image: okteto.dev/rasa-bot:latest
-command: bash
-volumes:
-  - /root/.cache/pip
-sync:
-  - .:/app
-forward:
-  - 5006:5006
-reverse:
-  - 9000:9000
-
-Error
-Found okteto manifest on /okteto/src/okteto.yml
-Unmarshalling manifest...
-Okteto manifest unmarshalled successfully
-Found okteto compose manifest on docker-compose.yaml
-Unmarshalling compose...
-x  Invalid compose name: must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric characterexit 
-status 1
-
-I don't have any clue what went wrong. It worked fine until yesterday, and even though nothing changed, Okteto gives this error.
-Tried to rename the docker-compose.yaml to: docker-compose.yml, okteto-compose.yml
-","1. That error is not about the file's name itself but the name of the services defined inside your docker-compose.yaml file.
-What command did you run, and what version of the okteto cli are you using? okteto version will give it to you.
-
-2. If you ever face this problem: rename your repo so that it consists only of lower case alphanumeric characters or '-', and starts and ends with an alphanumeric character.
-It seems like Okteto uses the repository name to build the images.
-",Okteto
-"I am going through this documentation: https://learn.microsoft.com/en-us/azure/azure-sql/database/azure-sql-dotnet-entity-framework-core-quickstart?view=azuresql&tabs=visual-studio%2Cservice-connector%2Cportal
-Specific snipped that has the issue:
-app.MapGet(""/Person"", (PersonDbContext context) =>
-{
-    return context.Person.ToList();
-})
-.WithName(""GetPersons"")
-.WithOpenApi();
-
-app.MapPost(""/Person"", (Person person, PersonDbContext context) =>
-{
-    context.Add(person);
-    context.SaveChanges();
-})
-.WithName(""CreatePerson"")
-.WithOpenApi();
-
-I want to connect Azure SQL Server to my project. I started a React + .NET project, and these are the only changes I've made; however, I come across this error:
-Error (active)  CS1061  'RouteHandlerBuilder' does not contain a definition for 'WithOpenApi' and no accessible extension method 'WithOpenApi' accepting a first argument of type 'RouteHandlerBuilder' could be found (are you missing a using directive or an assembly reference?)
-
-I came across this documentation but it didn't resolve the issue: https://learn.microsoft.com/en-us/aspnet/core/tutorials/getting-started-with-swashbuckle?view=aspnetcore-8.0&tabs=netcore-cli
-","1. I had the same issue with that Azure SQL learn page.
-I followed the .NET CLI steps, and to get my console app connecting to my Azure SQL DB I added two NuGet packages, a using directive, and the few Swagger lines from the referenced URL:
-dotnet add package Microsoft.AspNetCore.OpenApi
-dotnet add package Swashbuckle.AspNetCore
-...
-using Microsoft.AspNetCore.OpenApi;
-...
-
-builder.Services.AddEndpointsApiExplorer();
-builder.Services.AddSwaggerGen();
-...
-
-if (app.Environment.IsDevelopment())
-{
-    app.UseSwagger();
-    app.UseSwaggerUI();
-}
-
-good luck
-ref: https://learn.microsoft.com/en-us/aspnet/core/fundamentals/minimal-apis/openapi?view=aspnetcore-8.0
-",OpenAPI
-"FastAPI generates the ""openapi.json"" file and provides an interface to it.
-For an experiment I need to replace this with a third party file.
-from pathlib import Path
-import json
-app.openapi_schema = json.loads(Path(r""myopenapi.json"").read_text())
-
-
-when I put this code behind a endpoint, for example ""/""
-@app.get(""/"", include_in_schema=FULL_SCHEMA)
-def read_root():
-    # code here
-
-
-After calling the endpoint once, the loaded myopenapi.json is displayed for the ""/docs"" interface and the original is overwritten. The functionality has not changed, the old definitions still work.
-I would like to be able to make the switch directly after FastAPI has completed the setup and all end points are created.
-Putting this in the startup code block doesn't work (async def lifespan(app: FastAPI):) - when this reaches yield, app.openapi_schema is not created yet.
-Where is the right place to change the FastAPI app after generation?
-FastAPI is started with the command:
-uvicorn.run(app, host=SERVER_HOST, port=SERVER_PORT, workers=1)
-
-","1. @Helen had the right idea in the comment section
-And the ""problem"" was that the openapi schema in FastAPI is created on demand and not on startup.
-Adding this code works.
-Note: I chose to use the original swagger.yaml I was given over converting it to json for this test.
-import yaml
-from pathlib import Path
-from fastapi import FastAPI
-
-def custom_openapi():
-    if app.openapi_schema:
-        return app.openapi_schema
-
-    openapi_schema= yaml.safe_load(Path(""swagger.yaml"").read_text())
-
-    # remove the server from the schema
-    del openapi_schema['servers']
-
-    app.openapi_schema = openapi_schema
-    return app.openapi_schema
-
-app = FastAPI()
-app.openapi = custom_openapi
-
-It's also doable to add an exception handler on StarletteHTTPException and then switch out 404 errors for 501 for those not-implemented fake endpoints.
-",OpenAPI
-"I'm trying to access the headers, url path, query parameters and other http information associated with a request received by my poem server.  The server uses poem-openapi and tokio to receive and process requests. Here's the driving code from main.rs:
-// Create the routes and run the server.
-let addr = format!(""{}{}"", ""0.0.0.0:"", RUNTIME_CTX.parms.config.http_port);
-let ui = api_service.swagger_ui();
-let app = Route::new()
-    .nest(""/v1"", api_service)
-    .nest(""/"", ui)
-    .at(""/spec"", spec)
-    .at(""/spec_yaml"", spec_yaml);
-
-// ------------------ Main Loop -------------------
-// We expect the certificate and key to be in the external data directory.
-let key = RUNTIME_CTX.tms_dirs.certs_dir.clone() + TMSS_KEY_FILE;
-let cert = RUNTIME_CTX.tms_dirs.certs_dir.clone() + TMSS_CERT_FILE;
-poem::Server::new(
-    TcpListener::bind(addr).rustls(
-        RustlsConfig::new().fallback(
-            RustlsCertificate::new()
-                .key(std::fs::read(key)?)
-                .cert(std::fs::read(cert)?),
-        ),
-    ),
-)
-.name(SERVER_NAME)
-.run(app)
-.await
-
-Multiple endpoints work just fine, but I'm having a hard time accessing detailed request information in my asynchronous endpoint code that looks something like this:
-impl MyApi {
-    #[oai(path = ""/myapp/endpoint"", method = ""post"")]
-    async fn get_new_ssh_keys(&self, req: Json<MyReq>) -> Json<MyResp> {
-        let resp = match MyResp::process(&req) { ... } }
-
-I tried using poem::web::FromRequest, poem_openapi::param::Header and other interfaces, but the documentation is a bit sparse. One concrete question is how do I access the http headers in the endpoint code?  Thanks.
-","1. You would extract the &HeaderMap:
-use poem::http::HeaderMap;
-
-async fn get_new_ssh_keys(&self, headers: &HeaderMap, req: Json<MyReq>) -> Json<MyResp> {
-                              // ^^^^^^^^^^^^^^^^^^^
-
-The only types that can be used as parameters in your handlers are those that implement FromRequest. You can consult the implementations listed in the docs to determine what you can use (poem_openapi::param::Header is not one of them). More info is in the Extractors section of the main docs.
-Likewise for URL, query, etc. there are the &Uri and Query extractors. If you're needing so much basic information from the request though, I'd suggest extracting the full &Request which will have all that data (except the body I believe).
-",OpenAPI
-"I'm trying to configure the CloudWatch Agent using Packer. I was able to install the CW Agent using Packer; however, when I try to start the agent, the image creation fails.
-I know that the temporary EC2 instance that Packer creates during the image creation process does not have the necessary IAM roles attached that allow pushing metrics/logs to CloudWatch. I know that's why it fails to start the agent.
-# Install CloudWacth Agent
-wget https://s3.amazonaws.com/amazoncloudwatch-agent/ubuntu/amd64/latest/amazon-cloudwatch-agent.deb
-sudo dpkg -i -E ./amazon-cloudwatch-agent.deb
-rm amazon-cloudwatch-agent.deb
-
-But when I add the line below to start the CW Agent, it fails to create the image:
-sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c ssm:AmazonCloudWatch-linux -s
-
-Is there any way that I can achieve this? I need to be able to start the 
-agent while setting up the image.
-","1. If using Ansible provisioner you can set the service as enabled in order for it to be started on the instance boot:
-- name: Start CloudWatch agent service on boot
-  ansible.builtin.service:
-    name: amazon-cloudwatch-agent
-    enabled: true
-
-References:
-https://www.digitalocean.com/community/tutorials/how-to-use-systemctl-to-manage-systemd-services-and-units#enabling-and-disabling-services
-https://docs.ansible.com/ansible/latest/collections/ansible/builtin/service_module.html#parameter-enabled
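-
-If you are using a plain shell provisioner instead of Ansible, the equivalent (a sketch) is to only enable the unit at bake time and leave fetch-config/start for first boot, when the instance actually has the IAM role attached:
-# enable the service so systemd starts it on instances launched from the AMI
-sudo systemctl enable amazon-cloudwatch-agent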
-",Packer
-"Want to automate using any Jenkins Pipeline : How to detect the latest AMI ID available and use that for customization like additional packages ?
-Any other tool to detect new AMI and deploy EC2 Instance.
-","1. Try using EC2 ImageBuilder (if you want to develop a custom AMI with additional packages) which can be later used to deploy EC2Instance.
-I have worked on the same using terraform. Here are the resources:
-https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/imagebuilder_component
-Assuming either that the custom AMI has been built or that you are using a base AMI, use a data lookup block to get the most recent image:
-data ""aws_ami"" ""latest_version""{ 
-     owners = [#replace with accountID] 
-     most_recent=true
-     name_regex = ""#replace with your AMI name if needed"" 
-}
-
-Once you add the required data lookup element, while creating the ec2 instance, you can use this AMI-ID, so that you will have the most recent AMI version.
-resource ""aws_instance"" ""new_instance""{
-    ami = data.aws_ami.latest_version.id   # reference the data source defined above
-    #....other resource properties...#
-}
-
-We can manage the terraform state files using Jenkins.
-
-2. There might be other options available, but the one I know is subscribing to the AWS AMI SNS topic, then use AWS EventBridge to send a notification to your system, if you are using CodeBuild, then you could trigger it directly. If you are using Jenkins then you could trigger your Jenkins pipeline via a Webhook or something.
-
-3. Try fetching the latest AMI Id of the specified image name from AWS SSM. Search for the required AMI's name in AWS SSM. For example, to fetch the latest AMI details of Windows 2019 server, call this aws cli command:
-aws ssm get-parameter --name /aws/service/ami-windows-latest/Windows_Server-2019-English-Full-Base
-
-You can automate this in Jenkins by fetching the AMI ID with a shell or PowerShell script and querying the JSON output (a shell sketch is shown after the Python example below). You can also use the Python boto3 library to fetch the AMI ID:
-import os
-import sys,json
-import time
-import boto3
-
-ssmParameter = str(sys.argv[1])
-region = str(sys.argv[2])
-client = boto3.client('ssm', region)
-
-response = client.get_parameter(
-    Name=ssmParameter
-)
-
-amiValue = json.loads(response['Parameter']['Value'])
-print(amiValue['image_id'])
-    
-sys.stdout.flush()    
-
-
-It can be called as follows to fetch ami id of Windows server 2019:
-python filename.py '/aws/service/ami-windows-latest/Windows_Server-2019-English-Full-Base' 'us-east-1'
-
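-A shell-only sketch of the same lookup (assuming the AWS CLI and jq are available on the Jenkins agent):
-aws ssm get-parameter \
-  --name /aws/service/ami-windows-latest/Windows_Server-2019-English-Full-Base \
-  --region us-east-1 \
-  --query 'Parameter.Value' --output text | jq -r '.image_id'
-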
-",Packer
-"I would like to use plurals for my Android project.
-However, the values I provide can be float values. 
-So for instance, when setting 1.5 stars, I want it to understand that it's not 1 star but 1.5 stars.
-<plurals name=""stars"">
-  <item quantity=""one"">%d star</item>
-  <item quantity=""other"">%d stars</item>
-</plurals>
-
-However, the Android system seems to use integer values (%d) only.
-The method looks like this:
-String getQuantityString(@PluralsRes int id, int quantity, Object... formatArgs)
-
-where quantity is defined as Int.
-Is there any solution for this? 
-","1. After doing further research, it appears there is no good solution for this.
-As also seen in the other answers, they always require a lot of ""manual processing"", making the workflow no different from creating separate string resources.
-The general suggestion seems to be rounding/processing the float values manually (e.g. checking whether the float value matches 1.0) and then using appropriate Int values for the plurals call.
-But aside from not really using plurals, this comes with the problem of other languages (e.g. I have no clue whether 1.5 stars would also be plural in another language as it is in English), so these rounding options may not apply universally.
-So the answer is: there seems to be no perfect solution (meaning solved ""automatically"" by the Android system).
-What I actually do therefore is to simply pick exceptions and use different Strings there.
-So the (pseudo code) way of doing currently looks like
-// optionally wrap different languages around
-// if language == English
-
-   when (amountStars) {
-    is 1.0 -> getString(R.string.stars_singular, 1) 
-    ... ->
-    else -> getString(R.string.stars_plural, amountStars)
-   }
-// if language == Chinese ...
-
-where additional cases have to be ""hard coded"". So for example you have to decide whether 0 means 
-""0 stars"" (plural string) or 
-""no star"" (singular string)
-But then there seems to be no real benefit of using plurals over separate string resources with common placeholders. On the other hand, this (at least for me) gives more flexibility for formatting options. For example, one may create a text like ""1 star and a half"" where it becomes singular again (even though numerically we would write 1.5 stars).
-
-2. Don't use plurals for fractional numbers. Just stick with basic string resources and use a placeholder:
-<string name=""fractional_stars"">%1$s stars</string>
-
-
-getString(R.string.fractional_stars, 0.5F.toString())
-
-
-or
-<string name=""fractional_stars"">% stars</string>
-
-
-getString(R.string.half_a_star).replace(""%"", 0.5F.toString())
-
-
-3. Simply do this:
-getQuantityString(R.plurals.stars, quantity > 1f ? 2 : 1, quantity);
-
-And replace the %d in your strings with %f.
-",Plural
-"In PHP, I use Kuwamoto's class to pluralize nouns in my strings. I didn't find something as good as this script in javascript except for some plugins. So, it would be great to have a javascript function based on Kuwamoto's class.
-http://kuwamoto.org/2007/12/17/improved-pluralizing-in-php-actionscript-and-ror/
-","1. Simple version (ES6):
-const pluralize = (count, noun, suffix = 's') =>
-  `${count} ${noun}${count !== 1 ? suffix : ''}`;
-
-Typescript:
-const pluralize = (count: number, noun: string, suffix = 's') =>
-  `${count} ${noun}${count !== 1 ? suffix : ''}`;
-
-Usage:
-pluralize(0, 'turtle'); // 0 turtles
-pluralize(1, 'turtle'); // 1 turtle
-pluralize(2, 'turtle'); // 2 turtles
-pluralize(3, 'fox', 'es'); // 3 foxes
-
-This obviously doesn't support all English edge cases, but it's suitable for most purposes.
-
-2. Use Pluralize
-There's a great little library called Pluralize that's packaged in npm and bower.
-This is what it looks like to use: 
-import Pluralize from 'pluralize';
-
-Pluralize( 'Towel', 42 );       // ""Towels""
-
-Pluralize( 'Towel', 42, true ); // ""42 Towels""
-
-And you can get it here: 
-https://github.com/blakeembrey/pluralize
-
-3. So, I answer my own question by sharing my translation in javascript of Kuwamoto's PHP class.
-String.prototype.plural = function(revert){
-
-    var plural = {
-        '(quiz)$'               : ""$1zes"",
-        '^(ox)$'                : ""$1en"",
-        '([m|l])ouse$'          : ""$1ice"",
-        '(matr|vert|ind)ix|ex$' : ""$1ices"",
-        '(x|ch|ss|sh)$'         : ""$1es"",
-        '([^aeiouy]|qu)y$'      : ""$1ies"",
-        '(hive)$'               : ""$1s"",
-        '(?:([^f])fe|([lr])f)$' : ""$1$2ves"",
-        '(shea|lea|loa|thie)f$' : ""$1ves"",
-        'sis$'                  : ""ses"",
-        '([ti])um$'             : ""$1a"",
-        '(tomat|potat|ech|her|vet)o$': ""$1oes"",
-        '(bu)s$'                : ""$1ses"",
-        '(alias)$'              : ""$1es"",
-        '(octop)us$'            : ""$1i"",
-        '(ax|test)is$'          : ""$1es"",
-        '(us)$'                 : ""$1es"",
-        '([^s]+)$'              : ""$1s""
-    };
-
-    var singular = {
-        '(quiz)zes$'             : ""$1"",
-        '(matr)ices$'            : ""$1ix"",
-        '(vert|ind)ices$'        : ""$1ex"",
-        '^(ox)en$'               : ""$1"",
-        '(alias)es$'             : ""$1"",
-        '(octop|vir)i$'          : ""$1us"",
-        '(cris|ax|test)es$'      : ""$1is"",
-        '(shoe)s$'               : ""$1"",
-        '(o)es$'                 : ""$1"",
-        '(bus)es$'               : ""$1"",
-        '([m|l])ice$'            : ""$1ouse"",
-        '(x|ch|ss|sh)es$'        : ""$1"",
-        '(m)ovies$'              : ""$1ovie"",
-        '(s)eries$'              : ""$1eries"",
-        '([^aeiouy]|qu)ies$'     : ""$1y"",
-        '([lr])ves$'             : ""$1f"",
-        '(tive)s$'               : ""$1"",
-        '(hive)s$'               : ""$1"",
-        '(li|wi|kni)ves$'        : ""$1fe"",
-        '(shea|loa|lea|thie)ves$': ""$1f"",
-        '(^analy)ses$'           : ""$1sis"",
-        '((a)naly|(b)a|(d)iagno|(p)arenthe|(p)rogno|(s)ynop|(t)he)ses$': ""$1$2sis"",        
-        '([ti])a$'               : ""$1um"",
-        '(n)ews$'                : ""$1ews"",
-        '(h|bl)ouses$'           : ""$1ouse"",
-        '(corpse)s$'             : ""$1"",
-        '(us)es$'                : ""$1"",
-        's$'                     : """"
-    };
-
-    var irregular = {
-        'move'   : 'moves',
-        'foot'   : 'feet',
-        'goose'  : 'geese',
-        'sex'    : 'sexes',
-        'child'  : 'children',
-        'man'    : 'men',
-        'tooth'  : 'teeth',
-        'person' : 'people'
-    };
-
-    var uncountable = [
-        'sheep', 
-        'fish',
-        'deer',
-        'moose',
-        'series',
-        'species',
-        'money',
-        'rice',
-        'information',
-        'equipment'
-    ];
-
-    // save some time in the case that singular and plural are the same
-    if(uncountable.indexOf(this.toLowerCase()) >= 0)
-      return this;
-
-    // check for irregular forms
-    for(var word in irregular){
-
-      if(revert){
-              var pattern = new RegExp(irregular[word]+'$', 'i');
-              var replace = word;
-      } else{ var pattern = new RegExp(word+'$', 'i');
-              var replace = irregular[word];
-      }
-      if(pattern.test(this))
-        return this.replace(pattern, replace);
-    }
-
-    if(revert) var array = singular;
-         else  var array = plural;
-
-    // check for matches using regular expressions
-    for(var reg in array){
-
-      var pattern = new RegExp(reg, 'i');
-
-      if(pattern.test(this))
-        return this.replace(pattern, array[reg]);
-    }
-
-    return this;
-}
-
-Easy to use:
-alert(""page"".plural()); // return plural form => pages
-alert(""mouse"".plural()); // return plural form => mice
-alert(""women"".plural(true)); // return singular form => woman
-
-DEMO
-",Plural
-"I'm looking for a function that given a string it switches the string to singular/plural. I need it to work for european languages other than English.
-Are there any functions that can do the trick? (Given a string to convert and the language?)
-Thanks
-","1. Here is my handy function:
-function plural( $amount, $singular = '', $plural = 's' ) {
-    if ( $amount === 1 ) {
-        return $singular;
-    }
-    return $plural;
-}
-
-By default, it just adds the 's' after the string. For example:
-echo $posts . ' post' . plural( $posts );
-
-This will echo '0 posts', '1 post', '2 posts', '3 posts', etc. But you can also do:
-echo $replies . ' repl' . plural( $replies, 'y', 'ies' );
-
-Which will echo '0 replies', '1 reply', '2 replies', '3 replies', etc. Or alternatively:
-echo $replies . ' ' . plural( $replies, 'reply', 'replies' );
-
-And it works for some other languages too. For example, in Spanish I do:
-echo $comentarios . ' comentario' . plural( $comentarios );
-
-Which will echo '0 comentarios', '1 comentario', '2 comentarios', '3 comentarios', etc. Or if adding an 's' is not the way, then:
-echo $canciones . ' canci' . plural( $canciones, 'ón', 'ones' );
-
-Which will echo '0 canciones', '1 canción', '2 canciones', '3 canciones', etc.
-
-2. This is not easy: each language has its own rules for forming plurals of nouns. In English it tends to be that you put ""-s"" on the end of a word, unless it ends in ""-x"", ""-s"", ""-z"", ""-sh"", ""-ch"" in which case you add ""-es"". But then there's ""mouse""=>""mice"", ""sheep""=>""sheep"" etc.
-The first thing to do, then, is to find out what the rule(s) are for forming the plural from the singular noun in the language(s) you want to work with. But that's not the whole solution. Another problem is recognising nouns. If you are given a noun, and need to convert it from singular to plural that's not too hard, but if you are given a piece of free text and you have to find the singular nouns and convert them to plural, that's a lot harder. You can get lists of nouns, but of course some words can be nouns and verbs (""hunt"", ""fish"", ""break"" etc.) so then you need to parse the text to identify the nouns.
-It's a big problem. There's probably an AI system out there that would do what you need, but I don't imagine there'll be anything free that does it all.
-
-3. There is no function built into PHP for this type of operation.  There are, however, some tools that you may be able to use to help accomplish your goal.
-There is an unofficial Google Dictionary API which contains information on plurals.  You can read more about that method here.  You'll need to parse a JSON object to find the appropriate items and this method tends to be a little bit slow.
-The other method that comes to mind is to use aspell, or one of its relatives.  I'm not sure how accurate the dictionaries are for languages other than English but I've used aspell to expand words into their various forms.
-I know this is not the most helpful answer, but hopefully it gives you a general direction to search in.
-",Plural
-"When using SwiftUI's inflect parameter like this:
-import SwiftUI
-
-struct ContentView: View {
-    @State  var count = 1
-    
-    var body: some View {
-        VStack {
-            VStack{
-                
-                Text(""^[\(count) Song](inflect: true)"")
-                Button(""Add""){ count += 1}
-                Button(""Remove""){ count -= 1}
-            }
-        }
-        .padding()
-    }
-}
-
-struct ContentView_Previews: PreviewProvider {
-    static var previews: some View {
-        ContentView()
-    }
-}
-
-Everything works as expected, but if we try to do these things, it does not:
-import SwiftUI
-
-struct ContentView: View {
-    @State  var count = 1
-
-    
-    var body: some View {
-        VStack {
-            VStack{
-                Text(generateString(count: count))
-                Button(""Add""){ count += 1}
-                Button(""Remove""){ count -= 1}
-            }
-        }
-        .padding()
-    }
-}
-
-func generateString(count:Int) -> String {
-    return ""^[\(count) Song](inflect: true)""
-}
-
-struct ContentView_Previews: PreviewProvider {
-    static var previews: some View {
-        ContentView()
-    }
-}
-
-I hope this is not the case. If it is, why is that?
-","1. Simply change the return type of your method to LocalizedStringKey and it should work:
-func generateString(count:Int) -> LocalizedStringKey {
-    return ""^[\(count) Song](inflect: true)""
-}
-
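-This works because the ^[...](inflect: true) markup is only interpreted when the text is treated as a LocalizedStringKey; a plain String is rendered verbatim. The call site then becomes (using the same count state as in the question):
-Text(generateString(count: count))
-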
-",Plural
-"I`m trying to make a .service on wsl to autostart a QuestDB container on start and restart it when needed. For some reason I get error 125 when trying to start the service but it works perfectly fine if I input the command giving me the error directly.
-Used sudo nano systemctl /etc/systemd/system/podman-QuestDBTeste.service to create the file and the content is as follows:
-[Unit]
-Description=QuestDB Container
-After=network.target
-
-[Service]
-Type=simple
-Restart=always
-RestartSec=10
-ExecStartPre=/bin/bash -c 'fuser -k 9000/tcp 8812/tcp 9009/tcp || true'
-ExecStart=/usr/bin/podman start -a QuestDBTeste > /home/gabriel/podman_log.txt
-ExecStop=podman stop -t 2 QuestDBTeste
-ExecReload=/usr/bin/podman restart QuestDBTeste
-
-[Install]
-WantedBy=multi-user.target
-
-
-When I run systemctl status podman-QuestDBTeste.service I get
-podman-QuestDBTeste.service - QuestDB Container
-     Loaded: loaded (/etc/systemd/system/podman-QuestDBTeste.service; enabled; vendor preset: enabled)
-     Active: activating (auto-restart) (Result: exit-code) since Mon 2024-05-27 22:39:16 -03; 9s ago
-    Process: 4114 ExecStartPre=/bin/bash -c fuser -k 9000/tcp 8812/tcp 9009/tcp || true (code=exited, status=0/SUCCESS)
-    Process: 4116 ExecStart=sudo /usr/bin/podman start -a QuestDBTeste > /home/gabriel/podman_log.txt (code=exited, status=125)
-   Main PID: 4116 (code=exited, status=125)
-
-But if I type directly podman start -a QuestDBTeste, it works perfectly fine and starts the container.
-I have no idea why; I was expecting the .service to behave the same as typing podman start -a QuestDBTeste myself.
-","1. Maybe avoid sudo and redirections in your service config so it behaves as close as possible to the local command line? Something like
-[Unit]
-Description=QuestDB Container
-After=network.target
-
-[Service]
-Type=simple
-Restart=always
-RestartSec=10
-ExecStartPre=/bin/bash -c 'fuser -k 9000/tcp 8812/tcp 9009/tcp || true'
-ExecStart=/usr/bin/podman start -a QuestDBTeste
-ExecStop=/usr/bin/podman stop -t 2 QuestDBTeste
-ExecReload=/usr/bin/podman restart QuestDBTeste
-User=gabriel
-Group=gabriel
-Environment=""PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin""
-StandardOutput=append:/home/gabriel/podman_log.txt
-StandardError=append:/home/gabriel/podman_error_log.txt
-
-[Install]
-WantedBy=multi-user.target
-
-Then
-sudo systemctl daemon-reload
-sudo systemctl restart podman-QuestDBTeste.service
-
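-As a side note, Podman can also generate a unit file for an existing container for you, which you can then adapt and install under /etc/systemd/system (a sketch; check podman generate systemd --help for the exact flags in your version):
-podman generate systemd --name QuestDBTeste --files --new
-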
-",Podman
-"In podman, use --replace to replace existing container
-According to podman-run — Podman documentation
-
---replace
-If another container with the same name already exists, replace and remove it. The default is false.
-
-What is the corresponding command in docker?
-I could not find it in docker run | Docker Docs.
-","1. There's no equivalent docker run option; the documentation you link to would show it if there was.
-It should be more or less equivalent to explicitly stop and delete the existing container
-docker stop ""$CONTAINER_NAME"" || true
-docker rm ""$CONTAINER_NAME"" || true
-docker run --name ""$CONTAINER_NAME"" ...
-
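-If you don't need a graceful stop, a shorter equivalent is to force-remove the container in one step before recreating it:
-docker rm -f ""$CONTAINER_NAME"" || true
-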
-",Podman
-"I'm trying to upgrade a multi-module Quarkus project from 2.2 to the latest 2.6.1.Final. The build (including quarkus:dev with -Psomeproject) works on 2.3.1.Final, but when I upgrade to 2.4.0.Final it fails with this error:
-Exception in thread ""main"" java.lang.RuntimeException: java.lang.NullPointerException: Cannot invoke ""io.quarkus.deployment.dev.DevModeContext$ModuleInfo.getMain()""
-because the return value of ""io.quarkus.deployment.dev.DevModeContext.getApplicationRoot()"" is null
-    at io.quarkus.deployment.dev.DevModeMain.start(DevModeMain.java:151)
-    at io.quarkus.deployment.dev.DevModeMain.main(DevModeMain.java:63)
-Caused by: java.lang.NullPointerException: Cannot invoke ""io.quarkus.deployment.dev.DevModeContext$ModuleInfo.getMain()"" because the return value of ""io.quarkus.deployment.dev.DevModeContext.getApplicationRoot()"" is null
-    at io.quarkus.deployment.dev.DevModeMain.start(DevModeMain.java:91)
-
-A regular build still works; it is quarkus:dev that fails. I simply can't see what's wrong here. What am I missing?
-As a next step I'll create a minimal project that reproduces the problem, but I would appreciate any pointers.
-The project is using Java 17 but the regular build does work and development mode also worked with the older platform.
-EDIT 2024-05-29: we never solved this and have been working without quarkus:dev, but with Quarkus 3.8.3 it works again. We haven't really changed the structure of the pom files; the Quarkus version is a property that gets updated when we switch to a new release. Whatever it was, it is nice to be back on track.
-","1. In order to run Quarkus you have to have the correct version of Quarkus itself, and it's helper build plugin.
-Ex. of configuration in gradle:
-plugins {
-  // keep the version the same as the quarkus universe bom
-  id 'io.quarkus' version '2.12.0.Final'
-}
-
-dependencies {
-   api platform(""io.quarkus:quarkus-universe-bom:2.12.0.Final"")
-   //... other
-}
-
-Make sure that your project dependencies are correct and that neither the Quarkus plugin version nor the Quarkus version is overridden. In particular, check that transitive dependencies do not pull in an older Quarkus version. For Maven:
-mvn dependency:tree
-// for gradle wrapper:
-./gradlew dependencies
-// for gradle:
-gradle dependencies
-
-
-2. Not exactly the same context but nevertheless it can be very useful for anyone having the same error message: If you migrate an application to Quarkus (e.g. from Wildfly) and have the same error, you may have forgotten to remove the following tag <packaging>war</packaging> from your pom.xml.
-",Quarkus
-"After migrating Quarkus 2.16 to 3 (and Hibernate 5.6 to 6) I got a bug in persisting data. Essentially my schema looks like this:
-@Entity
-class Person {
-    @Column(name = ""name"")
-    private String name;
-
-    @OneToMany(
-            mappedBy = ""owner"",
-            cascade = CascadeType.ALL,
-            orphanRemoval = true,
-            fetch = FetchType.EAGER
-    )
-    private Set<Pet> pets = new HashSet<>();
-}
-
-@Entity
-class Pet {
-    @Column(name = ""name"")
-    private String name;
-
-    @JoinColumn(name = ""fkidperson"",
-                referencedColumnName = ""id"")
-    @ManyToOne(optional = false)
-    private Person owner;
-
-    @OneToMany(mappedBy = ""pet"",
-               cascade = CascadeType.ALL,
-               orphanRemoval = true,
-               fetch = FetchType.EAGER)
-    private Set<Food> foods = new HashSet<>();
-}
-
-@Entity
-class Food {
-    @Column(name = ""name"")
-    private String name;
-
-    @JoinColumn(name = ""fkidpet"",
-                referencedColumnName = ""id"")
-    @ManyToOne(optional = false)
-    private Pet pet;
-}
-
-and I will try to update it with a POST request containing
-{
-    ""id"": 2,
-    ""name"": ""John"",
-    ""pets"": [
-        {
-            ""id"": null,
-            ""name"": ""Fuffy"",
-            ""foods"": []
-        }
-    ]
-}
-
-while on the backend I would like to overwrite the previous person using an entityManager.merge(person) but it fails with an
-org.hibernate.AssertionFailure: null identifier for collection of role (Pet.foods)
-
-whereas it worked before the migration.
-Any ideas? Thanks!
-","1. This problem could be because it is initializing the lists in the declaration.
-Try it:
-@Entity
-class Person {
-    @Column(name = ""name"")
-    private String name;
-
-    @OneToMany(
-            mappedBy = ""owner"",
-            cascade = CascadeType.ALL,
-            orphanRemoval = true,
-            fetch = FetchType.EAGER
-    )
-    private Set<Pet> pets;
-}
-
-@Entity
-class Pet {
-    @Column(name = ""name"")
-    private String name;
-
-    @JoinColumn(name = ""fkidperson"",
-                referencedColumnName = ""id"")
-    @ManyToOne(optional = false)
-    private Person owner;
-
-    @OneToMany(mappedBy = ""pet"",
-               cascade = CascadeType.ALL,
-               orphanRemoval = true,
-               fetch = FetchType.EAGER)
-    private Set<Food> foods;
-}
-
-
-2. I ran into the same problem and submitted an issue with Hibernate:
-HHH-18177
-It does seem to be caused by instantiating the collections. However, this worked just fine in Hibernate 5.x.
-I was also only able to recreate this with Quarkus and JTA transactions. Running with RESOURCE_LOCAL seems to work fine.
-",Quarkus
-"When running skaffold dev with a container that has a listener (in this case Express), the container hangs as expected.
-If I remove the listener with simple executable code, it keeps running it over and over.
-I tried to search in the documentation but all I can see is that Skaffold should re-run the code only if you make changes.
-I expected Skaffold would terminate the container after the run.
-Are the restarts a normal behavior? Should I only add to Skaffold services that have listeners?
-The following code results in seeing the ""test"" console.log every few seconds.
-skaffold.yaml
-apiVersion: skaffold/v2alpha3
-kind: Config
-deploy:
-  kubectl:
-    manifests:
-      - ./dpl/*
-build:
-  local:
-    push: false
-  artifacts:
-    - image: myuser/myworker
-      context: myworker
-      docker:
-        dockerfile: Dockerfile
-      sync:
-        manual:
-          - src: 'src/**/*.ts'
-            dest: .
-
-index.ts
-//import express from 'express'
-
-//const app = express()
-
-
-console.log('test')
-
-//app.listen(3000, () => {
-//  console.log('Listening on port 3000!')
-//})
-
-package.json
-{
-  ""name"": ""myworker"",
-  ""version"": ""1.0.0"",
-  ""description"": """",
-  ""main"": ""index.js"",
-  ""scripts"": {
-    ""start"": ""ts-node-dev --poll src/index.ts""
-  },
-  ""keywords"": [],
-  ""author"": """",
-  ""license"": ""ISC"",
-  ""dependencies"": {
-    ""@types/express"": ""^4.17.21"",
-    ""express"": ""^4.19.2"",
-    ""ts-node-dev"": ""^2.0.0"",
-    ""typescript"": ""^5.4.5""
-  }
-}
-
-Dockerfile
-FROM node:alpine
-
-WORKDIR /app
-COPY package.json .
-RUN npm install --omit=dev
-COPY . .
-
-CMD [""npm"", ""start""]
-
-","1. One reason that the container keeps on restarting is because of ts-node-dev in your package.json. ts-node-dev is a tool that automatically watches for changes in your TypeScript files and restarts the application whenever it detects a modification. There is also a possibility that when you run skaffolds dev, it enters a development loop that monitors your code for changes. While Skaffold itself might not be restarting the container due to the missing listener, ts-node-dev is triggering restarts independently.
-You can try this approach of replacing ts-node-dev. You can consider using npm run build to compile your TypeScript code before running the application. This might create a build that won't be restarted by file changes. Or you can consider disabling the automatic file watchingof ts-node-dev behaviour with --no-poll flag:
-""scripts"": {
-  ""start"": ""ts-node-dev --no-poll src/index.ts""
-}
-
-However, you'll need to manually rebuild and restart the container whenever you  make code changes.
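-For the build-ahead-of-time option, a minimal sketch of the package.json scripts might look like this (assuming tsc is configured to emit compiled JavaScript to dist/):
-""scripts"": {
-  ""build"": ""tsc"",
-  ""start"": ""node dist/index.js""
-}
-
-with a RUN npm run build step added to the Dockerfile before the CMD, so the container runs plain node instead of a watcher.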
-",Skaffold
-"I am a Docker noob and am trying to run the make dev-services script, declared in the skaffold.yml file (I exchanged image and sha names with xxx):
-- name: dev-services
-  build:
-    tagPolicy:
-      inputDigest: {}
-    local:
-      push: false
-      useBuildkit: true
-    artifacts:
-    - image: gcr.io/xxx/service-base
-      context: .
-    - image: gcr.io/xxx/api
-      context: server/api/
-      requires:
-        - image: gcr.io/xxx/service-base
-          alias: service_base
-    - image: gcr.io/xxx/media
-      context: server/media/app
-      requires:
-        - image: gcr.io/xxx/service-base
-          alias: service_base
-  deploy:
-    kustomize:
-      paths:
-        - ./k8s/local
-        - ./server/api/k8s/development
-        - ./server/media/k8s/development
-
-
-When I run it, I get this error:
-Building [gcr.io/xxx/media]...
-[+] Building 2.8s (4/4) FINISHED                                                                                                                                                                            
- => [internal] load build definition from Dockerfile                                                                                                                                                   0.0s
- => => transferring dockerfile: 37B                                                                                                                                                                    0.0s
- => [internal] load .dockerignore                                                                                                                                                                      0.0s
- => => transferring context: 2B                                                                                                                                                                        0.0s
- => [internal] load metadata for docker.io/library/alpine:3.14                                                                                                                                         1.2s
- => ERROR [internal] load metadata for gcr.io/xxx/service-base:xxx                                                     2.6s
-------
- > [internal] load metadata for gcr.io/xxx/service-base:xxx:
-------
-failed to solve with frontend dockerfile.v0: failed to create LLB definition: unexpected status code [manifests xxx]: 401 Unauthorized
-Building [gcr.io/xxx/api]...
-Canceled build for gcr.io/xxx/api
-exit status 1. Docker build ran into internal error. Please retry.
-If this keeps happening, please open an issue..
-make: *** [dev-services] Error 1
-
-
-Anyone know what might be the problem here?
-Might it be the google container registry?
-I'm using Minikube. Is there a Minikube or Docker registry that I could try? If so, what would I need to change in the skaffold.yaml file?
-Thanks a lot in advance :)
-","1. running this command rm  ~/.docker/config.json before the build worked for me.
-
-2. for anyone else coming here from windows OS  in your docker desktop settings, uncheck the Use Docker Compose V2 this worked for me, i uncheck it works, i checked to try again and make sure that was the issue and yes it was the issue didn't work , until i uncheck again
-
-3. I got some clues from this thread; however, I just want to be precise about the steps you must pay attention to.
-
-Open Docker Desktop and make sure your screen looks like this:
-Click Apply & Restart - here is the trick: it doesn't actually restart, so follow the next step.
-Right-click the Docker app icon in the taskbar, click Restart, and wait.
-Now retry the ps script; hopefully this will work for you.
-
-",Skaffold
-"When I'm running following code:
-minikube addons enable ingress
-
-I'm getting following error:
-▪ Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
-    ▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
-    ▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
-🔎  Verifying ingress addon...
-
-❌  Exiting due to MK_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml: Process exited with status 1
-stdout:
-namespace/ingress-nginx unchanged
-configmap/ingress-nginx-controller unchanged
-configmap/tcp-services unchanged
-configmap/udp-services unchanged
-serviceaccount/ingress-nginx unchanged
-clusterrole.rbac.authorization.k8s.io/ingress-nginx unchanged
-clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
-role.rbac.authorization.k8s.io/ingress-nginx unchanged
-rolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
-serviceaccount/ingress-nginx-admission unchanged
-clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
-clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
-role.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
-rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
-service/ingress-nginx-controller-admission unchanged
-service/ingress-nginx-controller unchanged
-validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission configured
-
-stderr:
-Error from server (Invalid): error when applying patch:
-{""metadata"":{""annotations"":{""kubectl.kubernetes.io/last-applied-configuration"":""{\""apiVersion\"":\""apps/v1\"",\""kind\"":\""Deployment\"",\""metadata\"":{\""annotations\"":{},\""labels\"":{\""addonmanager.kubernetes.io/mode\"":\""Reconcile\"",\""app.kubernetes.io/component\"":\""controller\"",\""app.kubernetes.io/instance\"":\""ingress-nginx\"",\""app.kubernetes.io/name\"":\""ingress-nginx\""},\""name\"":\""ingress-nginx-controller\"",\""namespace\"":\""ingress-nginx\""},\""spec\"":{\""minReadySeconds\"":0,\""revisionHistoryLimit\"":10,\""selector\"":{\""matchLabels\"":{\""addonmanager.kubernetes.io/mode\"":\""Reconcile\"",\""app.kubernetes.io/component\"":\""controller\"",\""app.kubernetes.io/instance\"":\""ingress-nginx\"",\""app.kubernetes.io/name\"":\""ingress-nginx\""}},\""strategy\"":{\""rollingUpdate\"":{\""maxUnavailable\"":1},\""type\"":\""RollingUpdate\""},\""template\"":{\""metadata\"":{\""labels\"":{\""addonmanager.kubernetes.io/mode\"":\""Reconcile\"",\""app.kubernetes.io/component\"":\""controller\"",\""app.kubernetes.io/instance\"":\""ingress-nginx\"",\""app.kubernetes.io/name\"":\""ingress-nginx\"",\""gcp-auth-skip-secret\"":\""true\""}},\""spec\"":{\""containers\"":[{\""args\"":[\""/nginx-ingress-controller\"",\""--ingress-class=nginx\"",\""--configmap=$(POD_NAMESPACE)/ingress-nginx-controller\"",\""--report-node-internal-ip-address\"",\""--tcp-services-configmap=$(POD_NAMESPACE)/tcp-services\"",\""--udp-services-configmap=$(POD_NAMESPACE)/udp-services\"",\""--validating-webhook=:8443\"",\""--validating-webhook-certificate=/usr/local/certificates/cert\"",\""--validating-webhook-key=/usr/local/certificates/key\""],\""env\"":[{\""name\"":\""POD_NAME\"",\""valueFrom\"":{\""fieldRef\"":{\""fieldPath\"":\""metadata.name\""}}},{\""name\"":\""POD_NAMESPACE\"",\""valueFrom\"":{\""fieldRef\"":{\""fieldPath\"":\""metadata.namespace\""}}},{\""name\"":\""LD_PRELOAD\"",\""value\"":\""/usr/local/lib/libmimalloc.so\""}],\""image\"":\""k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a\"",\""imagePullPolicy\"":\""IfNotPresent\"",\""lifecycle\"":{\""preStop\"":{\""exec\"":{\""command\"":[\""/wait-shutdown\""]}}},\""livenessProbe\"":{\""failureThreshold\"":5,\""httpGet\"":{\""path\"":\""/healthz\"",\""port\"":10254,\""scheme\"":\""HTTP\""},\""initialDelaySeconds\"":10,\""periodSeconds\"":10,\""successThreshold\"":1,\""timeoutSeconds\"":1},\""name\"":\""controller\"",\""ports\"":[{\""containerPort\"":80,\""hostPort\"":80,\""name\"":\""http\"",\""protocol\"":\""TCP\""},{\""containerPort\"":443,\""hostPort\"":443,\""name\"":\""https\"",\""protocol\"":\""TCP\""},{\""containerPort\"":8443,\""name\"":\""webhook\"",\""protocol\"":\""TCP\""}],\""readinessProbe\"":{\""failureThreshold\"":3,\""httpGet\"":{\""path\"":\""/healthz\"",\""port\"":10254,\""scheme\"":\""HTTP\""},\""initialDelaySeconds\"":10,\""periodSeconds\"":10,\""successThreshold\"":1,\""timeoutSeconds\"":1},\""resources\"":{\""requests\"":{\""cpu\"":\""100m\"",\""memory\"":\""90Mi\""}},\""securityContext\"":{\""allowPrivilegeEscalation\"":true,\""capabilities\"":{\""add\"":[\""NET_BIND_SERVICE\""],\""drop\"":[\""ALL\""]},\""runAsUser\"":101},\""volumeMounts\"":[{\""mountPath\"":\""/usr/local/certificates/\"",\""name\"":\""webhook-cert\"",\""readOnly\"":true}]}],\""dnsPolicy\"":\""ClusterFirst\"",\""serviceAccountName\"":\""ingress-nginx\"",\""volumes\"":[{\""name\"":\""webhook-cert\"",\""secret\"":{\""secretName\"":\""ingress-nginx-admission\""}}]}}}}\n""},""label
s"":{""addonmanager.kubernetes.io/mode"":""Reconcile"",""app.kubernetes.io/managed-by"":null,""app.kubernetes.io/version"":null,""helm.sh/chart"":null}},""spec"":{""minReadySeconds"":0,""selector"":{""matchLabels"":{""addonmanager.kubernetes.io/mode"":""Reconcile""}},""strategy"":{""$retainKeys"":[""rollingUpdate"",""type""],""rollingUpdate"":{""maxUnavailable"":1}},""template"":{""metadata"":{""labels"":{""addonmanager.kubernetes.io/mode"":""Reconcile"",""gcp-auth-skip-secret"":""true""}},""spec"":{""$setElementOrder/containers"":[{""name"":""controller""}],""containers"":[{""$setElementOrder/ports"":[{""containerPort"":80},{""containerPort"":443},{""containerPort"":8443}],""args"":[""/nginx-ingress-controller"",""--ingress-class=nginx"",""--configmap=$(POD_NAMESPACE)/ingress-nginx-controller"",""--report-node-internal-ip-address"",""--tcp-services-configmap=$(POD_NAMESPACE)/tcp-services"",""--udp-services-configmap=$(POD_NAMESPACE)/udp-services"",""--validating-webhook=:8443"",""--validating-webhook-certificate=/usr/local/certificates/cert"",""--validating-webhook-key=/usr/local/certificates/key""],""image"":""k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a"",""name"":""controller"",""ports"":[{""containerPort"":80,""hostPort"":80},{""containerPort"":443,""hostPort"":443}]}],""nodeSelector"":null,""terminationGracePeriodSeconds"":null}}}}
-to:
-Resource: ""apps/v1, Resource=deployments"", GroupVersionKind: ""apps/v1, Kind=Deployment""
-Name: ""ingress-nginx-controller"", Namespace: ""ingress-nginx""
-for: ""/etc/kubernetes/addons/ingress-dp.yaml"": Deployment.apps ""ingress-nginx-controller"" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{""addonmanager.kubernetes.io/mode"":""Reconcile"", ""app.kubernetes.io/component"":""controller"", ""app.kubernetes.io/instance"":""ingress-nginx"", ""app.kubernetes.io/name"":""ingress-nginx""}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
-Error from server (Invalid): error when applying patch:
-{""metadata"":{""annotations"":{""helm.sh/hook"":null,""helm.sh/hook-delete-policy"":null,""kubectl.kubernetes.io/last-applied-configuration"":""{\""apiVersion\"":\""batch/v1\"",\""kind\"":\""Job\"",\""metadata\"":{\""annotations\"":{},\""labels\"":{\""addonmanager.kubernetes.io/mode\"":\""Reconcile\"",\""app.kubernetes.io/component\"":\""admission-webhook\"",\""app.kubernetes.io/instance\"":\""ingress-nginx\"",\""app.kubernetes.io/name\"":\""ingress-nginx\""},\""name\"":\""ingress-nginx-admission-create\"",\""namespace\"":\""ingress-nginx\""},\""spec\"":{\""template\"":{\""metadata\"":{\""labels\"":{\""addonmanager.kubernetes.io/mode\"":\""Reconcile\"",\""app.kubernetes.io/component\"":\""admission-webhook\"",\""app.kubernetes.io/instance\"":\""ingress-nginx\"",\""app.kubernetes.io/name\"":\""ingress-nginx\""},\""name\"":\""ingress-nginx-admission-create\""},\""spec\"":{\""containers\"":[{\""args\"":[\""create\"",\""--host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc\"",\""--namespace=$(POD_NAMESPACE)\"",\""--secret-name=ingress-nginx-admission\""],\""env\"":[{\""name\"":\""POD_NAMESPACE\"",\""valueFrom\"":{\""fieldRef\"":{\""fieldPath\"":\""metadata.namespace\""}}}],\""image\"":\""docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\"",\""imagePullPolicy\"":\""IfNotPresent\"",\""name\"":\""create\""}],\""restartPolicy\"":\""OnFailure\"",\""securityContext\"":{\""runAsNonRoot\"":true,\""runAsUser\"":2000},\""serviceAccountName\"":\""ingress-nginx-admission\""}}}}\n""},""labels"":{""addonmanager.kubernetes.io/mode"":""Reconcile"",""app.kubernetes.io/managed-by"":null,""app.kubernetes.io/version"":null,""helm.sh/chart"":null}},""spec"":{""template"":{""metadata"":{""labels"":{""addonmanager.kubernetes.io/mode"":""Reconcile"",""app.kubernetes.io/managed-by"":null,""app.kubernetes.io/version"":null,""helm.sh/chart"":null}},""spec"":{""$setElementOrder/containers"":[{""name"":""create""}],""containers"":[{""image"":""docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7"",""name"":""create""}]}}}}
-to:
-Resource: ""batch/v1, Resource=jobs"", GroupVersionKind: ""batch/v1, Kind=Job""
-Name: ""ingress-nginx-admission-create"", Namespace: ""ingress-nginx""
-for: ""/etc/kubernetes/addons/ingress-dp.yaml"": Job.batch ""ingress-nginx-admission-create"" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:""ingress-nginx-admission-create"", GenerateName:"""", Namespace:"""", SelfLink:"""", UID:"""", ResourceVersion:"""", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{""addonmanager.kubernetes.io/mode"":""Reconcile"", ""app.kubernetes.io/component"":""admission-webhook"", ""app.kubernetes.io/instance"":""ingress-nginx"", ""app.kubernetes.io/name"":""ingress-nginx"", ""controller-uid"":""d33a74a3-101c-4e82-a2b7-45b46068f189"", ""job-name"":""ingress-nginx-admission-create""}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"""", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume(nil), InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:""create"", Image:""docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7"", Command:[]string(nil), Args:[]string{""create"", ""--host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc"", ""--namespace=$(POD_NAMESPACE)"", ""--secret-name=ingress-nginx-admission""}, WorkingDir:"""", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar{core.EnvVar{Name:""POD_NAMESPACE"", Value:"""", ValueFrom:(*core.EnvVarSource)(0xc00a79dea0)}}, Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount(nil), VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), StartupProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:""/dev/termination-log"", TerminationMessagePolicy:""File"", ImagePullPolicy:""IfNotPresent"", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]core.EphemeralContainer(nil), RestartPolicy:""OnFailure"", TerminationGracePeriodSeconds:(*int64)(0xc003184dc0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:""ClusterFirst"", NodeSelector:map[string]string(nil), ServiceAccountName:""ingress-nginx-admission"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"""", SecurityContext:(*core.PodSecurityContext)(0xc010b3d980), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"""", Subdomain:"""", SetHostnameAsFQDN:(*bool)(nil), Affinity:(*core.Affinity)(nil), SchedulerName:""default-scheduler"", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"""", Priority:(*int32)(nil), PreemptionPolicy:(*core.PreemptionPolicy)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), Overhead:core.ResourceList(nil), EnableServiceLinks:(*bool)(nil), TopologySpreadConstraints:[]core.TopologySpreadConstraint(nil)}}: field is immutable
-Error from server (Invalid): error when applying patch:
-{""metadata"":{""annotations"":{""helm.sh/hook"":null,""helm.sh/hook-delete-policy"":null,""kubectl.kubernetes.io/last-applied-configuration"":""{\""apiVersion\"":\""batch/v1\"",\""kind\"":\""Job\"",\""metadata\"":{\""annotations\"":{},\""labels\"":{\""addonmanager.kubernetes.io/mode\"":\""Reconcile\"",\""app.kubernetes.io/component\"":\""admission-webhook\"",\""app.kubernetes.io/instance\"":\""ingress-nginx\"",\""app.kubernetes.io/name\"":\""ingress-nginx\""},\""name\"":\""ingress-nginx-admission-patch\"",\""namespace\"":\""ingress-nginx\""},\""spec\"":{\""template\"":{\""metadata\"":{\""labels\"":{\""addonmanager.kubernetes.io/mode\"":\""Reconcile\"",\""app.kubernetes.io/component\"":\""admission-webhook\"",\""app.kubernetes.io/instance\"":\""ingress-nginx\"",\""app.kubernetes.io/name\"":\""ingress-nginx\""},\""name\"":\""ingress-nginx-admission-patch\""},\""spec\"":{\""containers\"":[{\""args\"":[\""patch\"",\""--webhook-name=ingress-nginx-admission\"",\""--namespace=$(POD_NAMESPACE)\"",\""--patch-mutating=false\"",\""--secret-name=ingress-nginx-admission\"",\""--patch-failure-policy=Fail\""],\""env\"":[{\""name\"":\""POD_NAMESPACE\"",\""valueFrom\"":{\""fieldRef\"":{\""fieldPath\"":\""metadata.namespace\""}}}],\""image\"":\""docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\"",\""imagePullPolicy\"":\""IfNotPresent\"",\""name\"":\""patch\""}],\""restartPolicy\"":\""OnFailure\"",\""securityContext\"":{\""runAsNonRoot\"":true,\""runAsUser\"":2000},\""serviceAccountName\"":\""ingress-nginx-admission\""}}}}\n""},""labels"":{""addonmanager.kubernetes.io/mode"":""Reconcile"",""app.kubernetes.io/managed-by"":null,""app.kubernetes.io/version"":null,""helm.sh/chart"":null}},""spec"":{""template"":{""metadata"":{""labels"":{""addonmanager.kubernetes.io/mode"":""Reconcile"",""app.kubernetes.io/managed-by"":null,""app.kubernetes.io/version"":null,""helm.sh/chart"":null}},""spec"":{""$setElementOrder/containers"":[{""name"":""patch""}],""containers"":[{""image"":""docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7"",""name"":""patch""}]}}}}
-to:
-Resource: ""batch/v1, Resource=jobs"", GroupVersionKind: ""batch/v1, Kind=Job""
-Name: ""ingress-nginx-admission-patch"", Namespace: ""ingress-nginx""
-for: ""/etc/kubernetes/addons/ingress-dp.yaml"": Job.batch ""ingress-nginx-admission-patch"" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:""ingress-nginx-admission-patch"", GenerateName:"""", Namespace:"""", SelfLink:"""", UID:"""", ResourceVersion:"""", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{""addonmanager.kubernetes.io/mode"":""Reconcile"", ""app.kubernetes.io/component"":""admission-webhook"", ""app.kubernetes.io/instance"":""ingress-nginx"", ""app.kubernetes.io/name"":""ingress-nginx"", ""controller-uid"":""ef303f40-b52d-49c5-ab80-8330379fed36"", ""job-name"":""ingress-nginx-admission-patch""}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"""", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume(nil), InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:""patch"", Image:""docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7"", Command:[]string(nil), Args:[]string{""patch"", ""--webhook-name=ingress-nginx-admission"", ""--namespace=$(POD_NAMESPACE)"", ""--patch-mutating=false"", ""--secret-name=ingress-nginx-admission"", ""--patch-failure-policy=Fail""}, WorkingDir:"""", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar{core.EnvVar{Name:""POD_NAMESPACE"", Value:"""", ValueFrom:(*core.EnvVarSource)(0xc00fd798a0)}}, Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount(nil), VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), StartupProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:""/dev/termination-log"", TerminationMessagePolicy:""File"", ImagePullPolicy:""IfNotPresent"", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]core.EphemeralContainer(nil), RestartPolicy:""OnFailure"", TerminationGracePeriodSeconds:(*int64)(0xc00573d190), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:""ClusterFirst"", NodeSelector:map[string]string(nil), ServiceAccountName:""ingress-nginx-admission"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"""", SecurityContext:(*core.PodSecurityContext)(0xc00d7d9100), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"""", Subdomain:"""", SetHostnameAsFQDN:(*bool)(nil), Affinity:(*core.Affinity)(nil), SchedulerName:""default-scheduler"", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"""", Priority:(*int32)(nil), PreemptionPolicy:(*core.PreemptionPolicy)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), Overhead:core.ResourceList(nil), EnableServiceLinks:(*bool)(nil), TopologySpreadConstraints:[]core.TopologySpreadConstraint(nil)}}: field is immutable
-]
-
-😿  If the above advice does not help, please let us know: 
-👉  https://github.com/kubernetes/minikube/issues/new/choose
-
-I had some issue on my PC, so I reinstalled minikube. After this, minikube start worked fine, but when I enable ingress the above error shows up.
-And when I run skaffold dev, the following error shows up:
-Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
- - Error from server (InternalError): error when creating ""STDIN"": Internal error occurred: failed calling webhook ""validate.nginx.ingress.kubernetes.io"": an error on the server ("""") has prevented the request from succeeding
-exiting dev mode because first deploy failed: kubectl apply: exit status 1
-
-
-","1. As @Brian de Alwis pointed out in the comments section, this PR #11189 should resolve the above issue.
-You can try the v1.20.0-beta.0 release with this fix. Additionally, a stable v1.20.0 version is now available.
-
-2. The following commands solve the issue on macOs:
-minikube delete --all
-minikube start --vm=true
-minikube addons enable ingress
-
-
-3. Try increasing the resources given to minikube:
-minikube delete --all
-minikube start --cpus 2 --memory 4096
-
-
-
-minikube addons enable ingress
-
-",Skaffold
-"I'm experiencing an issue with squash commits in Azure DevOps pull requests. Occasionally, when I complete a pull request using the squash commit option, not all changes/changed files seem to be included in the squash. This results in some changes appearing to be missing from the final commit.
-Has anyone else encountered this issue, and if so, do you know why this happens or how to ensure all files are included in the squash?
-Here are some details about my setup:
-I’m using Azure DevOps for version control.
-The issue doesn't occur consistently; sometimes all changed files are included, other times they are not.
-I've checked for any error messages or logs but haven't found anything unusual.
-Any insights or solutions would be greatly appreciated!
-
-In the end it took only one commit
-
-","1. Please make sure you are comparing the pull request with the corresponding squash merge commit.
-For example, I have a pull request ID 61 from bug to main branch.
-
-After I complete the Pull request with Squash Commit, the commit is in the main branch history with pull request ID 61.
-
-Open the commit; you can find the pull request ID on the page. Please compare the files there.
-
-In your second screenshot, the pull request ID is not there. It seems the commit is not from a squash merge.
-",Squash
-"Django documentation says we could delete migrations after squashing them:
-
-You should commit this migration but leave the old ones in place; the
-  new migration will be used for new installs. Once you are sure all
-  instances of the code base have applied the migrations you squashed,
-  you can delete them.
-
-Here, does deleting mean deleting only the migration files, or the entries in the django_migrations table as well?
-Here is some background: I have only the development machine, so just one code base. After squashing some of the migrations that I had already applied, I deleted the files and the database entries. I tested whether this was OK by running makemigrations, and it did not find anything. So everything looked good. The next day, I had to change something and made a migration. When I tried to migrate, it tried to apply the squashed migration too (which had already been applied part by part before being squashed). So I had to go back and recreate the entries in the django_migrations table. It seems like I had to keep the database entries. I am trying to make sure before I mess up anything again, and to understand why it looked fine at first and then tried to apply the squashed migration.
-","1. Squashed migrations are never marked as applied, which will be fixed in 1.8.3 (see #24628). 
-The steps to remove the old migrations are:
-
-Make sure all replaced migrations are applied (or none of them). 
-Remove the old migration files, remove the replaces attribute from the squashed migrations.
-(Workaround) Run ./manage.py migrate <app_label> <squashed_migration> --fake.
-
-The last step won't be necessary when 1.8.3 arrives. 
-
-2. Converting squashed migrations has gotten easier since the question was posted. I posted a small sample project that shows how to squash migrations with circular dependencies, and it also shows how to convert the squashed migration into a regular migration after all the installations have migrated past the squash point.
-As the Django documentation says:
-
-You must then transition the squashed migration to a normal migration by:
-
-Deleting all the migration files it replaces.
-Updating all migrations that depend on the deleted migrations to depend on the squashed migration instead.
-Removing the replaces attribute in the Migration class of the squashed migration (this is how Django tells that it is a squashed migration).
-
-
-
-3. I'm no expert by any means, but I just squashed my migrations, and ended up doing the following:
-Ran this query to remove the old (squashed) migrations:
-DELETE FROM south_migrationhistory;
-
-Run this management command to remove the ghosted migrations
-./manage.py migrate --fake --delete-ghost-migrations 
-
-Django 1.7 also has squashmigrations
-",Squash
-"This gives a good explanation of squashing multiple commits:
-http://git-scm.com/book/en/Git-Branching-Rebasing
-but it does not work for commits that have already been pushed. How do I squash the most recent few commits both in my local and remote repos?
-When I do git rebase -i origin/master~4 master, keep the first one as pick, set the other three as squash, and then exit (via c-x c-c in emacs), I get:
-$ git rebase -i origin/master~4 master
-# Not currently on any branch.
-nothing to commit (working directory clean)
-
-Could not apply 2f40e2c... Revert ""issue 4427: bpf device permission change option added""
-$ git rebase -i origin/master~4 master
-Interactive rebase already started
-
-where 2f40 is the pick commit. And now none of the 4 commits appear in git log. I expected my editor to be restarted so that I could enter a commit message. What am I doing wrong?
-","1. Squash commits locally with:
-git rebase -i origin/master~4 master
-
-where ~4 means the last 4 commits.
-This will open your default editor. Here, replace pick with squash in the second, third, and fourth lines (since you are interested in the last 4 commits). The first line (which corresponds to the oldest of the 4 commits) should be left as pick. Save this file.
-Afterwards, your editor will open again, showing the messages of each commit. Comment the ones you are not interested in (in other words, leave the commit message that will correspond to this squashing uncommented). Save the file and close it.
-You will then need to force push:
-git push origin +master
-
-
-Difference between --force and +
-From the documentation of git push:
-
-Note that --force applies to all the refs that are pushed, hence using
-it with push.default set to matching or with multiple push
-destinations configured with remote.*.push may overwrite refs other
-than the current branch (including local refs that are strictly behind
-their remote counterpart). To force a push to only one branch, use a +
-in front of the refspec to push (e.g git push origin +master to force
-a push to the master branch).
-
-
-2. On a branch I was able to do it like this (for the last 4 commits)
-git checkout my_branch
-git reset --soft HEAD~4
-git commit
-git push --force origin my_branch
-
-
-3. Minor difference to accepted answer, but I was having a lot of difficulty squashing and finally got it.
-$ git rebase -i HEAD~4
-
-
-At the interactive screen that opens up, replace pick with squash
-at the top for all the commits that you want to squash.
-Save and close the editor
-
-Push to the remote using:
-$ git push origin branch-name --force
-
-",Squash
-"I am using this command to intercep the service:
-telepresence intercept chat-server-service --port 8002:8002 --env-file ./env
-
-How do I terminate the intercept safely? I checked the intercept command:
-telepresence intercept --help
-
-but did not find such a command. I also tried to kill the process, but could not figure out which process I should kill:
-> ps aux|grep intercept
-xiaoqiangjiang   30747   0.0  0.0 408626880   1312 s009  S+    1:48PM   0:00.00 grep --color=auto --exclude-dir=.bzr --exclude-dir=CVS --exclude-dir=.git --exclude-dir=.hg --exclude-dir=.svn --exclude-dir=.idea --exclude-dir=.tox intercept
-> ps aux|grep telepresence
-xiaoqiangjiang   78916   0.3  0.3 413777600  88416   ??  S    Tue12PM   9:46.96 /opt/homebrew/bin/telepresence connector-foreground
-root             78937   0.1  0.2 413599888  81328   ??  S    Tue12PM   6:10.95 /opt/homebrew/bin/telepresence daemon-foreground /Users/xiaoqiangjiang/Library/Logs/telepresence /Users/xiaoqiangjiang/Library/Application Support/telepresence
-xiaoqiangjiang   49214   0.0  0.0 408627424    336 s008  S+   11:07PM   0:00.01 bash ./macbook-backup-full-reddwarf-dolphin-telepresence.sh
-root             78936   0.0  0.0 408646448   2848   ??  S    Tue12PM   0:00.01 sudo --non-interactive /opt/homebrew/bin/telepresence daemon-foreground /Users/xiaoqiangjiang/Library/Logs/telepresence /Users/xiaoqiangjiang/Library/Application Support/telepresence
-xiaoqiangjiang   30771   0.0  0.0 408626880   1312 s009  S+    1:48PM   0:00.00 grep --color=auto --exclude-dir=.bzr --exclude-dir=CVS --exclude-dir=.git --exclude-dir=.hg --exclude-dir=.svn --exclude-dir=.idea --exclude-dir=.tox telepresence
-
-What should I do to terminate the telepresence intercept safely? Today I tried the leave command with the current service infra-server-service:
-telepresence leave infra-server-service
-
-but the server side still outputs this log:
- traffic-agent 2024-05-15 14:58:20.0369 info    envoy-server : Management server listening on 18000                                                                                                                      │
-│ traffic-agent                                                                                                                                                                                                           │
-│ traffic-agent 2024-05-15 14:58:20.0377 info    envoy : started command [""envoy"" ""-l"" ""warning"" ""-c"" ""bootstrap-ads.pb"" ""--base-id"" ""1""] : dexec.pid=""13""                                                                │
-│ traffic-agent 2024-05-15 14:58:20.0377 info    envoy :  : dexec.pid=""13"" dexec.stream=""stdin"" dexec.err=""EOF""                                                                                                           │
-│ traffic-agent 2024-05-15 14:58:20.0926 info    sidecar : probing for HTTP/2 support using fallback false... : base-url=""http://127.0.0.1:8081""                                                                          │
-│ traffic-agent 2024-05-15 14:58:20.0932 info    sidecar : HTTP/2 support = false : base-url=""http://127.0.0.1:8081""                                                                                                      │
-│ traffic-agent 2024-05-15 14:58:20.0969 info    sidecar : Connected to Manager 2.19.4                                                                                                                                    │
-│ traffic-agent 2024-05-15 14:58:20.1107 info    sidecar : LoadConfig: asking traffic-manager for license information...                                                                                                  │
-│ traffic-agent 2024-05-15 14:58:20.1112 error   sidecar : error(s) getting license: license not found                                                                                                                    │
-│ traffic-agent 2024-05-15 14:58:20.1113 info    sidecar : LoadConfig: asking traffic-manager for Ambassador Cloud configuration...                                                                                       │
-│ traffic-agent 2024-05-15 14:58:20.1118 info    sidecar : trying to connect to ambassador cloud at app.getambassador.io:443                                                                                              │
-│ traffic-agent 2024-05-15 14:58:20.3256 info    sidecar : connected to ambassador cloud at app.getambassador.io:443                                                                                                      │
-│ traffic-agent 2024-05-15 15:02:43.4839 error   sidecar/handleIntercept : !! CLI tcp 127.0.0.1:51726 -> 127.0.0.1:8081, read from grpc.ClientStream failed: rpc error: code = DeadlineExceeded desc = timeout while esta │
-│ traffic-agent 2024-05-15 15:02:53.4773 error   sidecar/handleIntercept : !! CLI tcp 127.0.0.1:34598 -> 127.0.0.1:8081, read from grpc.ClientStream failed: rpc error: code = DeadlineExceeded desc = timeout while esta │
-│ traffic-agent 2024-05-15 15:03:03.6010 error   sidecar/handleIntercept : !! CLI tcp 127.0.0.1:50470 -> 127.0.0.1:8081, read from grpc.ClientStream failed: rpc error: code = DeadlineExceeded desc = timeout while esta │
-│ traffic-agent 2024-05-15 15:03:23.4768 error   sidecar/handleIntercept : !! CLI tcp 127.0.0.1:53092 -> 127.0.0.1:8081, read from grpc.ClientStream failed: rpc error: code = DeadlineExceeded desc = timeout while esta │
-│ traffic-agent 2024-05-15 15:03:33.4797 error   sidecar/handleIntercept : !! CLI tcp 127.0.0.1:47944 -> 127.0.0.1:8081, read from grpc.ClientStream failed: rpc error: code = DeadlineExceeded desc = timeout while esta │
-│ traffic-agent 2024-05-15 15:03:43.4821 error   sidecar/handleIntercept : !! CLI tcp 127.0.0.1:37614 -> 127.0.0.1:8081, read from grpc.ClientStream failed: rpc error: code = DeadlineExceeded desc = timeout while esta │
-│ traffic-agent 2024-05-15 15:06:43.4788 error   sidecar/handleIntercept : !! CLI tcp 127.0.0.1:51130 -> 127.0.0.1:8081, read from grpc.ClientStream failed: rpc error: code = DeadlineExceeded desc = timeout while esta │
-│ traffic-agent 2024-05-15 15:07:03.4807 error   sidecar/handleIntercept : !! CLI tcp 127.0.0.1:32854 -> 127.0.0.1:8081, read from grpc.ClientStream failed: rpc error: code = DeadlineExceeded desc = timeout while esta │
-│ Stream closed EOF for reddwarf-pro/infra-server-service-6fd6ddb9cd-ws8pc (tel-agent-init)                                                                                                                               │
-│ Stream closed EOF for reddwarf-pro/infra-server-service-6fd6ddb9cd-ws8pc (infra-server-service)
-
-this is the command I am using right now:
-telepresence intercept infra-server-service --mount=false --port 8081:8081 --env-file ./env
-
-this is the telepresence list output:
-> telepresence list
-Connected to context kubernetes-admin@kubernetes (https://106.14.183.131:6443)
-admin-service        : ready to intercept (traffic-agent not yet installed)
-ai-web               : ready to intercept (traffic-agent not yet installed)
-alt-service          : ready to intercept (traffic-agent not yet installed)
-chat-server-service  : ready to intercept (traffic-agent not yet installed)
-cruise-web           : ready to intercept (traffic-agent not yet installed)
-cv-render-service    : ready to intercept (traffic-agent not yet installed)
-cv-service           : ready to intercept (traffic-agent not yet installed)
-cv-web               : ready to intercept (traffic-agent not yet installed)
-dolphin-dict-service : ready to intercept (traffic-agent not yet installed)
-dolphin-music-service: ready to intercept (traffic-agent not yet installed)
-dolphin-post-service : ready to intercept (traffic-agent not yet installed)
-fortune-service      : ready to intercept (traffic-agent not yet installed)
-infra-server-service : ready to intercept (traffic-agent already installed)
-official-website     : ready to intercept (traffic-agent not yet installed)
-pydolphin-service    : ready to intercept (traffic-agent not yet installed)
-react-admin          : ready to intercept (traffic-agent not yet installed)
-react-admin-new      : ready to intercept (traffic-agent not yet installed)
-rss-sync-service     : ready to intercept (traffic-agent not yet installed)
-snap-web             : ready to intercept (traffic-agent not yet installed)
-texhub-server-service: ready to intercept (traffic-agent not yet installed)
-texhub-web           : ready to intercept (traffic-agent not yet installed)
-time-capsule-service : ready to intercept (traffic-agent not yet installed)
-tool-web             : ready to intercept (traffic-agent not yet installed)
-y-websocket-service  : ready to intercept (traffic-agent not yet installed)
-
-","1. in order to terminate intercept you need to run
-telepresence  leave - (Remove existing intercept)
-for example:
-telepresence intercept notifications --port 8080
-telepresence leave notifications
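-To make sure everything is cleaned up afterwards, you can also check the state and stop the local daemons (list, status and quit are standard telepresence subcommands):
-telepresence list     # verify the intercept is gone
-telepresence status
-telepresence quit     # stop the local telepresence daemons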
-
-2. Yes, the telepresence intercept --help doesn't mention the telepresence leave command. But if you run telepresence --help, you'll see telepresence leave in the list.
-We try to match the --help output to the specified subcommand - so since there's no command like telepresence intercept leave, it's not going to show up under the telepresence intercept --help results. So whenever you don't see a command you are looking for, try running telepresence --help instead.
-",Telepresence
-"I´m currently working on ubuntu linux. I have already implemented minikube and have a config for kubernetes ready:
-apiVersion: v1
-clusters:
-- cluster:
-    certificate-authority: /home/<user>/.minikube/ca.crt
-    extensions:
-    - extension:
-        last-update: Wed, 11 Oct 2023 11:58:40 CEST
-        provider: minikube.sigs.k8s.io
-        version: v1.31.2
-      name: cluster_info
-    server: https://192.168.49.2:8443
-  name: minikube
-contexts:
-- context:
-    cluster: minikube
-    extensions:
-    - extension:
-        last-update: Wed, 11 Oct 2023 11:58:40 CEST
-        provider: minikube.sigs.k8s.io
-        version: v1.31.2
-      name: context_info
-    namespace: default
-    user: minikube
-  name: minikube
-current-context: minikube
-kind: Config
-preferences: {}
-users:
-- name: minikube
-  user:
-    client-certificate: /home/<user>/.minikube/profiles/minikube/client.crt
-    client-key: /home/<user>/.minikube/profiles/minikube/client.key
-
-I'm trying to connect with my kubernetes configuration with this command:
-telepresence --kubeconfig=/home/<user>/.kube/config connect 
-
-But then I get this:
-telepresence connect: error: connector.Connect: initial cluster check failed: Get ""https://192.168.49.2:8443/version"": URLBlockedUncategorized
-
-I don't know what exactly ""URLBlockedUncategorized"" means and I can't really find an explanation in relation to telepresence. Is it blocked by the corporate proxy? Do I have to edit some certificate in minikube? Is there a command to deactivate it?
-","1. After many trials and errors, it somehow worked once I wrote
-telepresence quit -s 
-
-and then
-telepresence connect 
-
-I don't believe this alone was the main solution to the problem, so here is what else I did before that:
-
-Added minikube ip address to the ""no_proxy"" and ""NO_PROXY"" environment variables.
-
-Once used this command:
-telepresence helm install --never-proxy=
-
-
-I also recently understood how the global configuration described here works: You need to create that yaml-file yourself and then do the following:
-telepresence helm install -f <path to>/<your-global-config.yaml>
-
-Don't copy paste the global-config one-to-one. You have to check the registry as well as add the minikube subnet below the ""neverProxySubnets"".
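-If a corporate proxy is involved, a minimal sketch of excluding the minikube address before connecting (minikube ip prints the cluster IP used in your kubeconfig) would be:
-export NO_PROXY=$NO_PROXY,$(minikube ip)
-export no_proxy=$no_proxy,$(minikube ip)
-telepresence connect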
-",Telepresence
-"I am using Telepresence to remote debugging the Kubernetes cluster, and I am log in cluster using command
-telepresence
-
-but when I want to install some software in the telepresence pod
-sudo apt-get install wget
-
-I do not know the password of the telepresence pod, so what should I do to install the software?
-","1. you could using this script to login pod as root:
-#!/usr/bin/env bash
-set -xe
-
-POD=$(kubectl describe pod ""$1"")
-NODE=$(echo ""$POD"" | grep -m1 Node | awk -F'/' '{print $2}')
-CONTAINER=$(echo ""$POD"" | grep -m1 'Container ID' | awk -F 'docker://' '{print $2}')
-
-CONTAINER_SHELL=${2:-bash}
-
-set +e
-
-ssh -t ""$NODE"" sudo docker exec --user 0 -it ""$CONTAINER"" ""$CONTAINER_SHELL""
-
-if [ ""$?"" -gt 0 ]; then
-  set +x
-  echo 'SSH into pod failed. If you see an error message similar to ""executable file not found in $PATH"", please try:'
-  echo ""$0 $1 sh""
-fi
-
-login like this:
-./login-k8s-pod.sh flink-taskmanager-54d85f57c7-wd2nb
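-
-If you cannot ssh to the node, a rough alternative sketch (assuming your cluster supports ephemeral containers; the pod and container names here are just examples) is kubectl debug, whose debug container usually runs as root:
-kubectl debug -it flink-taskmanager-54d85f57c7-wd2nb --image=ubuntu --target=flink-taskmanager -- bash
-# note: tools you install here go into the debug container's own filesystem;
-# --target only makes it share the target container's process namespace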
-
-",Telepresence
-"I am trying to install telepresence version 1 in ubuntu 22.04 but I don't have the download links to it.
-","1. Telepresence version 1 is no longer supported but you can install the new version here. Feel free also to join the Telepresence open source slack channel if you have further questions or want to join our weekly help session for further assistance.
-
-2. Not sure if this helps, but I grabbed a version of telepresence that was published for 20.04 and it seems to work for me. Here are the details of what I did:
-# install telepresence pre-requisites
-apt install torsocks sshfs conntrack
-# HACK - download the version of telepresence for Ubuntu 20.04 (focal)
-curl -L https://packagecloud.io/datawireio/telepresence/packages/ubuntu/focal/telepresence_0.109_amd64.deb/download.deb?distro_version_id=210 -o telepresence_0.109_amd64.deb
-# install telepresence
-dpkg -i telepresence_0.109_amd64.deb
-
-",Telepresence
-"The requested module '/node_modules/.vite/deps/react-tilt.js?t=1681239019006&v=59c21aef' does not provide an export named 'default
-I tried reinstalling react-tilt using npm but it didn't make any difference.
-","1. Try this
-import { Tilt } from 'react-tilt'
-
-2. Use react-parallax-tilt instead of react-tilt and remove --legacy-peer-deps from the command.
-Also, wherever you are using import Tilt from ""react-tilt"", replace it with the new library like this: import Tilt from ""react-parallax-tilt""
-
-3. Uninstall the package with:
-npm uninstall react-tilt
-Then reinstall it with:
-npm install react-tilt
-Then change the import to:
-import { Tilt } from ""react-tilt"";
-",Tilt
-"Link to my codepen: [Codepen](https://codepen.io/sunylkwc-the-selector/pen/xxBPRGx?editors=1100)
-When you hover over different parts of the webpage, weird glitches appear on the transparent cards: for example, when you hover over the third card, a faded line flickers on the first card, and the same thing happens when you hover over the read-more button of the second card.
-
-I tried switching my browser but it was still there.
-","1. tested in Edge v120.0.2210.144
-The issue stems from the combination of backdrop-filter: blur(0px), transform and box-shadow on the card elements.
-The tilt-glare effect you are using is affecting the shadows and the backdrop filter.
-There are a couple of fixes you can employ, though the main way is a bit of effort.
-
-Remove either the backdrop filter or the box-shadow
-Reduce the shadow size significantly to 5px 5px 5px
-Split the shadow into its own pseudo-element so that the filters can't interact with it
-
-The simplest and best-looking fix is to drop the shadows or the blur, from my experimenting with your codepen.
-The effects can be used together in current browser versions but not quite with the way you have built the page.
-Essentially what you are seeing is a flicker in the blurred area of the shadow emphasised by the effects on the other cards (I believe)
-",Tilt
-"I have a Logitech PTZ USB camera. I have prepared a video call functionality using WebRtc. Now what I need is to add Pan, Tilt and Zoom controls within the browser so user can controls the camera based on his need. 
-Is it possible to implement PTZ control using JavaScript/WebRtc or any other third-party JS?
-","1. Scott Hanselman wrote a PTZ controller for the Logitech cameras back in 2012: https://www.hanselman.com/blog/CloudControlledRemotePanTiltZoomCameraAPIForALogitechBCC950CameraWithAzureAndSignalR.aspx
-The APIs needed for that are not available to Javascript sadly.
-",Tilt
-"I've begun using tilt for Docker development and love it! I have containers that take a long time to build though. When I'm done using them, I don't want to run tilt down  (docs) because rebuilding them can take time. Also, running Ctrl-C doesn't stop the containers at all.
-Is there a way to disable the containers without killing them in tilt?
-","1. If you want to stop the containers without rebuilding them or killing them in Tilt, you can use the command tilt down --keep=containers. This will halt the Tilt process and keep the containers running, so you can resume your work without rebuilding them later. Just remember to use tilt up when you're ready to continue working with those containers.
-",Tilt
-"I tried to implement this effect, similar to this: https://www.reddit.com/r/Unity3D/comments/12kn33e/i_added_camera_tilt_based_on_the_movement_of_the/, using Lerp to smooth out the camera tilts, but was only able to get it working ""snappily"", ie. without lerp.
-How could I get an effect similar to the one on reddit?
-","1. Hard to answer since I have no code, but usually smoothness is achieved by increasing your camera tilt over time, until it reaches the maximum tilt value.
-Here's an abstract example of how to do that:
-currentTilt += Time.deltaTime * maximumTilt * 2;
-
-Try using it somewhere in your code, may work.
-If it doesn't - edit your question and add some of your code so I have more info on your issue.
-
-2. As others have posted you should really post what code you've tried, as per site rules.
-The effect you're looking for is probably provided by Vector3.SmoothDamp: https://docs.unity3d.com/ScriptReference/Vector3.SmoothDamp.html
-Use it to smooth out both the position and rotation (using Quaternion.Euler).
-",Tilt
-"I am currently learning  walrus := , and when I do this coding and add to the list and then print it, a list appears with all the items True.
-foods = []
-while food := input(""what  food do you like: "") != 'quit':
-    foods.append(food)
-print(foods)
-
-","1. The walrus operator assignment has lower precedence as compared to the relational operators. So saying:
-food := input(""what  food do you like: "") != 'quit':
-
-Evaluates as
-food = <result of (input(""what  food do you like: "") != 'quit')>
-
-Until the input is 'quit', that comparison always evaluates to True, so food is always True and foods ends up as a list of True values.
-You can try using:
-(food := input(""what  food do you like: "")) != 'quit':
-
-",Walrus
-"I am working on a selenium wrapper. I want to check if an element on the webpage is visible . The function gets the input variable selector which follows the pattern ""selector=value"" so e.g ""id=content"" but could also be this ""link_text=Hello World"". The search function splits that string into its two parts for searching the actual element and returns it for use in the errorhandling. The excpetion message should state both the selector and the value:
-class KeywordFailed(Exception):
-    
-    def __init__(self, message):
-        self.message = message
-        super().__init__(self.message)
-
-class ElementFoundButNotVisible(KeywordFailed):
-    def __init__(self, element, selector):
-        super().__init__(""element: \"""" + element + ""\"" with selector: \"""" + selector + ""\"" is not visible"")
-
-class DefaultKeywords:
-
-    browser = webdriver.Firefox()
-
-    def selectElement(self, selector):
-        selectorValue = selector.split(""="")[1]
-        selector = selector.split(""="")[0]
-        try:
-            if selector == ""id"":
-                element = self.browser.find_element_by_id(selectorValue)
-            elif selector == ""xpath"":
-                element = self.browser.find_element_by_xpath(selectorValue)
-            elif selector == ""link_text"":
-                element = self.browser.find_element_by_link_text(selectorValue)
-            elif selector == ""partial_link_text"":
-                element = self.browser.find_element_by_partial_link_text(selectorValue)
-            elif selector == ""name"":
-                element = self.browser.find_element_by_name(selectorValue)
-            elif selector == ""class_name"":
-                element = self.browser.find_element_by_class_name(selectorValue)
-            elif selector == ""css_selector"":
-                element = self.browser.find_element_by_css_selector(selectorValue)
-            elif selector == ""tag"":
-                element = self.browser.find_element_by_tag_name(selectorValue)
-        except NoSuchElementException:
-            raise ElementNotFound(selectorValue, selector)
-        else:
-            return element, selector
-
-    def findAndCheckIfVisible(self, selector):
-        if (value, selector) := not self.selectElement(selector).is_displayed():
-            raise ElementFoundButNotVisible(element, selector)
-...
-
-When executing though I get the following error:
-SyntaxError: cannot use assignment expressions with tuple
-
-I could move the separation process into a function of its own and just call it once in the exception and once in the search function, but I really don't want to do that as it would mean executing the same code twice.
-","1. You cannot use a form of unpacking with assignment expressions. Instead, consider using the assignment expression to create a name to the returned tuple, and then unpacking from the name from the expression:
-def findAndCheckIfVisible(self, selector):
-   if not (result:=self.selectElement(selector).is_displayed()):
-      raise ElementFoundButNotVisible(element, selector)
-   value, selector = result
-
-Also, instead of explicit conditionals for each selector type, consider using getattr:
-try:
-   element = getattr(self.browser, f'find_element_by_{selector}')(selectorValue)
-except NoSuchElementException:
-   pass
-
-",Walrus
-"In my Django project, I am using walrus to cache location names.
-e.g.: New Zealand, New York City, Newcastle, etc.
-So, when I am searching for the key 'new', I am expecting it to return all the above locations, but it only gives me Newcastle. But when I use 'n' or 'ne' as the key, I get all of them. Any help is appreciated.
-","1. Finally found out the issue, when you initialise an object of walrus there is an option to pass a stopwords_file . If you don't pass any the default file defined inside the library called stopwords.txt is taken.
-This stopwords.txt file had a lot of words listed like 'new'. So whenever a word from the stopwords file is found in the word to index, it will not index that particular word.
-In my case 'new' was present in stopwords.txt. So when it indexed 'New York' it didn't map the word 'New york' to 'new', but it mapped it to 'york'. That is why I couldn't search with 'new'.
-I solved it by initialising the walrus db search object with an empty stopwords_file. 
-",Walrus
-"I've been working on this problem https://open.kattis.com/problems/walrusweights. I saw that someone else had asked about it here, but my approach to this problem is completely different.
-In the problem, you must find a combination in an array whose sum is closest to 1000. Here's my solution; it runs well under the time limit (0.26s, limit is 2s), however, after 31 test cases it gives me a wrong answer.
-In my program, I first read all the numbers into an array of size n + 1 (with a zero as the first number, I'll explain shortly), and then I call this method:
-public static void combination(int index, boolean use, int currentSum, int closest){
-    HS.add(currentSum);
-    HS.add(closest);
-    if(index == size){
-        return;
-    }
-    if(use)
-        currentSum += array[index];
-    index++;
-    if(currentSum == 0){ //would interfere with the if statement below, if it's 0, it will always be closer to 1000 
-        combination(index, true, currentSum, closest);
-        combination(index, false, currentSum, closest);
-    }
-    else{
-        if(Math.abs(1000 - currentSum) < Math.abs(1000 - closest)){//so, if the currentSum is closer to 1000 than the closest so far
-            closest = currentSum; //this is now the closest one
-        }
-        else //otherwise, there's no point going on with further changes to this combination, it will never be closest
-            return;
-        combination(index, true, currentSum, closest);
-        combination(index, false, currentSum, closest);
-    }
-}
-
-with:
-combination(0, nums, false, 0, 1000001); //limit of weights is 1000000
-
-In the combination method, the parameters are the index you're currently on, the array, whether you will be adding the current entry to the sum, the current sum, and the highest combination closest to 1000 so far.
-I made a method that once all of the combinations are done, it gets the one closest to 1000 but I'm positive that that works, and it's pretty simple so it's not worth showing, unless needed.
-Can anyone tell me what I'm doing wrong? Is the logic of the combination method incorrect, or is there an extra check or something of the sort I'm missing?
-","1. A bit late, but I've answered this and will leave it here for others to see if they need to.
-http://hastebin.com/etapohavud.cpp
-I made a recursive method that traverses the array of numbers (which was previously sorted), only checking sums that could lead to the closest one, adding all possible ones to an ArrayList. It is sorted so I would not have to worry about finding a smaller number ahead, which could change the whole current sum that I am checking.
-In short, I calculated all viable combinations that could end up being the closest to 1000 and then found the one closest to it.
-",Walrus
-"enter image description hereDuring configuration of a private eucalyptus cloud on centos6 the following warn is coming on running the command source eucarc : 
-WARN: An OSG is either not registered or not configured. S3_URL is not set. Please register an OSG and/or set a valid s3 endpoint and download credentials again. Or set S3_URL manually to http://OSG-IP:8773/services/objectstorage
-The OSG is set to use Walrus.
-Even though the OSG is in the enabled state, the Eucalyptus console is also not showing on the host IP.
-We have two machines: one hosts the NC and the other hosts the CC, SC, Walrus and CLC.
-How to resolve it?
-","1. You need to edit the WARN line in the eucarc file with the IP address of the host where User Facing Services is installed,
-e.g if it's a all-in-one system then IP of this machine, or if it's only one frontend and one node controller, mostly like it will be the IP address of your Frontend where all the services are installed.
-So, in this case, edit the WARN line in eucarc file with something like this:
-export S3_URL=http://192.168.14.148:8773/services/objectstorage
-
-",Walrus
-"I am trying build a library with C# 6.0 code in AppVeyor. I have tried configurations in this update from AppVeyor, this discussion and this blog post.
-Here's what I did:
-
-Select Visual Studio 2015 as operating system from AppVeyor web interface
-Add MSBuild 14.0 folder to the path (tried both from web interface and appveyor.yml)
-SET PATH=C:\Program Files (x86)\MSBuild\14.0\Bin\;%PATH%
-
-Changed these lines in solution file
-# Visual Studio 14
-VisualStudioVersion = 14.0.23107.0
-
-Tried to invoke MSBuild with custom build script
-
-None of these worked. It still picks up MSBuild 12.0 and fails. What else can I try? There are people who got it working, I can't see what I'm missing.
-","1. In addition to what you tried above, you need to make sure you used the Visual Studio 2015 image.
-",Appveyor
-"My Argo Workflow has a template that generates the following Config Map:
-{
-  ""apiVersion"": ""v1"",
-  ""kind"": ""ConfigMap"",
-  ""metadata"": { ""name"": ""configmap"" },
-  ""data"":
-    {
-      ""ELASTIC_SEARCH_URL"": ""https://staging-aaa.domain.com"",
-      ""EXTENSION_PATH"": ""/dist"",
-      ""GRAPHQL_GATEWAY_URL"": ""https://graphql-gateway.stg.domain.com/graphql"",
-      ""bucket_url"": ""stg-bucket/fie/to/path"",
-    },
-}
-
-I use this value in one of my other templates like this:
-...
-envFrom:
-- configMapRef:
-    name: ""{{inputs.parameters.configmap}}""
-
-I also want to get a ""2-in-1"" by getting bucket_url within that output, so I created this template to test if I'm able to print what I want (I can't withParam over the original output, so I added [] around {{steps.create-configmap.outputs.parameters.configmap}}):
-- - name: print-with-loop
-    template: print-with-loop-tmpl
-    arguments:
-      parameters:
-        - name: data
-          value: ""{{item}}""
-    withParam: ""[{{steps.create-configmap.outputs.parameters.configmap}}]""
-
-The output of this template is exactly the Config Map itself:
-{apiVersion:v1,data:{ELASTIC_SEARCH_URL:https://staging-aaa.domain.com,EXTENSION_PATH:/dist,GRAPHQL_GATEWAY_URL:https://graphql.stg.domain.com/graphql,bucket_url:stg-bucket/fie/to/path},kind:ConfigMap,metadata:{name:env-configmap}}
-
-I can also print item.data within that Config Map:
-{ELASTIC_SEARCH_URL:https://staging-aaa.domain.com,EXTENSION_PATH:/dist,GRAPHQL_GATEWAY_URL:https://graphql.stg.domain.com/graphql,bucket_name:stg-bucket,ext_path:extension/stable/dist-raw.tar.gz,bucket_url:stg-bucket/extension/stable/dist-raw.tar.gz}
-
-However I can't access any data within item.data. If I use item.data.bucket_url or item.data['bucket_url'], it doesn't work and I get errors from Argo.
-I tried to manipulate the output using sprig but I wasn't able to find a solution. Basically I'm trying to fetch bucket_url to use in another template within this workflow.
-Reproduce the issue yourself
-
-Run your Argo server
-Create a Workflow yaml file
-Run argo submit with your new workflow file.
-That's it :)
-
-I made the smallest template I can that should produce the exact same result. If you run Argo locally like I do, maybe give it a try:
-apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: test-
-spec:
-  entrypoint: main
-  arguments:
-    parameters:
-      - name: cluster
-        value: ""stg""
-      - name: version
-        value: ""stg/stable""
-
-  templates:
-    - name: main
-      steps:
-        - - name: create-configmap
-            template: create-configmap-tmpl
-
-        - - name: print-with-loop
-            template: print-with-loop-tmpl
-            arguments:
-              parameters:
-                - name: data
-                  value: ""{{item}}""
-            withParam: ""[{{steps.create-configmap.outputs.parameters.configmap}}]""
-
-    - name: create-configmap-tmpl
-      inputs:
-        artifacts:
-          - name: python-script
-            path: /tmp/script.py
-            raw:
-              data: |
-                import json
-
-                def create_simple_config_map():
-                    config = {
-                        ""ELASTIC_SEARCH_URL"": ""https://example-domain.com"",
-                        ""GRAPHQL_GATEWAY_URL"": ""https://graphql.example.com/graphql"",
-                        ""bucket_name"": ""example-bucket"",
-                        ""ext_path"": ""example/path/file"",
-                        ""bucket_url"": ""example-bucket/example/path/file""
-                    }
-                    
-                    return config
-
-                def main():
-                    config = create_simple_config_map()
-
-                    configmap = {
-                        ""apiVersion"": ""v1"",
-                        ""kind"": ""ConfigMap"",
-                        ""metadata"": {
-                            ""name"": ""env-configmap""
-                        },
-                        ""data"": config
-                    }
-
-                    with open(""/tmp/configmap.json"", ""w"") as file:
-                        json.dump(configmap, file, indent=4)
-
-                    print(json.dumps([config], indent=4))
-
-                if __name__ == ""__main__"":
-                    main()
-      container:
-        image: python:3.11
-        command: [""python"", ""/tmp/script.py""]
-      outputs:
-        parameters:
-          - name: configmap
-            valueFrom:
-              path: /tmp/configmap.json
-
-    - name: print-with-loop-tmpl
-      inputs:
-        parameters:
-          - name: data
-      script:
-        image: bash
-        command: [bash]
-        source: |
-          echo ""{{inputs.parameters.data}}""
-
-The step create-configmap-tmpl generates a valid Config Map, you can also run it locally:
-import json
-
-def create_simple_config_map():
-    config = {
-        ""ELASTIC_SEARCH_URL"": ""https://example-domain.com"",
-        ""GRAPHQL_GATEWAY_URL"": ""https://graphql.example.com/graphql"",
-        ""bucket_name"": ""example-bucket"",
-        ""ext_path"": ""example/path/file"",
-        ""bucket_url"": ""example-bucket/example/path/file""
-    }
-    
-    return config
-
-def main():
-    config = create_simple_config_map()
-
-    configmap = {
-        ""apiVersion"": ""v1"",
-        ""kind"": ""ConfigMap"",
-        ""metadata"": {
-            ""name"": ""configmap""
-        },
-        ""data"": config
-    }
-
-    with open(""/tmp/configmap.json"", ""w"") as file:
-        json.dump(configmap, file, indent=4)
-
-    print(json.dumps([config], indent=4))
-
-if __name__ == ""__main__"":
-    main()
-
-The output of this script is the following:
-{
-    ""apiVersion"": ""v1"",
-    ""kind"": ""ConfigMap"",
-    ""metadata"": {
-        ""name"": ""configmap""
-    },
-    ""data"": {
-        ""ELASTIC_SEARCH_URL"": ""https://example-domain.com"",
-        ""GRAPHQL_GATEWAY_URL"": ""https://graphql.example.com/graphql"",
-        ""bucket_name"": ""example-bucket"",
-        ""ext_path"": ""example/path/file"",
-        ""bucket_url"": ""example-bucket/example/path/file""
-    }
-}
-
-You can now try to play around with the printing:
-- - name: print-with-loop
-    template: print-with-loop-tmpl
-    arguments:
-    parameters:
-        - name: data
-        value: ""{{item}}""
-    withParam: ""[{{steps.create-configmap.outputs.parameters.configmap}}]""
-
-
-If we use item, it prints the entire Config Map
-If we use item.data, it would also work.
-
-The problem is accessing item.data.bucket_url or item.data['bucket_url']. It won't work.
-As mentioned, I tried various sprig functions like toJson, lists and dict manipulation, but nothing worked.
-","1. I ended up using a different parameter for withParam:
-        - - name: print-with-loop
-            template: print-with-loop-tmpl
-            arguments:
-              parameters:
-                - name: data
-                  value: ""{{item.bucket_url}}""
-            withParam: ""{{steps.create-configmap.outputs.result}}""
-
-Since I'm printing the json.dumps of the config list in the script that generates the ConfigMap, the step's result contains that list and I can easily access bucket_url. The above template's output is exactly what I needed.
-",Argo
-"Kubernetes version : 1.23 
-Container runtime: Docker
-kubectl describe node node1
-  Type     Reason             Age                   From     Message
-  ----     ------             ----                  ----     -------
-  Warning  ContainerGCFailed  2m35s (x507 over 9h)  kubelet  rpc error: code = ResourceExhausted desc = grpc: trying to send message larger than max (16777539 vs. 16777216)
-
-Worker node details 
-df -h
-Filesystem      Size  Used Avail Use% Mounted on
-devtmpfs         32G     0   32G   0% /dev
-tmpfs            32G     0   32G   0% /dev/shm
-tmpfs            32G  1.1M   32G   1% /run
-tmpfs            32G     0   32G   0% /sys/fs/cgroup
-/dev/nvme0n1p1  200G   36G  165G  18% /
-tmpfs           6.3G     0  6.3G   0% /run/user/1000
-
-docker system df
-TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE
-Images          27        26        18.23GB   1.433GB (7%)
-Containers      24577     11        1.333GB   1.333GB (100%)
-Local Volumes   0         0         0B        0B
-Build Cache     0         0         0B        0B
-
-Using argo-workflows as the container orchestration engine on Amazon EKS, I have enough space on the worker nodes and no issues with memory/CPUs; the issue is probably caused by the large number of dead containers on the worker nodes (24577, generated by Argo workflows). In earlier versions of the kubelet configuration there was an option to delete dead containers, --maximum-dead-containers, but it is deprecated now and replaced with the --eviction-hard or --eviction-soft settings. With the available eviction settings I can only configure the memory, nodefs and pid eviction signals.
-How can I specify --maximum-dead-containers or an equivalent signal using the updated --eviction-hard or --eviction-soft settings?
-kubeletConfiguration : https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/
-If I connect to a worker node and run docker system prune there, space becomes available again, but it deletes all the workflow logs, and we want to keep logs for at least 7 days.
-","1. If you are using AL2 AMIs then you can configure the '--maximum-dead-containers' flag by following userdata script.
-userData: |-
-#!/bin/bash
-cat << 'EOF' > /etc/eks/containerd/kubelet-containerd.service
-[Unit]
-Description=Kubernetes Kubelet
-Documentation=https://github.com/kubernetes/kubernetes
-After=containerd.service sandbox-image.service
-Requires=containerd.service sandbox-image.service
-
-[Service]
-Slice=runtime.slice
-ExecStartPre=/sbin/iptables -P FORWARD ACCEPT -w 5
-ExecStart=/usr/bin/kubelet \
-    --config /etc/kubernetes/kubelet/kubelet-config.json \
-    --kubeconfig /var/lib/kubelet/kubeconfig \
-    --container-runtime-endpoint unix:///run/containerd/containerd.sock \
-    --image-credential-provider-config /etc/eks/image-credential-provider/config.json \
-    --image-credential-provider-bin-dir /etc/eks/image-credential-provider \
-    --maximum-dead-containers=100 \
-    $KUBELET_ARGS \
-    $KUBELET_EXTRA_ARGS
-
-Restart=on-failure
-RestartForceExitStatus=SIGPIPE
-RestartSec=5
-KillMode=process
-CPUAccounting=true
-MemoryAccounting=true
-
-[Install]
-WantedBy=multi-user.target
-EOF
-
-",Argo
-"I have a workflow template which outputs an artifact, this artifact has to be passed to another workflow template as an input. how we can do that? I'm following the way below which is not working
-Here is WorflowTemplate1.yaml
-apiVersion: argoproj.io/v1alpha1
-kind: WorkflowTemplate
-metadata:
-  name: arfile
-spec:
-  entrypoint: main
-  templates:
-    - name: main
-      volumes:
-        - name: vol
-          emptyDir: {}
-      inputs:
-        parameters:
-
-      script:
-        image: ""ubuntu""
-        volumeMounts:
-          - name: vol
-            mountPath: ""{{inputs.parameters.Odir}}""
-        command: [""bash""]
-        source: |
-          #!/usr/bin/env bash
-          echo ""This is artifact testing"" > /tmp/arfile
-
-      outputs:
-        parameters:
-          - name: arfile
-            path: ""{{inputs.parameters.Odir}}/arfile""
-
-Here is the WorkflowTemplate2.yaml
-apiVersion: argoproj.io/v1alpha1
-kind: WorkflowTemplate
-metadata:
-  name: bfile
-spec:
-  entrypoint: main
-  templates:
-      - name: main
-        volumes:
-          - name: vol
-            emptyDir: {}
-        inputs:
-          parameters:
-            - name: image
-              value: ""ubuntu""
-            - name: Odir
-              value: ""/tmp""
-          artifacts:
-            - name: arfile
-              path: /tmp/arfile
-        container:
-          image: ""ubuntu""
-          command: [""cat""]
-          args:
-           - /tmp/arfile
-
-Here is the workflow which calls the above two workflow templates. I'm unable to pass the artifact of WorkflowTemplate1 to WorkflowTemplate2 from this workflow.
-apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: apr-
-spec:
-  entrypoint: main
-  templates:
-    - name: main
-      outputs:
-        artifacts:
-          - name: arfile
-            from: ""tasks['dfile'].outputs.artifacts.arfile""
-
-      dag:
-        tasks:
-          - name: dfile
-            templateRef:
-              name: arfile
-              template: main
-            arguments:
-              parameters:
-                - name: bimg
-                  value: ""ubuntu""
-
-          - name: bci
-            depends: dfile
-            templateRef:
-              name: bfile
-              template: main
-            arguments:
-              parameters:
-                - name: img
-                  value: ""ubuntu""
-              artifacts:
-                - name: arfile
-                  from: ""{{tasks.dfile.outputs.artifacts.arfile}}""
-
-What am I doing wrong here?
-","1. I think I found the issue. I need to use artifacts instead of parameters in the outputs block of WorkflowTemplate1.yaml.
-Here's the fix:
-outputs:
-  artifacts:
-    - name: arfile
-      path: ""{{inputs.parameters.Odir}}/arfile""
-
-",Argo
-"I have the ArgoCD server running and wanna define a Cluster without the CLI. I wanna practice GitOps, so I wanna declare my ArgoCD-cluster config in Git.
-In the CLI I could do: argocd cluster add but how to do that with a Kubernetes manifest?
-I couldn't find out how to create that Cluster declaratively. I found how to create Repositories and Projects, but nothing for something like kind: Cluster.
-I am creating my clusters with Crossplane (Crossplane creates clusters by k8s manifests). Crossplane saves the kubeconfig of it's created clusters in Secrets files, which looks like this:
-apiVersion: v1
-kind: Secret
-metadata:
-  name: cluster-details-my-cluster
-  namespace: default
-  uid: 50c7acab-3214-437c-9527-e66f1d563409
-  resourceVersion: '12868'
-  creationTimestamp: '2022-01-06T19:03:09Z'
-  managedFields:
-    - manager: crossplane-civo-provider
-      operation: Update
-      apiVersion: v1
-      time: '2022-01-06T19:03:09Z'
-      fieldsType: FieldsV1
-      fieldsV1:
-        f:data:
-          .: {}
-          f:kubeconfig: {}
-        f:type: {}
-  selfLink: /api/v1/namespaces/default/secrets/cluster-details-my-cluster
-data:
-  kubeconfig: >-
-    YXBpVmVyc2lvbjogdjEKY2x1c3RlcnM6Ci0gY2x1c3RlcjoKICAgIGNlcnRpZmljYXRlLWF1dGhvcml0eS1kYXRhOiBMUzB0TFMxQ1JVZEpUaUJEUlZKVVNVWkpRMEZVUlMwdExTMHRDazFKU1VKbFJFTkRRVkl5WjBGM1NVSkJaMGxDUVVSQlMwSm5aM0ZvYTJwUFVGRlJSRUZxUVdwTlUwVjNTSGRaUkZaUlVVUkVRbWh5VFROTmRHTXlWbmtLWkcxV2VVeFhUbWhSUkVVeVRrUkZNRTlVVlROT1ZFbDNTR2hqVGsxcVNYZE5WRUV5VFZScmQwMXFUWGxYYUdOT1RYcEpkMDFVUVRCTlZHdDNUV3BOZVFwWGFrRnFUVk5_SHORTENED
-type: Opaque
-
-
-The data.kubeconfig content is a regular base64-encoded kubeconfig, so it's easy to decode, like this:
-apiVersion: v1
-clusters:
-- cluster:
-    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlRENDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHd_SHORTENED
-    server: https://MY.IP.TO.K8S:6443
-  name: my-cluster
-contexts:
-- context:
-    cluster: my-cluster
-    user: my-cluster
-  name: my-cluster
-current-context: my-cluster
-kind: Config
-preferences: {}
-users:
-- name: my-cluster
-  user:
-    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrVENDQVRlZ0F3SUJBZ0lJQS9adEZFT1Avcnd3Q2dZSUtvWkl6ajBFQXdJd0l6RWhNQjhHQTFVRUF3d1kKYXpOekxXTnNhV1Z1ZEMxallVQXhOalF4TkRrMU56VXlNQjRYRFRJeU1ERXdOakU1TURJek1sb1hEVEl6TURFdwpOakU1TURJek1sb3dNREVYT_SHORTENED
-    client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUpJNlVhTDlLem9yL1VpdzlXK1NNUTAxV1BES2ZIK_SHORTENED
-
-Do I really need a manual intervention and have to break GitOps practice, only to tell ArgoCD where my clusters and their configs are? The config is already in the cluster.
-k get secret cluster-details-my-cluster
-NAME                                 TYPE     DATA   AGE
-cluster-details-my-cluster   Opaque   1      158m
-
-Thank you very much in advance
-","1. Since I first looked at Crossplane & ArgoCD I wondered, what the Crossplane ArgoCD provider is about. Now that I finally created a fully comprehensive setup, I came to the exact same point as Jan (who created the issues argo-cd/issues/8107 & provider-argocd/issues/18 pointing me to the right direction)! The GitOps approach stops right at the point, where we need to tell ArgoCD about a Crossplane created Kubernetes cluster (regardless which one, in my case it's AWS EKS - but it will be the same for Civo and others).
-Luckily the Crossplane ArgoCD provider has exactly what we need! Using it we can create a Cluster resource which represents the Kubernetes cluster that was created by Crossplane. This Cluster can then be referenced by an ArgoCD Application managing the application we finally want to deploy (if you need a fully working example of a Crossplane EKS cluster (nested) Composition for testing, look at this repo).
-In a nutshell the steps are the following:
-
-1. Install Crossplane ArgoCD Provider
-2. Create ArgoCD user & RBAC role for Crossplane ArgoCD Provider as ConfigMap patches
-3. Create ArgoCD API Token for Crossplane ArgoCD Provider
-4. Create Secret containing the ARGOCD_API_TOKEN & configure Crossplane ArgoCD Provider
-5. Create a Cluster in ArgoCD referencing Crossplane created Kubernetes cluster
-Optional: 6. Create an ArgoCD Application to use the automatically configured cluster
-
-At the end of this answer I will show how to do all the steps in a fully unattended fashion in GitHub Actions.
-Let's do them all after another:
-
-1. Install Crossplane ArgoCD Provider
-Let's install the Crossplane ArgoCD provider in a provider-argocd.yaml:
-apiVersion: pkg.crossplane.io/v1
-kind: Provider
-metadata:
-  name: provider-argocd
-spec:
-  package: xpkg.upbound.io/crossplane-contrib/provider-argocd:v0.6.0
-  packagePullPolicy: IfNotPresent # Only download the package if it isn’t in the cache.
-  revisionActivationPolicy: Automatic # Otherwise our Provider never gets activate & healthy
-  revisionHistoryLimit: 1
-
-Apply it via:
-kubectl apply -f provider-argocd.yaml
-
-2. Create ArgoCD user & RBAC role for Crossplane ArgoCD Provider
-As stated in the docs we need to create an API token for the ProviderConfig of the Crossplane ArgoCD provider to use. To create the API token, we first need to create a new ArgoCD user.
-Therefore we enhance the ArgoCD provided ConfigMap argocd-cm (in a file argocd-cm-patch.yml):
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: argocd-cm
-data:
-  # add an additional local user with apiKey capabilities for provider-argocd
-  # see https://github.com/crossplane-contrib/provider-argocd?tab=readme-ov-file#getting-started-and-documentation
-  accounts.provider-argocd: apiKey      
-
-As the ArgoCD docs about user management state this is not enough:
-
-""each of those users will need additional RBAC rules set up, otherwise they will fall back to the default policy specified by policy.default field of the argocd-rbac-cm ConfigMap.""
-
-So we need to patch the argocd-rbac-cm ConfigMap also (argocd-rbac-cm-patch.yml):
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: argocd-rbac-cm
-data:
-  # For the provider-argocd user we need to add an additional rbac-rule
-  # see https://github.com/crossplane-contrib/provider-argocd?tab=readme-ov-file#create-a-new-argo-cd-user
-  policy.csv: ""g, provider-argocd, role:admin""      
-
-Both ConfigMap changes can be achieved through various ways. I used Kustomize for both in my setup, since I also install ArgoCD that way. I describe my Kustomize-based setup in detail in this answer. Here's my kustomization.yaml for this case here:
-apiVersion: kustomize.config.k8s.io/v1beta1
-kind: Kustomization
-
-resources:
-- github.com/argoproj/argo-cd//manifests/cluster-install?ref=v2.10.2
-
-## changes to config maps
-patches:
-- path: argocd-cm-patch.yml
-- path: argocd-rbac-cm-patch.yml
-
-namespace: argocd
-
-3. Create ArgoCD API Token for Crossplane ArgoCD Provider
-First we need to access the argocd-server Service somehow. In the simplest manner we create a port forward (if you need to do that unattended in a CI/CD setup, you can append & at the end to run the command in the background - see https://stackoverflow.com/a/72983554/4964553):
-kubectl port-forward -n argocd --address='0.0.0.0' service/argocd-server 8443:443
-
-We also need to have the ArgoCD password ready:
-ARGOCD_ADMIN_SECRET=$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath=""{.data.password}"" | base64 -d; echo)
-
-Now we create a temporary JWT token for the provider-argocd user we just created (we need to have jq installed for this command to work):
-# be sure to have jq installed via 'brew install jq' or 'pamac install jq' etc.
-
-ARGOCD_ADMIN_TOKEN=$(curl -s -X POST -k -H ""Content-Type: application/json"" --data '{""username"":""admin"",""password"":""'$ARGOCD_ADMIN_SECRET'""}' https://localhost:8443/api/v1/session | jq -r .token)
-
-Now we finally create an API token without expiration that can be used by provider-argocd:
-ARGOCD_API_TOKEN=$(curl -s -X POST -k -H ""Authorization: Bearer $ARGOCD_ADMIN_TOKEN"" -H ""Content-Type: application/json"" https://localhost:8443/api/v1/account/provider-argocd/token | jq -r .token)
-
-You can double check in the ArgoCD UI at Settings/Accounts if the Token got created:
-
-4. Create Secret containing the ARGOCD_API_TOKEN & configure Crossplane ArgoCD Provider
-The ARGOCD_API_TOKEN can be used to create a Kubernetes Secret for the Crossplane ArgoCD Provider:
-kubectl create secret generic argocd-credentials -n crossplane-system --from-literal=authToken=""$ARGOCD_API_TOKEN""
-
-Now finally we're able to tell our Crossplane ArgoCD Provider where it should obtain the ArgoCD API Token from. Let's create a ProviderConfig at provider-config-argocd.yaml:
-apiVersion: argocd.crossplane.io/v1alpha1
-kind: ProviderConfig
-metadata:
-  name: argocd-provider
-spec:
-  credentials:
-    secretRef:
-      key: authToken
-      name: argocd-credentials
-      namespace: crossplane-system
-    source: Secret
-  insecure: true
-  plainText: false
-  serverAddr: argocd-server.argocd.svc:443
-
-Also apply the ProviderConfig via:
-kubectl apply -f provider-config-argocd.yaml
-
-5. Create a Cluster in ArgoCD referencing Crossplane created Kubernetes cluster
-Now we're where we wanted to be: We can finally create a Cluster in ArgoCD referencing the Crossplane created EKS cluster. Therefore we make use of the Crossplane ArgoCD Providers Cluster CRD in our infrastructure/eks/argoconfig/cluster.yaml:
-apiVersion: cluster.argocd.crossplane.io/v1alpha1
-kind: Cluster
-metadata:
-  name: argo-reference-deploy-target-eks
-  labels:
-    purpose: dev
-spec:
-  forProvider:
-    config:
-      kubeconfigSecretRef:
-        key: kubeconfig
-        name: eks-cluster-kubeconfig # Secret containing our kubeconfig to access the Crossplane created EKS cluster
-        namespace: default
-    name: deploy-target-eks # name of the Cluster registered in ArgoCD
-  providerConfigRef:
-    name: argocd-provider
-
-
-Be sure to provide the forProvider.name AFTER the forProvider.config, otherwise the name of the Cluster will be overwritten by the EKS server address from the kubeconfig!
-
-The providerConfigRef.name.argocd-provider references our ProviderConfig, which gives the Crossplane ArgoCD Provider the rights (via our API Token) to change the ArgoCD Server configuration (and thus add a new Cluster).
-As the docs state, kubeconfigSecretRef is described as exactly what we need:
-
-KubeconfigSecretRef contains a reference to a Kubernetes secret entry that contains a raw kubeconfig in YAML or JSON.
-
-The Secret containing the exact EKS kubeconfig is named eks-cluster-kubeconfig by our EKS Configuration and resides in the default namespace.
-Let's create the Cluster object, which should register our K8s cluster in ArgoCD:
-kubectl apply -f cluster.yaml
-
-If everything went correctly, a kubectl get cluster should state READY and SYNCED as True:
-kubectl get cluster
-NAME                               READY   SYNCED   AGE
-argo-reference-deploy-target-eks   True    True     21s
-
-And also in the ArgoCD UI you should find the newly registered Cluster now at Settings/Clusters:
-
-6. Create an ArgoCD Application to use the automatically configured cluster
-Having both in place, we can craft a matching ArgoCD Application:
-apiVersion: argoproj.io/v1alpha1
-kind: Application
-metadata:
-  name: microservice-api-spring-boot
-  namespace: argocd
-  labels:
-    crossplane.jonashackt.io: application
-  finalizers:
-    - resources-finalizer.argocd.argoproj.io
-spec:
-  project: default
-  source:
-    repoURL: https://github.com/jonashackt/microservice-api-spring-boot-config
-    targetRevision: HEAD
-    path: deployment
-  destination:
-    namespace: default
-    name: deploy-target-eks
-  syncPolicy:
-    automated:
-      prune: true    
-    retry:
-      limit: 5
-      backoff:
-        duration: 5s 
-        factor: 2 
-        maxDuration: 1m
-
-As you can see we use our Cluster name deploy-target-eks as spec.destination.name (NOT spec.destination.server). This will then look into Argo's Cluster list and should find our deploy-target-eks.
-
-Do it all automatically in CI/CD like GitHub Actions
-I also added all these steps to my GitHub Actions workflow. There's only one difference: running the kubectl port-forward command with an attached & to have that port forward run in the background (see https://stackoverflow.com/a/72983554/4964553):
-      - name: Prepare Secret with ArgoCD API Token for Crossplane ArgoCD Provider
-        run: |
-          echo ""--- Patch ConfigMaps argocd-cm & argocd-rbac-cm (and install ArgoCD)""
-          kubectl apply -k argocd/install
-          
-          echo ""--- Do Crossplane installation etc""
-          
-          echo ""--- Install Crossplane argocd-provider""
-          kubectl apply -f provider-argocd.yaml
-          
-          echo ""--- Access the ArgoCD server with a port-forward in the background, see https://stackoverflow.com/a/72983554/4964553""
-          kubectl port-forward -n argocd --address='0.0.0.0' service/argocd-server 8443:443 &
-          
-          echo ""--- Extract ArgoCD password""
-          ARGOCD_ADMIN_SECRET=$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath=""{.data.password}"" | base64 -d; echo)
-
-          echo ""--- Create temporary JWT token for the `provider-argocd` user""
-          ARGOCD_ADMIN_TOKEN=$(curl -s -X POST -k -H ""Content-Type: application/json"" --data '{""username"":""admin"",""password"":""'$ARGOCD_ADMIN_SECRET'""}' https://localhost:8443/api/v1/session | jq -r .token)
-          
-          echo ""--- Create ArgoCD API Token""
-          ARGOCD_API_TOKEN=$(curl -s -X POST -k -H ""Authorization: Bearer $ARGOCD_ADMIN_TOKEN"" -H ""Content-Type: application/json"" https://localhost:8443/api/v1/account/provider-argocd/token | jq -r .token)
-
-          echo ""--- Already create a namespace for crossplane for the Secret""
-          kubectl create namespace crossplane-system
-
-          echo ""--- Create Secret containing the ARGOCD_API_TOKEN for Crossplane ArgoCD Provider""
-          kubectl create secret generic argocd-credentials -n crossplane-system --from-literal=authToken=""$ARGOCD_API_TOKEN""
-          
-          echo ""--- Create Crossplane argocd-provider ProviderConfig referencing the Secret containing the ARGOCD_API_TOKEN""
-          kubectl apply -f provider-config-argocd.yaml
-          
-          echo ""--- Create a Cluster in ArgoCD referencing our Crossplane created Kubernetes cluster""
-          kubectl apply -f cluster.yaml
-
-You can even reduce the kubectl apply -f statements more, see my full setup on GitHub: https://github.com/jonashackt/crossplane-argocd where I use ArgoCD's App of Apps feature.
-",Argo
-"My team has some old bamboo pipelines, where secret and password are configured in bamboo variables and bamboo masks these values with ***** nobody knows the passwords now and it has not been documented.
-Is there any way to access/print and see the values in bamboo [secret/password] variable?
-","1. There's a trick to read secret variables:
-Create a Script task with content
-echo ${bamboo_password_variable} > text.txt
-
-Add artifact definition for *.txt pattern for a Job
-Run build and look at text.txt artifact content.
-
-2. Another solution would be to not display the secret as it is, but to display it slightly modified.
-For example, you could create a script at the start of your Bamboo task (or wherever you want, it doesn't matter) that splits the secret value and then displays it on 2 different lines.
-This way Bamboo would not replace the printed secret with the infamous *******
-echo ----------------------------
-secret=${bamboo.blabla.secret}
-middle=$(echo $secret | awk '{print length($0)/2}')
-echo $secret | cut -c1-$middle
-echo $secret | cut -c$(($middle+1))-$(echo $secret | awk '{print length}')
-echo ----------------------------
-
-(I'm not a bash expert, so there is probably a better way to write this script).
-And the output should look like this:
-build   28-May-2024 21:35:42    ----------------------------
-build   28-May-2024 21:35:42    my-secre
-build   28-May-2024 21:35:42    t-value
-build   28-May-2024 21:35:42    ----------------------------
-
-You can then concatenate the 2 lines to get your secret value.
-Here it would be: my-secret-value
-A little bit hacky/dirty but it works 😊
-",Bamboo
-"I would like to use a bamboo plan variable in a deployment project.
-I inject the version of the project into ${bamboo.inject.version} during the build plan, and I would like to use it in the deployment project. (The artifact I am deploying uses the version in its name).
-I've tried referencing ${bamboo.planRepository.inject.version} and ${bamboo.inject.version} but neither of these work.
-I am happy to hear suggestions of other ways of doing this too.
-Edit: I did manage to achieve the results I wanted by adding the properties file as a build artifact, then exporting it into the deployment project. This seems rather roundabout but works. Any other ideas appreciated!
-","1. As suggested by ToAsT, set the scope of your injected variables to Result first, then add a variable inject.version into your plan, and make sure you also have ${bamboo.inject.version} for release versioning field of the deployment project.
-
-2. Yes you can, for that you need to use a task called Inject Variables.
-
-In your build plan, create a key=value file (this can be a Script task):
-
-echo ""DEPLOYMENT_VERSION={your_version_here}"" > /etc/.version
-
-
-In the next task, use the Inject Variables task to export this variable to the rest of your build plan and to the related deployment projects:
-
-# this is the yaml used in bamboo-specs
-version: 2
-# ...
-  tasks:
-    - inject-variables:
-        file: /etc/.version
-        scope: RESULT # share between stages and deployment releases
-        namespace: anything_you_want
-
-When you run your deployment task, you can call the variable with the following name:
-echo ${bamboo.anything_you_want.DEPLOYMENT_VERSION}
-
-Sources:
-
-https://confluence.atlassian.com/bamboo/configuring-a-variables-task-687213475.html
-https://docs.atlassian.com/bamboo-specs-docs/7.0.1/specs.html?yaml#task-inject-variables
-
-
-3. You could potentially add that version info into the release name itself, but if what you're doing works then stick with that. Check out what's offered in the free plugins.
-",Bamboo
-"Here is the rest documentation for buildkite and any request requires a auth token. Can this API be invoked from buildkite build somehow? Maybe some command that already grabbed token because it running inside Buildkite? or some additional step to get such a token?
-Ot the only way is to go https://buildkite.com/user/api-access-tokens section and create token manually even when I use buildkite API inside buildkite step?
-","1. The REST API uses a different token from the agent when it's running inside a job. If you want to use the REST API from inside a job, you'll have to find a way to retrieve a REST API token in your builds, as it's a secret, you should manage access to it as such (e.g. don't just put it in pipeline's environment variables).
-Once you have a job that has access to the REST API token, you can use it to call the REST API with a tool like CURL, or the BK CLI.
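-A rough sketch (not the only way), assuming you expose the token to the job yourself as an environment variable named BUILDKITE_API_TOKEN, and that my-org and my-pipeline are placeholders for your organization and pipeline slugs:
-# Query the Buildkite REST API for the current build from inside a job step.
-# BUILDKITE_BUILD_NUMBER is set by the agent; the API token is NOT set by the agent.
-curl -sf -H ""Authorization: Bearer $BUILDKITE_API_TOKEN"" \
-  ""https://api.buildkite.com/v2/organizations/my-org/pipelines/my-pipeline/builds/$BUILDKITE_BUILD_NUMBER""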
-",Buildkite
-"I am trying to create a maven-based buildkit image for the jenkins agent.
-If you create an image and run it as an agent, it takes infinite repetition.
-I think the image is built incorrectly, but it asks to inspect the Dockerfile.
-FROM jenkins/inbound-agent:alpine as jnlp
-
-FROM maven:3.6.3-jdk-11-slim as maven
-
-RUN apt-get update && \
-    apt-get install -y \
-        git \
-        libfontconfig1 \
-        libfreetype6
-
-FROM moby/buildkit
-
-COPY --from=jnlp /usr/local/bin/jenkins-agent /usr/local/bin/jenkins-agent
-COPY --from=jnlp /usr/share/jenkins/agent.jar /usr/share/jenkins/agent.jar
-
-ENTRYPOINT [""/usr/local/bin/jenkins-agent""]
-
-
-When only the maven agent is created without the ""FROM moby/buildkit"" statement, it runs without problems.
-","1. Updated answer with a full example:
-Your Dockerfile doesn't reference the maven image at all.
-If you need buildkit in your final image, you can install it by downloading it from GitHub or by copying it from a Docker image, as you already do for the Jenkins agent.
-I didn't test the Dockerfile, but I hope you get the idea.
-FROM jenkins/inbound-agent:alpine as jnlp
-FROM moby/buildkit as buildkit
-
-FROM maven:3.6.3-jdk-11-slim
-
-RUN apt-get update && \
-    apt-get install -y \
-        git \
-        libfontconfig1 \
-        libfreetype6
-
-COPY --from=buildkit /usr/bin/build* /usr/local/bin/
-COPY --from=jnlp /usr/local/bin/jenkins-agent /usr/local/bin/jenkins-agent
-COPY --from=jnlp /usr/share/jenkins/agent.jar /usr/share/jenkins/agent.jar
-
-ENTRYPOINT [""/usr/local/bin/jenkins-agent""]
-
-Update:
-Per request in the comments, here is the version that might work for JDK 8. All the components need to use the same JDK version for this to work:
-FROM jenkins/inbound-agent:latest-alpine-jdk8 as jnlp
-FROM moby/buildkit as buildkit
-
-FROM maven:3.6.3-jdk-8-slim
-
-RUN apt-get update && \
-    apt-get install -y \
-        git \
-        libfontconfig1 \
-        libfreetype6
-
-COPY --from=buildkit /usr/bin/build* /usr/local/bin/
-COPY --from=jnlp /usr/local/bin/jenkins-agent /usr/local/bin/jenkins-agent
-COPY --from=jnlp /usr/share/jenkins/agent.jar /usr/share/jenkins/agent.jar
-
-ENTRYPOINT [""/usr/local/bin/jenkins-agent""]
-
-",Buildkite
-"I have a .buildkite.yml file with steps similar to the following:
-steps:
-  - name: ""Release""
-  command: ./auto/release
-  
-  - block: ""Deploy with parameters""
-    fields:
-      - select: ""Environment""
-        key: ""BUILD_STAGE""
-        hint: ""Which Environment do you want to deploy?""
-        options:
-          - label: ""test""
-            value: ""test""
-          - label: ""sit""
-            value: ""sit""
-  - wait
-  - name: ""Deploy app to ${BUILD_STAGE} env""
-        command: ./auto/deploy
-
-I want a prompt to show what the user selected.
-Obviously the last step's name has a blank for BUILD_STAGE because the user has not made a selection before the steps are generated.
-Any ideas? Advice?
-","1. You could create another step after ""Deploy with parameters"" to display the input. Something like this might work:
-  - label: ""Selected Environment""
-    command: echo ""You have selected the following environment:""; buildkite-agent meta-data get BUILD_STAGE
-
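-A hedged follow-up sketch: in a later command step (or inside ./auto/deploy itself) you could read the selection back at runtime with buildkite-agent meta-data get, rather than trying to interpolate it into the step name, which is resolved before the selection exists:
-# read the block step selection at runtime
-BUILD_STAGE=""$(buildkite-agent meta-data get BUILD_STAGE)""
-echo ""Deploying to $BUILD_STAGE""
-./auto/deploy ""$BUILD_STAGE""   # assumes your deploy script accepts the environment as an argument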
-",Buildkite
-"The CircleCI slack orbs docs can be found here: https://circleci.com/developer/orbs/orb/circleci/slack.  In the only_notify_on_branch example they show how you can manually hard code a user id into the notification template i.e.: mentions: '<@U8XXXXXXX> but if you try and add a variable in this place, e.g. mentions: <@SLACK_USER_ID> it will just print out the user id in the template not actually mention the user
-","1. This answer works when using the inbuilt CircleCI slack templates like basic_fail_1, I don't know if it works when using a custom block.
-Short version: 
-Need to directly export a slack user id to the SLACK_PARAM_MENTIONS variable (CircleCI templates recognise this environment variable): echo ""export SLACK_PARAM_MENTIONS='<@$SLACK_USER_ID>'"" >> $BASH_ENV. This needs to be done in a job step before any step that could fail.
-Longer version and code: 
-In the .circleci/config.yml file you will have access to a $CIRCLE_USERNAME variable which will be the committer's bitbucket/github username. This needs to be mapped to a user id for all user's that you want to be notified on a failure. A user's slack id can be found on their slack profile in the desktop app.
-jobs:
-  build_app:
-    ...
-    steps:
-      - ...
-      - map_circle_to_slack
-      - run: yarn install
-      - run: yarn build
-      - slack/notify:
-          event: fail
-          channel: '<slack-channel-id>'
-          template: basic_fail_1
-
-commands:
-    map_circle_to_slack:
-        steps:
-          - run:
-              name: Map circle username to slack user id
-              command: |
-                case $CIRCLE_USERNAME in
-                              'user1-bitbucket-or-github-username')
-                                  SLACK_USER_ID='user1-slackid'
-                                    ;;
-                              'user2-bitbucket-or-github-username')
-                                  SLACK_USER_ID='user2-slackid'
-                                    ;;
-                              'user3-bitbucket-or-github-username')
-                                    SLACK_USER_ID='user3-slackid'
-                                      ;;
-                                *)
-                          esac
-                echo ""export SLACK_PARAM_MENTIONS='<@$SLACK_USER_ID>'"" >> $BASH_ENV 
-
-
-",CircleCI
-"When running npm test this error occurs, anybody know how to fix it?
- FAIL  src/App.test.js (13.306s)
-  × renders learn react link (2323ms)
-  ● renders learn react link
-    Invariant failed: You should not use <withRouter(App) /> outside a <Router>
-      4 |
-      5 | test('renders learn react link', () => {
-    > 6 |   const { getByText } = render(<App />);
-        |                         ^
-      7 |   const linkElement = getByText(/learn react/i);
-      8 |   expect(linkElement).toBeInTheDocument();
-      9 | });
-
-I've tried adding BrowserRouter in the app.test.js too, but that did nothing.
-index.js
-import React from 'react';
-import ReactDOM from 'react-dom';
-import './index.css';
-import App from './App';
-import * as serviceWorker from './serviceWorker';
-import { BrowserRouter } from 'react-router-dom';
-ReactDOM.render(
-    <BrowserRouter>
-            <App />
-    </BrowserRouter>, document.getElementById('root'));
-serviceWorker.register();
-
-App.js contains the routing of the website, with a transition effect and a switch which contains the routes.
-App.js
-import React from 'react';
-import {GlobalStyle} from ""./global.styles"";
-import Footer from ""./components/footer/footer.component"";
-import {MainContainer} from ""./components/common/Container.component"";
-import Navbar from ""./components/navbar/nav/nav.component"";
-import {CSSTransition, TransitionGroup} from ""react-transition-group"";
-import Particles from ""react-particles-js"";
-import { Switch, Route } from 'react-router-dom';
-import { withRouter } from ""react-router"";
-import HomePage from ""./pages/homepage/homepage.component"";
-import ProcessPage from ""./pages/process/process.component"";
-import ProcessIndepth from ""./pages/process/process-indepth/processIndepth.component"";
-import ServicePage from ""./pages/service/service.component"";
-import AboutPage from ""./pages/about/about.component"";
-import ContactPage from ""./pages/contact/contact.component"";
-import Cookies from ""./pages/cookies/cookies.component"";
-class App extends React.Component {
-    constructor() {
-        super();
-        this.state = {
-            navbarOpen: false,
-            showSuccess: true,
-        }
-    }
-    handleNavbar = () => {
-        this.setState({
-            navbarOpen: !this.state.navbarOpen
-        });
-    };
-    render() {
-        const {location} = this.props;
-        return (
-        <div>
-                <div>
-                    <MainContainer>
-                        <GlobalStyle/>
-                        <Navbar
-                            navbarState={this.state.navbarOpen}
-                            handleNavbar={this.handleNavbar}
-                        />
-                        <Route render={({location}) => (
-                            <TransitionGroup>
-                                <CSSTransition
-                                    key={location.key}
-                                    classNames=""fade""
-                                    timeout={800}
-                                >
-                                    <Switch>
-                                        <Route exact path='/' component={HomePage} />
-                                        <Route exact path='/proces' component={ProcessPage} />
-                                        <Route exact path='/samarbejdsproces' component={ProcessIndepth} />
-                                        <Route exact path='/services' component={ServicePage}/>
-                                        <Route exact path='/om_os' component={AboutPage}/>
-                                        <Route exact path='/kontakt' component={ContactPage}/>
-                                        <Route exact path='/cookies' component={Cookies} />
-                                    </Switch>
-                                </CSSTransition>
-                            </TransitionGroup>
-                        )}
-                        />
-                        {location.pathname !== ""/"" && <Footer/>}
-                    </MainContainer>
-                </div>
-        </div>
-        );
-    }
-}
-export default withRouter(App);
-
-App.test.js
-import React from 'react';
-import { render } from '@testing-library/react';
-import App from './App';
-test('renders learn react link', () => {
-  const { getByText } = render(<App />);
-  const linkElement = getByText(/learn react/i);
-  expect(linkElement).toBeInTheDocument();
-});
-
-The purpose of the test is to deploy the website with circleci.
-",,CircleCI
-"[ERROR] Error executing Maven.
-[ERROR] The specified user settings file does not exist: /home/circleci/project/ .folder/mvn-settings.xml
-I have a script build-project.sh to build a mvn project.
-This script takes the environment variable CUSTOM_MVN_OPTS, where I specify the path to the custom settings (e.g. private repo locations, etc.).
-When I run the script in CircleCI or any CICD pipeline under docker env it's throwing above error.
-#!/bin/sh
-# build-project.sh
-
-mvn ${CUSTOM_MVN_OPTS} package
-
-What I expect? build-project.sh to build artifacts.
-CUSTOM_MVN_OPTS=""-s .folder/mvn-settings.xml"" build-project.sh
-
-","1. After debugging found the solution.
-Fix: hardcode the -s option within the script, you cannot pass this thru env variable.
-#!/bin/sh
-# build-project.sh
-
-if [ -z ""${CUSTOM_MVN_OPTS}"" ]; then
-  mvn package
-else 
-  mvn -s ${CUSTOM_MVN_OPTS} package
-fi
-
-",CircleCI
-"I have below step to go through for loop. However I'm getting below synatx error.
-Code:
-steps:
-  arti_lib_deploy:
-    stage: build image
-    type: freestyle
-    title: ""Deploy Libs to Artifactory""
-    image: 'hub.artifactory.gcp.xxxx/curlimages/curl:latest'
-    commands:
-      - LIB_FOLDERS=[""lib1"",""lib2""]
-      - >
-        for LIB_FOLDER in ""${LIB_FOLDERS[@]}""; do
-         echo ""FolderName- ${LIB_FOLDER}""
-         curl -X GET -kv https://xxxx.service.test/entitlements/effPermissions?permissionId= ""${LIB_FOLDER}""
-        done
-
-Error:
-Executing command: LIB_FOLDERS=[""lib1"",""lib2""]
-[2023-08-06T10:54:14.700Z] ------------------------------
-Executing command: for LIB_FOLDER in ""${LIB_FOLDERS[@]}""; do
-echo ""FolderName-${LIB_FOLDER }""
-done
-
-[2023-08-06T10:54:14.701Z] /bin/sh: syntax error: bad substitution
-[2023-08-06T10:54:15.036Z] Reading environment variable exporting file contents.[2023-08-06T10:54:15.052Z] Reading environment variable exporting file contents.[2023-08-06T10:54:16.224Z] [SYSTEM]
-Message
-Failed to run freestyle step: Deploy Libs to Artifactory
-Caused by
-Container for step title: Deploy Libs to Artifactory, step type: freestyle, operation: Freestylestep. 
-Failed with exit code: 2
-Documentation Link https://codefresh.io/docs/docs/codefresh-yaml/steps/freestyle/
-Exit code
-2
-Name
-NonZeroExitCodeError
-
-Sh command:
- commands:
-   - LIB_FOLDERS=""lib1 lib2""
-   - for LIB_FOLDER in $LIB_FOLDERS;
-     do
-      echo ""FolderName- $LIB_FOLDER""
-     done
-
-Error:
-Executing command: LIB_FOLDERS=""lib1 lib2""
-------------------------------
-/bin/sh: syntax error: unexpected end of file (expecting ""done"")
-[2023-08-06T23:48:22.532Z] Reading environment variable exporting file contents.
-[2023-08-06T23:48:22.543Z] Reading environment variable exporting file contents.
-
-","1. I'm not familiar with Codefresh, but your question gives me some vague ideas about how it interacts with the shell.
-Your original code has ""${LIB_FOLDERS[@]}"" in one of the commands. That's bash-specific syntax (it expands to all the elements of the LIB_FOLDERS array), but the error message indicates that Codefresh uses /bin/sh.  Apparently /bin/sh is not bash on your system (it typically isn't); perhaps it's ash or dash.
-One solution would be to avoid bash-specific commands. Another would be to figure out how to persuade Codefresh to use bash rather than /bin/sh. Or you could probably write an external bash script that's invoked as a command.
-Your second attempt, after some comments was this:
- commands:
-   - LIB_FOLDERS=""lib1 lib2""
-   - for LIB_FOLDER in $LIB_FOLDERS;
-     do
-      echo ""FolderName- $LIB_FOLDER""
-     done
-
-which gave:
-/bin/sh: syntax error: unexpected end of file (expecting ""done"")
-
-This suggests that each command preceded by -  is passed to /bin/sh separately. One solution is to write the loop in a single line, for example:
- commands:
-   - LIB_FOLDERS=""lib1 lib2""
-   - for LIB_FOLDER in $LIB_FOLDERS ; do echo ""FolderName- $LIB_FOLDER"" ; done
-
-And again, if this gets too complicated, it might be best to put the commands in an external script. Note in particular that lib1 and lib2 happen to be single words, so combining them into a string that you then split works. If there were spaces in those folder names, you'd have to do something more complicated.
-
-2. The same code works with multiple lines in Codefresh, as below:
-commands:
-   - LIB_FOLDERS=""lib1 lib2""
-   - >
-     for LIB_FOLDER in $LIB_FOLDERS; 
-      do echo ""FolderName- $LIB_FOLDER"";
-     done
-
-",Codefresh
-"I have a CodeFresh, GitHub pull-request pipeline.
-There are 2 scenarios where a PR marks as ""Failed"", when ideally it would show as ""Pending"" or no status.
-Scenario 1:
-When a new event is triggered, it terminates the previous build (as expected)
-
-Build was terminated by pipeline policy - new build triggered by pull-request on branch <my-branch>
-
-This is all great, but the build then shows as ""Failed"" on GitHub.  Theoretically, the new build would undo the ""failed"" status, but this can take quite some time, and it is difficult to follow what the latest running build is.  My terminationPolicy spec looks like this:
-terminationPolicy:
-  - type: branch
-    event: onCreate
-
-Termination Policy Docs:
-https://codefresh.io/docs/docs/integrations/codefresh-api/?#full-pipeline-specification
-Scenario 2:
-We want to bypass the build based on labels applied.  Ex: ""skip-test"", or be able to run tests without the limitations of the branchRegex.
-steps:
-  harakiri:
-    ...
-    commands:
-      - codefresh terminate ${{CF_BUILD_ID}}
-    when:
-      condition:
-        any:
-          isWorkInProgress: ""match('${{CF_PULL_REQUEST_LABELS}}', 'WIP', false) == true""
-
-Again, works great.  But marks the PR as ""failed"".
-
-If there were a way to inject a command into either of these, I could work with that. But the way we have it laid out, it requires an entire step to change the status to ""Pending"" (so I can't simply add an extra ""command"" to the harakiri step).
-Any thoughts?
-","1. Scenario 1
-I suggest you use github-status-updater with hooks (instead of the default status updates).
-It will set a pending status at the build start (and will keep this status if the build is terminated by policy).
-hooks:
-  on_success:
-    title: Set GitHub deployment status to ""success""
-    image: cloudposse/github-status-updater
-    environment:
-      - GITHUB_ACTION=update_state
-      - GITHUB_TOKEN=${{GITHUB_TOKEN}}
-      - GITHUB_OWNER=${{CF_REPO_OWNER}}
-      - GITHUB_REPO=${{CF_REPO_NAME}}
-      - GITHUB_REF=${{CF_REVISION}}
-      - GITHUB_CONTEXT=Codefresh CI - Build
-      - GITHUB_STATE=success
-      - GITHUB_TARGET_URL=${{CF_BUILD_URL}}
-  on_fail:
-    title: Set GitHub deployment status to ""failure""
-    image: cloudposse/github-status-updater
-    environment:
-      - GITHUB_ACTION=update_state
-      - GITHUB_TOKEN=${{GITHUB_TOKEN}}
-      - GITHUB_OWNER=${{CF_REPO_OWNER}}
-      - GITHUB_REPO=${{CF_REPO_NAME}}
-      - GITHUB_REF=${{CF_REVISION}}
-      - GITHUB_CONTEXT=Codefresh CI - Build
-      - GITHUB_STATE=failure
-      - GITHUB_TARGET_URL=${{CF_BUILD_URL}}
-  on_elected:
-    title: Set GitHub deployment status to ""pending""
-    image: cloudposse/github-status-updater
-    environment:
-      - GITHUB_ACTION=update_state
-      - GITHUB_TOKEN=${{GITHUB_TOKEN}}
-      - GITHUB_OWNER=${{CF_REPO_OWNER}}
-      - GITHUB_REPO=${{CF_REPO_NAME}}
-      - GITHUB_REF=${{CF_REVISION}}
-      - GITHUB_CONTEXT=Codefresh CI - Build
-      - GITHUB_STATE=pending
-      - GITHUB_TARGET_URL=${{CF_BUILD_URL}}    
-
-To disable the default status updates, patch the pipeline spec with the CLI:
-codefresh get pip <name> -o yaml > file.yml
-spec:
-  options:
-    enableNotifications: false
-
-codefresh replace -f file.yml
-",Codefresh
-"I've created a Codefresh pipeline to deploy an artifact to Gitlab Package Registry. Source code is also in Gitlab.
-I'm able to publish my artifact using a Gitlab Personal Access Token, but when I try to do it using a Gitlab Deploy Token, it fails (401 unauthorized error), no matter if I use Codefresh for it or not.
-I have defined this using Gradle, to publish to Gitlab Package Registry:
-    repositories {
-        maven {
-            url ""https://gitlab.com/api/v4/projects/<group_id>/packages/maven""
-            credentials(HttpHeaderCredentials) {
-                name = ""Private-Token""
-                value = '<private_token>'
-            }
-            authentication {
-                header(HttpHeaderAuthentication)
-            }
-        }
-    }
-
-I use the right <group_id> and <private_token> values, they are changed here for security reasons.
-If I provide my Personal Access Token in <private_token>, I can publish to Gitlab Package Registry without any problem. But when I use a generated Deploy Token, it fails. Both my Personal Access Token and the Deploy Token have the same name and username (in the case of Deploy Token).
-I'm getting a 401 unauthorized error:
-* What went wrong:
-Execution failed for task ':publishMavenJavaPublicationToMavenRepository'.
-> Failed to publish publication 'mavenJava' to repository 'maven'
-   > Could not write to resource 'https://gitlab.com/api/v4/projects/<group_id>/packages/maven/mypackageroute/mypackage/0.1/mypackage-0.1.jar'.
-      > Could not PUT 'https://gitlab.com/api/v4/projects/<group_id>/packages/maven/mypackageroute/mypackage/0.1/mypackage-0.1.jar'. Received status code 401 from server: Unauthorized
-
-Does anyone know what I'm doing wrong?
-Thank you very much
-","1. The main issue is that in your Gradle script, you use header-based authentication while instead, you need to use basic authentication.
-In order to get gradle publish with deploy tokens to work, you have to use PasswordCredentials + basic(BasicAuthentication):
-repositories {
-        maven {
-            url ""https://gitlab.com/api/v4/projects/<project_id>/packages/maven""
-            credentials(PasswordCredentials) {
-                username = <username>
-                password = <password>
-            }
-            authentication {
-                basic(BasicAuthentication)
-            }
-        }
-    }
-
-
-2. You need to set name to ""Deploy-Token"" when using a deploy token, i.e.
-repositories {
-    maven {
-        url ""https://gitlab.com/api/v4/projects/<group_id>/packages/maven""
-        credentials(HttpHeaderCredentials) {
-            name = ""Deploy-Token""
-            value = '<deploy_token>'
-        }
-        authentication {
-            header(HttpHeaderAuthentication)
-        }
-    }
-}
-
-Private-Token is used for personal access tokens, and Job-Token for CI access tokens.
-Note here that name is the name of the header added to the http request and is not related to the name or username of the token itself.
-",Codefresh
-"I am trying to run the concourse worker using a docker image on a gentoo host. When running the docker image of the worker in privileged mode I get:
-iptables: create-instance-chains: iptables: No chain/target/match by that name.
-
-My docker-compose file is
-version: '3'
-
-services:
-  worker:
-     image: private-concourse-worker-with-keys
-     command: worker
-     ports:
-     - ""7777:7777""
-     - ""7788:7788""
-     - ""7799:7799""
-     #restart: on-failure
-     privileged: true
-     environment:
-     - CONCOURSE_TSA_HOST=concourse-web-1.dev
-     - CONCOURSE_GARDEN_NETWORK
-
-My Dockerfile
-FROM concourse/concourse
-
-COPY keys/tsa_host_key.pub /concourse-keys/tsa_host_key.pub
-COPY keys/worker_key /concourse-keys/worker_key
-
-Some more errors
-worker_1  | {""timestamp"":""1526507528.298546791"",""source"":""guardian"",""message"":""guardian.create.containerizer-create.finished"",""log_level"":1,""data"":{""handle"":""426762cc-b9a8-47b0-711a-8f5ce18ff46c"",""session"":""23.2""}}
-worker_1  | {""timestamp"":""1526507528.298666477"",""source"":""guardian"",""message"":""guardian.create.containerizer-create.watch.watching"",""log_level"":1,""data"":{""handle"":""426762cc-b9a8-47b0-711a-8f5ce18ff46c"",""session"":""23.2.4""}}
-worker_1  | {""timestamp"":""1526507528.303164721"",""source"":""guardian"",""message"":""guardian.create.network.started"",""log_level"":1,""data"":{""handle"":""426762cc-b9a8-47b0-711a-8f5ce18ff46c"",""session"":""23.5"",""spec"":""""}}
-worker_1  | {""timestamp"":""1526507528.303202152"",""source"":""guardian"",""message"":""guardian.create.network.config-create"",""log_level"":1,""data"":{""config"":{""ContainerHandle"":""426762cc-b9a8-47b0-711a-8f5ce18ff46c"",""HostIntf"":""wbpuf2nmpege-0"",""ContainerIntf"":""wbpuf2nmpege-1"",""IPTablePrefix"":""w--"",""IPTableInstance"":""bpuf2nmpege"",""BridgeName"":""wbrdg-0afe0000"",""BridgeIP"":""x.x.0.1"",""ContainerIP"":""x.x.0.2"",""ExternalIP"":""x.x.0.2"",""Subnet"":{""IP"":""x.x.0.0"",""Mask"":""/////A==""},""Mtu"":1500,""PluginNameservers"":null,""OperatorNameservers"":[],""AdditionalNameservers"":[""x.x.0.2""]},""handle"":""426762cc-b9a8-47b0-711a-8f5ce18ff46c"",""session"":""23.5"",""spec"":""""}}
-worker_1  | {""timestamp"":""1526507528.324085236"",""source"":""guardian"",""message"":""guardian.iptables-runner.command.failed"",""log_level"":2,""data"":{""argv"":[""/worker-state/3.6.0/assets/iptables/sbin/iptables"",""--wait"",""-A"",""w--instance-bpuf2nmpege-log"",""-m"",""conntrack"",""--ctstate"",""NEW,UNTRACKED,INVALID"",""--protocol"",""all"",""--jump"",""LOG"",""--log-prefix"",""426762cc-b9a8-47b0-711a-8f5c "",""-m"",""comment"",""--comment"",""426762cc-b9a8-47b0-711a-8f5ce18ff46c""],""error"":""exit status 1"",""exit-status"":1,""session"":""1.26"",""stderr"":""iptables: No chain/target/match by that name.\n"",""stdout"":"""",""took"":""1.281243ms""}}
-
-","1. It turns out it was because we were missing the log kernel module for iptables compiled into our distro.
-
-2. These are the environment variables I have used in my docker-compose file to make it work:
-environment:
-  CONCOURSE_POSTGRES_HOST: concourse-db
-  CONCOURSE_POSTGRES_USER: concourse_user
-  CONCOURSE_POSTGRES_PASSWORD: concourse_pass
-  CONCOURSE_POSTGRES_DATABASE: concourse
-  CONCOURSE_EXTERNAL_URL: http://localhost:8080
-  CONCOURSE_ADD_LOCAL_USER: test:test
-  CONCOURSE_MAIN_TEAM_LOCAL_USER: test
-  # instead of relying on the default ""detect""
-  CONCOURSE_WORKER_BAGGAGECLAIM_DRIVER: overlay
-  CONCOURSE_CLIENT_SECRET: Y29uY291cnNlLXdlYgo=
-  CONCOURSE_TSA_CLIENT_SECRET: Y29uY291cnNlLXdvcmtlcgo=
-  CONCOURSE_X_FRAME_OPTIONS: allow
-  CONCOURSE_CONTENT_SECURITY_POLICY: ""*""
-  CONCOURSE_CLUSTER_NAME: tutorial
-  CONCOURSE_WORKER_CONTAINERD_DNS_SERVER: ""8.8.8.8""
-  # For ARM-based machine, change the Concourse runtime to ""houdini""
-  CONCOURSE_WORKER_RUNTIME: ""containerd""
-
-Reference: https://concourse-ci.org/quick-start.html
-
-3. You should choose a container runtime; I recommend containerd.
-You can do that with the env var CONCOURSE_WORKER_RUNTIME: ""containerd""
-",Concourse
-"I'm trying to setup a concourse pipeline that builds an image and pushes it to a quay registry. However, it keeps failing with:
-Error: error resolving dockerfile path: please provide a valid path to a Dockerfile within the build context with --dockerfile
-
-This is the pipeline file:
-resources:
-- name: source-code
-  type: git
-  source:
-    uri: gitlab.git
-    branch: main
-    username: ((gitlab-auth.username))
-    password: ((gitlab-auth.password))
-
-- name: kaniko-image
-  type: registry-image
-  source:
-    repository: gcr.io/kaniko-project/executor
-    tag: debug
-
-- name: push-image
-  type: registry-image
-  source:
-    repository: quay.io/gitlab
-    username: ((quay-gitlab-mr.username))
-    password: ((quay-gitlab-mr.password))
-
-jobs:
-- name: build-and-push-image
-  plan:
-  - get: source-code
-    trigger: true
-  - get: kaniko-image
-  - task: build-task-image
-    config:
-      platform: linux
-      image_resource:
-        type: registry-image
-        source:
-          repository: quay.io
-          tag: kaniko-v1
-      inputs:
-      - name: source-code
-      params:
-        CONTEXT: source-code
-        DOCKERFILE: Dockerfile
-        IMAGE_NAME: quay.io/gitlab
-        TAG: 1.0.5
-      run:
-        path: /kaniko/executor
-        args:
-        - --context=${CONTEXT}
-        - --destination=${IMAGE_NAME}:${TAG}
-        - --dockerfile=${DOCKERFILE}
-        - --force
-  - put: push-image
-    params:
-      image: source-code/image.tar
-
-My understanding is that when Concourse pulls the source code down into the worker, it does so in a directory called source-code, so that's where the Dockerfile should be, as it is in the root of my directory. I have tried using workspace, variations of directory structures, and specifying the tmp dir that the Concourse logs show it is cloning to, but all result in the same error.
-When I don't use Kaniko and just do a normal build in a privileged task, I can build the image fine and push. But it fails with Kaniko and I cannot run privileged in my use case.
-Any ideas what is wrong?
-","1. Well, I had quite some trauma from this one. I got it working for a project so I can give you two essential tips when working with concourse and kaniko:
-
-Note that concourse will put inputs into a specifically named subfolder in /tmp, which is probably different every time. You can fix this (although I don't know if this is intended as the documentation seems to tell that absolute paths are not allowed) by providing an absolute path where the input should be put.
-
-The problem is you are not calling a shell, which means there is no substitution from environment variables. If you put the arguments directly into the args, it works.
-
-
-Example:
-  - task: kaniko-build-buildah
-    config:
-      platform: linux
-      image_resource:
-        type: registry-image
-        source:
-          repository: gcr.io/kaniko-project/executor
-          tag: v1.22.0
-      inputs:
-        - name: repo
-          path: /workspace/repo
-      run:
-        path: /kaniko/executor
-        args:
-          - --dockerfile=Containerfile
-          - --context=dir:///workspace/repo/dockerfile_dir
-          - --destination=registry/repo/image:latest
-          - --cache=true
-
-",Concourse
-"Q1
-I would like to declare a variable with a default value and use it in the Concourse resources,
-e.g. at the start of the pipeline.yml the below variable is declared:
-PROJECT: hello-world
-
-and then use it in resources/resource_types like
-groups:
-  - name: ((PROJECT))
-    jobs:
-      - pull-request
-      - create-artifacts
-
-Now I am getting an error like:
-  - groups.((PROJECT)): '((PROJECT))' is not a valid identifier: must start with a lowercase letter
-
-Actually the variable is only resolved when --var PROJECT=hello-world is passed when setting the pipeline.
-Curious: why is it not picking up the variable declared inside the pipeline.yml?
-I do not want to pass any additional argument when setting the pipeline; I would like to declare it in the YAML itself.
-Q2:
-Question Q1 above was resolved with anchors and aliases; please refer to my answer.
-But the same approach is not working inside resources:
-REPOSITORY_NAME: &REPOSITORY_NAME hello-world-repo
-
-resources:
-  - name: pull-request-branch
-    check_every: 1m
-    type: pull-request
-    icon: source-pull
-    source:
-      repo: cahcommercial/*REPOSITORY_NAME
-
-Any help please.
-","1. Q1:
-PROJECT: &PROJECT hello-world
-
-
-groups:
-  - name: *PROJECT
-    jobs:
-      - pull-request
-      - create-artifacts
-
-Q2
-REPOSITORY_NAME: &REPOSITORY_NAME
-  repo: owner/hello-world-repo
-
-resources:
-  - name: pull-request-branch
-    check_every: 1m
-    type: pull-request
-    icon: source-pull
-    source:
-      <<: *REPOSITORY_NAME
-
-In the end I feel that passing the variables from the command line is the best option, because it allows me to reuse the pipeline.yml:
-https://concourse-ci.org/vars.html#static-vars
-fly -t target set-pipeline --pipeline pipeline-name \
-  -c pipeline.yml \
-  -v PROJECT=hello-world \
-
-and then use the variable syntax in the pipeline.yml
-groups:
-  - name: ((PROJECT))
-    jobs:
-      - pull-request
-      - create-artifacts
-
-",Concourse
-"Failed to upload file. The AWS Access Key ID you provided does not exist in our records. (Service: Amazon S3; Status Code: 403; Error Code: InvalidAccessKeyId; Request ID: eea5dbcc-67a3-4d4b-a36f-16ebf7f055ae; S3 Extended Request ID: null)
-or
-HWCCON0006E: An unexpected error occurred when calling restAPI http://middle-end.icd-control.svc.cluster.local:8080/v4/ibm/deployments/crn%3Av1%3Aibm%3Alocal%3Adashdb-for-transactions%3Aus-south%3Aa%2F6ffa17fcf55f41cf9c082c34e772b17c%3Ab0aebb68-94fa-46ec-a1fc-1c999edb6187%3A%3A. Please retry later. If this error persists, please record the operation procedure and report to your console administrator.
-I'm doing a data engineering course from Coursera and enrolled in the IBM data engineering course ""Introduction to Data Engineering"". While attending a lab, I try to load a CSV file from my local machine into IBM Cloud Db2, but I keep facing this error.
-","1. 
-Do you have an AWS account?
-Create an access key for your user (not the root user; the root user is the one tied to the account email).
-Install the AWS CLI on your local machine.
-Configure the AWS CLI with aws configure (see the sketch below).
-Try again.
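-For example, a minimal sketch of the last three steps (use whatever region and output format fit your account):
-pip install awscli              # or your OS package manager / the official installer
-aws configure                   # prompts for access key ID, secret key, default region and output format
-aws sts get-caller-identity     # sanity check that the new credentials work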
-
-Good luck !
-
-2. I've just got the same error when trying to upload a local CSV file to an IBM Cloud Db2 database. It seems to be a kind of system error, unrelated to what you're specifically doing. An AWS account is irrelevant here; both you and I are trying to upload a local CSV data file to an IBM Cloud database, if I understand correctly. Just to be sure this is the situation, please re-check that you picked ""Local Computer"" as the source for the data, and not ""AWS S3"".
-IBM UI Data load
-",Concourse
-"Sometimes when a concourse pipeline is getting build, it tries to use the previous version of resource, not the latest one. I could confirm this because the resource hash don't match.
-Please let me know a solution to flush the resource hash.
-","1. Concourse v7.4.0 (released August 2021) adds the command
-fly clear-resource-cache -r <pipeline>/<resource>
-
-which will do what you are looking for.
-See:
-
-the documentation for clear-resource-cache.
-the release notes for v7.4.0
-
-
-2. The only way to flush the resource cache is to restart all the workers, as this will clear your ephemeral disks.
-
-3. The fly clear-resource-cache -r <pipeline>/<resource> command doesn't work for older versions.
-To achieve similar results, you can use
-fly -t ci clear-task-cache -j <pipeline>/<resource> -s <step-name>
-
-Check the help command for more info:
-Usage:
-  fly [OPTIONS] clear-task-cache [clear-task-cache-OPTIONS]
-
-Application Options:
-  -t, --target=              Concourse target name
-  -v, --version              Print the version of Fly and exit
-      --verbose              Print API requests and responses
-      --print-table-headers  Print table headers even for redirected output
-
-Help Options:
-  -h, --help                 Show this help message
-
-[clear-task-cache command options]
-      -j, --job=             Job to clear cache from
-      -s, --step=            Step name to clear cache from
-      -c, --cache-path=      Cache directory to clear out
-      -n, --non-interactive  Destroy the task cache(s) without confirmation
-
-",Concourse
-"I want to add a custom variable that allows me to filter the data where tag value for ""process"" equals to the variable value in grafana dashboard. I am able to add a custom variable to the dashboard with value options ""process1"", ""process2"" and ""process3"", but when I use this variable in the query as
- |> filter(fn: (r) => r[""Process ID""] == ${process})
-it gives me the error undefined identifier process2.
-When I replace the variable ${process} with ""process2"", the query works correctly and filters the data by that particular process, but it doesn't work when I use the variable.
-How can I fix this issue?
-I tried using the variable in the flux query as
- |> filter(fn: (r) => r[""Process ID""] == ${process})
-but it is not working
-","1. Try to use advanced variable format options:
-  |> filter(fn: (r) => r[""Process ID""] == ${process:doublequote})
-
-",Flux
-"I would like to pass router params into Vuex actions, without having to fetch them for every single action in a large form like so:
-edit_sport_type({ rootState, state, commit }, event) {
-  const sportName = rootState.route.params.sportName <-------
-  const payload = {sportName, event}                 <-------
-  commit(types.EDIT_SPORT_TYPE, payload)
-},
-
-Or like so,
-edit_sport_type({ state, commit, getters }, event) {
-  const payload = {sportName, getters.getSportName}  <-------
-  commit(types.EDIT_SPORT_TYPE, payload)
-},
-
-Or even worse: grabbing params from component props and passing them to dispatch, for every dispatch.
-Is there a way to abstract this for a large set of actions?
-Or perhaps an alternative approach within mutations themselves?
-","1. To get params from vuex store action, import your vue-router's instance, then access params of the router instance from your vuex store via the router.currentRoute object.
-Sample implementation below:
-router at src/router/index.js:
-import Vue from 'vue'
-import VueRouter from 'vue-router'
-import routes from './routes'
-
-Vue.use(VueRouter)
-
-const router = new VueRouter({
-  mode: 'history',
-  routes
-})
-
-export default router
-
-import the router at vuex store:
-import router from '@/router'
-then access params at vuex action function, in this case ""id"", like below:
-router.currentRoute.params.id
-
-
-2. Not sure I fully understand your question, but:
-This plugin keeps your router's state and your store in sync:
-https://github.com/vuejs/vuex-router-sync
-and it sounds like what you are looking for.
-
-3. You can use this function to get params into Vuex
-import router from './router';
-router.onReady(()=>{
-   console.log(router.currentRoute.params.sportName)
-})
-
-",Flux
-"I have applied keda scaledobject for my deployment, now i want to manage changes for git. So i tried to apply flux to this scaledobject but i am getting error like below
-**flux error for scaledobject : ScaledObject/jazz-keda dry-run failed (Forbidden): scaledobjects.keda.sh ""jazz-keda"" is forbidden: User ""system:serviceaccount:crystal-test:default-login-serviceaccount"" cannot patch resource ""scaledobjects"" in API group ""keda.sh"" at the cluster scope**
-
-Is it not possible to apply the Flux approach to a KEDA object? I don't have admin permission to change anything in the cluster; please help me figure this out.
-","1. As per the error, it seems Service Account which is associated with flux does not have sufficient permissions to modify KEDA ScaledObjects in your Kubernetes cluster, that's why you're facing this error.
-This error can be resolved by adding ClusterRole with required permissions to the service account which is associated with the Flux. As you do not have Admin permissions, you will have to request these below steps to your cluster administrator:
-
-Create a ClusterRole with the appropriate permissions.
-
-Bind the above ClusterRole to the Flux service account; you will need to create a ClusterRoleBinding for this.
-
-
-Refer to the official Kubernetes document on Using RBAC Authorization, which allows you to dynamically configure policies through the Kubernetes API. The RBAC API declares the ClusterRole and ClusterRoleBinding Kubernetes objects.
-Then, after the cluster admin applies the above configuration with ""kubectl apply -f <file>.yaml"", the error should be resolved; since you do not have admin permissions, this will be managed by the cluster admin.
-Note: KEDA requires specific RBAC rules to allow service accounts to create, modify, and delete ScaledObjects. Use kubectl auth can-i to check the permissions of your service account; refer to the official Kubernetes docs on how to use this command for more information.
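-A hedged sketch of what the cluster administrator could run; the role name flux-scaledobject-editor is made up here, while the namespace and service account come from the error message above (trim the verb list to whatever your policy allows):
-kubectl create clusterrole flux-scaledobject-editor \
-  --verb=get,list,watch,create,update,patch,delete \
-  --resource=scaledobjects.keda.sh
-kubectl create clusterrolebinding flux-scaledobject-editor \
-  --clusterrole=flux-scaledobject-editor \
-  --serviceaccount=crystal-test:default-login-serviceaccount
-# verify afterwards from the Flux side:
-kubectl auth can-i patch scaledobjects.keda.sh \
-  --as=system:serviceaccount:crystal-test:default-login-serviceaccount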
-",Flux
-"I currently have an open merge request on GitLab. However, I want to rename the branch which is used for the merge request, without closing the merge request.
-","1. Unfortunately you can't do that. Only create a new merge request. Gitlab doesn't have this functionality in current version (17.x.x)
-",GitLab
-"I want to upgrade my Gitlab Debian edition from 16.9.1-ce to 17.0.1. Something has changed how do I upgrade.
-apt-get install gitlab-ce
-Reading package lists... Done
-Building dependency tree
-Reading state information... Done
-gitlab-ce is already the newest version (16.9.1-ce.0).
-0 upgraded, 0 newly installed, 0 to remove and 8 not upgraded.
-","1. You need to upgrade twice
-apt-get install gitlab-ee=16.11.3-ee.0
-apt-get install gitlab-ee=17.0.1-ee.0
-
-Which version of debian are you using?
-If APT isn't finding new packages maybe you can try reinstalling the apt source.
-curl -s https://packages.gitlab.com/install/repositories/gitlab/gitlab-ee/script.deb.sh | sudo bash
-
-Otherwise you can download it manually.
-https://packages.gitlab.com/app/gitlab/gitlab-ee/search?q=&filter=debs&filter=debs&dist=debian
-I had a similar issue with Linux Mint virginia not being supported; I replaced it with ubuntu 22.04 and now it's OK (but Debian should be supported).
-You can check a file, which will differ from mine
-cat /var/lib/apt/lists/packages.gitlab.com_gitlab_gitlab-ee_linuxmint_dists_virginia_InRelease
------BEGIN PGP SIGNED MESSAGE-----
-Hash: SHA256
-
-Origin: packages.gitlab.com/empty/deb/
-Label: packagecloud_generic_empty_deb_index
-
-
-if it's not an empty file you should have something like /var/lib/apt/lists/packages.gitlab.com_gitlab_gitlab-ee_ubuntu_dists_jammy_main_binary-amd64_Packages which contains all the packages
-
-2. To get this to work I had to update to the latest version of the Gitlab 16 releases.
-Step 1.
-curl -s https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | sudo bash
-
-Step 2.
-sudo apt-get install gitlab-ce=16.11.3-ce.0
-
-Step 3.
-sudo apt-get install gitlab-ce=17.0.1-ce.0
-
-",GitLab
-"I am trying to push to git and am getting this error message
-Enumerating objects: 57, done.
-Counting objects: 100% (56/56), done.
-Delta compression using up to 8 threads
-Compressing objects: 100% (40/40), done.
-error: RPC failed; HTTP 504 curl 22 The requested URL returned error: 504
-send-pack: unexpected disconnect while reading sideband packet
-Writing objects: 100% (41/41), 185.34 MiB | 1.51 MiB/s, done.
-Total 41 (delta 13), reused 0 (delta 0), pack-reused 0
-fatal: the remote end hung up unexpectedly
-Everything up-to-date
-
-I have already tried:
-git config http.postBuffer 524288000
-
-But this resulted in a different but similar message:
-Enumerating objects: 57, done.
-Counting objects: 100% (56/56), done.
-Delta compression using up to 8 threads
-Compressing objects: 100% (40/40), done.
-Writing objects: 100% (41/41), 185.34 MiB | 6.38 MiB/s, done.
-Total 41 (delta 13), reused 0 (delta 0), pack-reused 0
-error: RPC failed; curl 92 HTTP/2 stream 0 was not closed cleanly: PROTOCOL_ERROR (err 1)
-send-pack: unexpected disconnect while reading sideband packet
-fatal: the remote end hung up unexpectedly
-Everything up-to-date
-
-Any help would be much appreciated!
-","1. Problem which was pointed out above was that I was trying to commit to much.
-I ran:
-git reset --soft HEAD~1
-
-which removes the previous commit while keeping the changes. I then committed and pushed smaller changes, as sketched below.
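-A rough sketch of that workflow (the paths are placeholders for whatever you actually changed):
-git reset --soft HEAD~1      # undo the oversized commit, keep the changes staged
-git restore --staged .       # unstage everything (on older git: git reset)
-git add src/                 # stage a first, smaller chunk
-git commit -m 'part 1' && git push
-git add assets/              # then the next chunk, and so on
-git commit -m 'part 2' && git push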
-
-2. If you are submitting too much, you can increase the http buffer size:
-git config --global http.postBuffer 120000000 (~120MB)
-
-3. First, ignore the previous commit with large files using:
-
-git reset --soft HEAD~1
-
-Re-run the command multiple times until the previous commit is the last successful commit you had.
-Then you can try increasing the buffer size for HTTP connections by setting http.postBuffer configuration option in Git. You can do this by running:
-
-git config --global http.postBuffer 524288000
-
-This sets the buffer size to 500MB. You can adjust the value if needed.
-Finally, push your changes again to your remote repository.
-",GitLab
-"From jenkins script console, how can I initiate a build for a job?
-Tried:
-    for(job in Hudson.instance.getView(view_name).items) {
-    job.startBuild()
-}
-
-Error:
-groovy.lang.MissingMethodException: No signature of method: hudson.model.FreeStyleProject.startBuild() is applicable for argument types: () values: []
-","1. You can use run.scheduleBuild, as example
-
- cause = new hudson.model.Cause.RemoteCause(startServer, startNote)
- failedRuns.each{run -> run.scheduleBuild(cause)}
-
-
-
-2. Jenkins.instance.getAllItems(Job.class).each { jobitem ->     
-  if(!jobitem.isDisabled() && jobitem.getFullName() =~ /job name pattern/) {
-        println(""Job Name: "" + jobitem.getFullName())
-        Jenkins.instance.queue.schedule(jobitem, 0)
-  }  
-}
-
-The quick version above won't pass any parameters, even if the job has default parameter values defined.
-To fully emulate building a job the way clicking it on the web page does, you need to build the parameter set yourself.
-The following answer is an example of passing parameters:
-https://stackoverflow.com/a/42509501/5471097
-",Jenkins
-"I have jenkins groovy pipeline which triggers other builds. It is done in following script:
-for (int i = 0; i < projectsPath.size(); i++) {
-    stepsForParallel[jenkinsPath] = {
-        stage(""build-${jenkinsPath}"") {
-            def absoluteJenkinsPath = ""/${jenkinsPath}/BUILD""
-            build job: absoluteJenkinsPath, parameters: [[$class: 'StringParameterValue', name: 'GIT_BRANCH', value: branch],
-                                                         [$class: 'StringParameterValue', name: 'ROOT_EXECUTOR', value: rootExecutor]]
-        }
-    }
-}
-parallel stepsForParallel
-
-The problem is that my jobs depend on other common job, i.e. job X triggers job Y and job Z triggers job Y. What I'd like to achieve is that the job X triggers job Y and job Z waits for result of Y triggered by X.
-I suppose I need to iterate over all running builds and check if any build of the same type is running. If yes then wait for it. Following code could wait for build to be done:
-def busyExecutors = Jenkins.instance.computers
-                        .collect { 
-                          c -> c.executors.findAll { it.isBusy() }
-                        }
-                        .flatten()
-busyExecutors.each { e -> 
-    e.getCurrentWorkUnit().context.future.get()
-}
-
-My problem is that I need to tell which running job I need to wait for. To do so I need to check:
-
-build parameters
-build environments variables
-job name
-
-How can I retrieve this kind of data?
-I know that Jenkins has a quiet-period feature, but after the period expires a new job will be triggered anyway.
-EDIT1
-Just to clarify why I need this function. I have jobs which builds applications and libs. Applications depend on libs and libs depend on other libs. When build is triggered then it triggers downstream jobs (libs on which it depends).
-Sample dependency tree:
-A -> B,C,D,E
-B -> F
-C -> F
-D -> F
-E -> F
-
-So when I trigger A then B,C,D,E are triggered and F is also triggered (4 times). I'd like to trigger F only once.
-I have a beta/PoC solution (below) which almost works. Right now I have the following problems with this code:
-
-echo with text ""found already running job"" is not flushed to the screen until job.future.get() ends
-I have this ugly ""wait"" (for(i = 0; i < 1000; ++i){}). It is because result field isn't set when get method returns
-import hudson.model.*
-
-def getMatchingJob(projectName, branchName, rootExecutor){
-
-    result = null
-
-    def busyExecutors = []
-    for(i = 0; i < Jenkins.instance.computers.size(); ++i){
-        def computer = Jenkins.instance.computers[i]
-        for(j = 0; j < computer.getExecutors().size(); ++j){
-            def executor = computer.executors[j]
-            if(executor.isBusy()){
-                busyExecutors.add(executor)
-            }
-        }
-    }
-
-    for(i = 0; i < busyExecutors.size(); ++i){
-        def workUnit = busyExecutors[i].getCurrentWorkUnit()
-        if(!projectName.equals(workUnit.work.context.executionRef.job)){
-            continue
-        }
-        def context = workUnit.context
-        context.future.waitForStart()
-
-        def parameters
-        def env
-        for(action in context.task.context.executionRef.run.getAllActions()){
-            if(action instanceof hudson.model.ParametersAction){
-                parameters = action
-            } else if(action instanceof org.jenkinsci.plugins.workflow.cps.EnvActionImpl){
-                env = action
-            }
-        }
-
-        def gitBranchParam = parameters.getParameter(""GIT_BRANCH"")
-        def rootExecutorParam = parameters.getParameter(""ROOT_EXECUTOR"")
-
-        gitBranchParam = gitBranchParam ? gitBranchParam.getValue() : null
-        rootExecutorParam = rootExecutorParam ? rootExecutorParam.getValue() : null
-
-        println rootExecutorParam
-        println gitBranchParam
-
-        if(
-            branchName.equals(gitBranchParam)
-            && (rootExecutor == null || rootExecutor.equals(rootExecutorParam))
-        ){
-            result = [
-                ""future"" : context.future,
-                ""run"" : context.task.context.executionRef.run,
-                ""url"" : busyExecutors[i].getCurrentExecutable().getUrl()
-            ]
-        }
-    }
-    result
-}
-
-job = getMatchingJob('project/module/BUILD', 'branch', null)
-if(job != null){
-    echo ""found already running job""
-    println job
-    def done = job.future.get()
-    for(i = 0; i < 1000; ++i){}
-    result = done.getParent().context.executionRef.run.result
-    println done.toString()
-    if(!""SUCCESS"".equals(result)){
-        error 'project/module/BUILD: ' + result
-    }
-    println job.run.result
-}
-
-
-","1. I have a similar problem to solve. What I am doing, though, is iterating over the jobs (since an active job might not be executed on an executor yet).
-The triggering works like this in my solution:
-
-if a job has been triggered manually or by VCS, it triggers all its (recursive) downstream jobs
-if a job has been triggered by another upstream job, it does not trigger anything
-
-This way, the jobs are grouped by their trigger cause, which can be retrieved with
-@NonCPS
-def getTriggerBuild(currentBuild)
-{
-    def triggerBuild = currentBuild.rawBuild.getCause(hudson.model.Cause$UpstreamCause)
-    if (triggerBuild) {
-        return [triggerBuild.getUpstreamProject(), triggerBuild.getUpstreamBuild()]
-    }
-    return null
-}
-
-I give each job the list of direct upstream jobs it has. The job can then check whether the upstream jobs have finished the build in the same group with
-@NonCPS
-def findBuildTriggeredBy(job, triggerJob, triggerBuild)
-{
-    def jobBuilds = job.getBuilds()
-    for (buildIndex = 0; buildIndex < jobBuilds.size(); ++buildIndex)
-    {
-        def build = jobBuilds[buildIndex]
-        def buildCause = build.getCause(hudson.model.Cause$UpstreamCause)
-        if (buildCause)
-        {
-            def causeJob   = buildCause.getUpstreamProject()
-            def causeBuild = buildCause.getUpstreamBuild()
-            if (causeJob == triggerJob && causeBuild == triggerBuild)
-            {
-                return build.getNumber()
-            }
-        }
-    }
-    return null
-}
-
-From there, once the list of upstream builds have been made, I wait on them.
-def waitForUpstreamBuilds(upstreamBuilds)
-{
-    // Iterate list -- NOTE: we cannot use groovy style or even modern java style iteration
-    for (upstreamBuildIndex = 0; upstreamBuildIndex < upstreamBuilds.size(); ++upstreamBuildIndex)
-    {
-        def entry = upstreamBuilds[upstreamBuildIndex]
-        def upstreamJobName = entry[0]
-        def upstreamBuildId = entry[1]
-        while (true)
-        {
-            def status = isUpstreamOK(upstreamJobName, upstreamBuildId)
-            if (status == 'OK')
-            {
-                break
-            }
-            else if (status == 'IN_PROGRESS')
-            {
-                echo ""waiting for job ${upstreamJobName}#${upstreamBuildId} to finish""
-                sleep 10
-            }
-            else if (status == 'FAILED')
-            {
-                echo ""${upstreamJobName}#${upstreamBuildId} did not finish successfully, aborting this build""
-                return false
-            }
-        }
-    }
-    return true
-}
-
-And abort the current build if one of the upstream builds failed (which nicely translates as a ""aborted build"" instead of a ""failed build"").
-The full code is there: https://github.com/doudou/autoproj-jenkins/blob/use_autoproj_to_bootstrap_in_packages/lib/autoproj/jenkins/templates/library.pipeline.erb
-The major downside of my solution is that the wait is expensive CPU-wise when there are a lot of builds waiting. There's the built-in waitUntil, but it led to deadlocks (I haven't tried the latest version of the pipeline plugins, so it might have been solved). I'm looking for ways to fix that right now; that's how I found your question.
-",Jenkins
-"I have a jenkins running on EC2 behind ALB,
-on the other aws account I have EKS with Fargate profile.
-I'm trying to run jenkins agents on fargate using the Kubernetes plugin (Can't use the ECS plugin for other reason)
-I get connection rejected:
-The logs indicate that the connection to the Jenkins master is being rejected because the acknowledgment sequence expected by the JNLP4 protocol is not being received
-I found that this is due to jnlp traffic from jenkins that can't pass through ALB, and needs to be transfered raw ,using NLB with tcp 50000 port connection.
-please assist, I don't want to change my jenkins setup.
-Tried alpine image, but the issue is with the ack type (jnlp)
-","1. With a similar setup, I solved using the WebSocket connection (see here) to connect the agents over HTTPS.
-If you are using SSL termination on the ALB, with a self-signed certificate, you need also to build a custom inbound-agent base image (see here) with the CA certificate.
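-A hedged sketch (not the plugin's documented procedure) of the CA import you would typically run while building that custom inbound-agent image; alb-ca.crt is whatever certificate you exported from the load balancer, and keytool's -cacerts option needs Java 9 or newer:
-# run as root in the image build, then switch back to the jenkins user
-keytool -importcert -noprompt -trustcacerts \
-  -alias alb-ca \
-  -file alb-ca.crt \
-  -cacerts -storepass changeit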
-",Jenkins
-"I have tried multiple solutions in jenkins to copy a file on remote which is EC2 window server on AWS.
-
-Publish over ssh: provided key, hostname, username and password but connection is failed every time
-
-pipeline script:
-pipeline {
-agent any
-     stages {
-         stage('SCP') {
-             steps {
-                 bat '""C:\\Program Files\\Git\\usr\\bin\\scp.exe"" -i ""C:\\Live"" C:\\Windows\\System32\\config\\systemprofile\\AppData\\Local\\Jenkins\\.jenkins\\workspace\\MSDeploy\\abc.txt abc.txt'
-                 bat '""c:\\Program Files\\Git\\usr\\bin\\ssh.exe"" -i ""C:\\Live"" tom@xy.xyz.xy.xz ls -ltr'
-             }
-         }
-     }
- }
-
-where C:\Live is remote server directory and C:\\Windows\\System32\\config\\systemprofile\\AppData\\Local\\Jenkins\\.jenkins\\workspace\\MSDeploy\\abc.txt is the local directory, but it throws an error: shows no such file or directory found
-
-
-
-
-pipeline {
- agent any
- stage ('Deploy') {
-     steps {
-         withCredentials([[$class: 'AmazonWebServicesCredentialsBinding', accessKeyVariable: 'var', credentialsId: 'credid', secretKeyVariable: 'seckey']]) {
-             writeFile file: 'groovy1.txt', text: 'ls'
-             bat 'ls -l groovy1.txt'
-             bat 'cat groovy1.txt'
-         }
-     } 
- }
-}
-
-
-
-It does create the file with the text, but the copy to the remote server still doesn't happen. None of the solutions worked for me.
-What have I missed?
-","1. You're using windows server, you need to use some tools to achieve this, copy over ssh probably won't work here. The best straight solution which I normally use it.
-Step 1:
-
-Create a network drive and attached it with shared folder.
-You can use bat command to attach
-
-
-net use N: \\ServerName\Folder /user:Administrator Password
-
-
-N: is your drive letter.
-
-Now, from Jenkins, use the copy command:
-
-
-copy ""D:\Jenkins\file N:""
-
-You can use these commands in your Jenkinsfile; refer to this link for the same:
-https://thenucleargeeks.com/2020/11/24/how-to-run-batch-scripts-commands-in-jenkinsfile/
-This should work for your case (a pipeline sketch follows below); let me know if you face any issues.
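-Putting it together, a minimal pipeline sketch (ServerName, Folder, the Administrator credentials and the file path are placeholders you need to replace; the plain-text password is only for illustration - prefer the withCredentials step in practice):
-pipeline {
-    agent any
-    stages {
-        stage('Copy to remote') {
-            steps {
-                // map the remote shared folder as drive N:, copy the artifact, then detach the drive
-                bat 'net use N: \\\\ServerName\\Folder /user:Administrator Password'
-                bat 'copy ""%WORKSPACE%\\abc.txt"" N:\\'
-                bat 'net use N: /delete'
-            }
-        }
-    }
-}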
-",Jenkins
-"How to interpret k6's test results?
-The official documentation does explain a bit on the different types of metrics reported by k6 and how to interpret them.
-But having read them both, I still am not able to put two and two together. So let me be very specific, is the following correct?
-
-The http_req_duration metric is the sum of:
-
-http_req_blocked: Time in the DNS lookup, connection, and TLS handshake phases.
-http_req_sending: Time to send the request.
-http_req_waiting: Time waiting for the server to process the request.
-http_req_receiving: Time to receive the response.
-
-For example, if you have:
-
-http_req_blocked: 20ms
-http_req_sending: 10ms
-http_req_waiting: 250ms
-http_req_receiving: 20ms
-
-The total http_req_duration would be 300ms.
-
-If so, why is it not matching my k6 test result below? If not, then what's the correct way to interpret k6's test results, including http_req_blocked, http_req_connecting, http_req_duration, http_req_waiting and http_req_receiving?
-
-
-
-","1. The docs on built-in metrics are quite clear what value the http_req_duration Trend metric tracks:
-
-Total time for the request. It’s equal to http_req_sending + http_req_waiting + http_req_receiving (i.e. how long did the remote server take to process the request and respond, without the initial DNS lookup/connection times).
-
-It does not include the timings from http_req_blocked.
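-Applying that to the example numbers in the question: http_req_duration = 10 ms (sending) + 250 ms (waiting) + 20 ms (receiving) = 280 ms, while the 20 ms of http_req_blocked is reported as a separate metric.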
-",k6
-"I would like to give load of multiple users from K6 tool and the automation script is recorded in Playwright. When I run the script using K6 run test.js getting below error.
-Value is not an object: undefined
-import { exec } from 'k6/execution';
-import { sleep } from 'k6';
-export let options = {
-stages: [
-    { duration: '10m', target: 10 }, // Ramp up to 10 users over 1 minute
-    { duration: '9m', target: 10 }, // Stay at 10 users for 9 minutes
-    { duration: '1m', target: 0 }   // Ramp down to 0 users over 1 minute
-    ],
-thresholds: {
-    http_req_duration: ['p(95)<500'], // Thresholds for response time
-    }
-};
-
-export default function () {
-    exec('playwright test HRH.spec.js'); // Execute the Playwright script via command line
-    sleep(1); // Add some delay if needed
-}
-
-I get the error in the exec function.
-Below are code from HRH.spec.js
-import { test, expect } from '@playwright/test';
-test('test', async ({ page }) => {
-  await page.goto('https://www.google.com/');
-});
-
-Standalone Playwright script run result:
-C:/k6project/tests> npx playwright test .\HRH.spec.js --headed
-
-Running 1 test using 1 worker
-
-  ✓  1 HRH.spec.js:3:5 › test (1.3s)
-
-  1 passed (3.0s)
-
-Result of running the above script with the k6 tool:
-C:/k6project/tests>k6 run script.js
-
-          /\      |‾‾| /‾‾/   /‾‾/
-     /\  /  \     |  |/  /   /  /
-    /  \/    \    |     (   /   ‾‾\
-   /          \   |  |\  \ |  (‾)  |
-  / __________ \  |__| \__\ \_____/ .io
-
-     execution: local
-        script: script.js
-        output: -
-
-     scenarios: (100.00%) 1 scenario, 10 max VUs, 40s max duration (incl. graceful stop):
-              * default: Up to 10 looping VUs for 10s over 3 stages (gracefulRampDown: 30s, gracefulStop: 30s)
-
-ERRO[0000] TypeError: Value is not an object: undefined
-        at file:///C:/k6project/tests/script.js:51:9(5)  executor=ramping-vus scenario=default source=stacktrace
-ERRO[0000] TypeError: Value is not an object: undefined
-        at file:///C:/k6project/tests/script.js:51:9(5)  executor=ramping-vus scenario=default source=stacktrace
-
-","1. 
-You are confusing k6/execution with the xk6-exec extension.
-k6/execution comes built-in with k6 since version 0.35 (November 2021).
-
-k6/execution provides the capability to get information about the current test execution state inside the test script. You can read in your script the execution state during the test execution and change your script logic based on the current state.
-
-It is imported and used as follows:
-
-import exec from 'k6/execution';
-export default function () {
-  console.log(exec.scenario.name);
-}
-
-
-It's an object and not a function that can be called.
-
-Then there's xk6-exec:
-
-A k6 extension for running external commands.
-
-xk6-exec has a slightly different API:
-
-import exec from 'k6/x/exec';
-export default function () {
-  console.log(exec.command(""date""));
-  console.log(exec.command(""ls"",[""-a"",""-l""]));
-}
-
-
-(that's 'k6/x/exec', as opposed to 'k6/execution').
-But still, exec is not a callable function, but an object. The object has the property command which is a callable function. You might be able to change your import to import { command } from 'k6/x/exec' or import { command as exec } from 'k6/x/exec' to have the command function available as a global function, or to alias it to exec, respectively.
-To be able to use extension, you must compile k6 yourself with the xk6 system. Once you have your custom k6 binary, you can run your test scripts with it and import k6/x/exec.
-One more thing to note: exec.command does not take a string that is evaluated by a shell. Instead, it provides an API inspired by execv, which means you pass the command name and then its arguments as an array, with each argument being a separate element. Something like:
-exec.command('playwright', [ 'test', 'HRH.spec.js' ]);
-
-",k6
-"I'm writing a performance test with k6.io and I need to generate a new user for each request, the application checks for existing emails and will assume all request need to be linked to that existing user. This isn't the scenario I'm testing.
-I've got a function that builds me a user and then I import that function into my k6 script, but when I run the script, I get the following error:
-
-GoError: The moduleSpecifier ""./utils/generateConsumer"" couldn't be found on local disk. Make sure that you've specified the right path to the file. If you're running k6 using the Docker image make sure you have mounted the local directory (-v /local/path/:/inside/docker/path) containing your script and modules so that they're accessible by k6 from inside of the container
-
-I'm not running the code in a docker container, I'm running locally on my 2019 Macbook Pro running Sonoma 14.4.1. Once I've got the script running locally, I'll be using my Grafana Account to actually run the tests for testing our service.
-My file structure looks like this:
-.
-├── utils/
-│   └── generateConsumer.js
-├── script.js
-└── package.json
-
-Here is the generateConsumer.js file
-import { faker } from '@faker-js/faker';
-
-export function generateConsumer() {
-  return {
-    firstName: faker.person.firstName(),
-    lastName: faker.person.lastName(),
-    email: faker.person.email(),
-  };
-}
-
-
-And here is my script.js file
-import http from 'k6/http';
-import { sleep } from 'k6';
-import { generateConsumer } from './utils/generateConsumer';
-
-export const options = {
-  vus: 1,
-};
-
-function createConsumer() {
-  const consumer = generateConsumer();
-  return JSON.stringify(consumer);
-}
-
-export default function () {
-  const url = process.env.URL;
-  const payload = createConsumer();
-
-  const params = {
-    headers: {
-      'Content-Type': 'application/json',
-    },
-  };
-
-  http.post(url, payload, params);
-}
-
-","1. This needs to be import { generateConsumer } from './utils/generateConsumer.js';
-If you import a file, you must provide the full path to the file, including the file extension (.js).
-This is explained in the Local modules docs
-",k6
-"im trying to import a file from different folder but it gives me an error.
-Error:
-
-ERROR[0000] GoError: The moduleSpecifier ""../data/GetServerToken"" couldn't be found on local disk. Make sure that you've specified the right path to the file. If you're running k6 using the Docker image make sure you have mounted the local directory (-v /local/path/:/inside/docker/path) containing your script and modules so that they're accessible by k6 from inside of the container, see https://k6.io/docs/using-k6/modules#using-local-modules-with-docker.
-        at go.k6.io/k6/js.(*requireImpl).require-fm (native)
-        at file:///C:/Users/SameeraSenarath/OneDrive/GTN-Performance/GTN-Fintech-K6/k6-api-framework/src/tests/script-browser.js:3:0(28)  hint=""script exception""
-
-This is the run command I used: ""k6 run \src\tests\script-browser.js""
-","1. 
-The moduleSpecifier ""../data/GetServerToken"" couldn't be found on local disk.
-
-For imports of local files, you need to provide the proper path to the file. This means that the file name must include the file extension (i.e. end with .js):
-import { createAssertion } from '../data/GetServerToken.js';
-
-",k6
-"I'm new to k6, and am currently looking at how to configure CSV results output. My challenge is that I'm not sure where to set these options.
-The documentation says you can configure some things (e.g timestamp) with options: https://k6.io/docs/results-output/real-time/csv/#csv-options
-However, I'm not sure where to set these options. I'm aware that you can declare options in an options object, but I'm not sure where in this object you would declare the CSV options. For example, I've tried the below and it does not work:
-export const options = {
-   timeFormat: ""rfc3339_nano""
-}
-
-Side note: I have managed to get the CSV options working by setting env vars e.g doing a export K6_CSV_TIME_FORMAT=""rfc3339_nano"" before running k6. However, I'd prefer to reduce the number of places I'm setting configuration if possible.
-","1. export K6_CSV_TIME_FORMAT=""rfc3339_nano""
-k6 run --out csv=results.csv your_script.js
-
-This is an example of how you can run the k6 command with the --out flag. As far as I know, the CSV output settings are not part of the script's options object, so the K6_CSV_TIME_FORMAT environment variable is the place to set the time format.
-",k6
-"I run NYC in my nodejs program that uses cluster to start child processes. I've used dumb-init to propagate the SIGINT to my program where I gracefully handle the signal.
-When I kill the program within the Docker container, I get the coverage, but when I do docker stop it doesn't wait for all the cluster workers to die, and because of this the NYC coverage isn't computed.
-Is there any way to delay Docker from exiting so that the coverage data that was created is saved?
-The Dockerfile executes a script which calls yarn start, and in the start script I've added NYC. I tried adding a sleep in the script after yarn start, but to no avail.
-My dockerfile looks like this:-
-FROM node:14
-ARG ENVIRONMENT_NAME
-ARG BUILD_NAME
-ARG APP_PATH
-ENV APP_PATH=${APP_PATH:-/default/path} 
-RUN mkdir -p ${APP_PATH}
-ADD . ${APP_PATH}
-WORKDIR ${APP_PATH}
-RUN --mount=type=cache,target=/root/.yarn YARN_CACHE_FOLDER=/root/.yarn yarn
-RUN yarn
-RUN yarn build:$BUILD_NAME
-
-
-FROM node:14-alpine
-ARG ENVIRONMENT_NAME
-ARG BUILD_NAME
-ARG APP_PATH
-ENV APP_PATH=${APP_PATH:-/default/path} 
-RUN yarn global add sequelize-cli@6.2.0 nyc
-RUN yarn add shelljs bull dotenv pg sequelize@6.6.5
-RUN apk add --no-cache dumb-init
-ADD scripts/migrate-and-run.sh ${APP_PATH}/
-ADD package.json ${APP_PATH}/
-ADD . ${APP_PATH}/
-COPY --from=0 ${APP_PATH}/dist ${APP_PATH}/dist
-ADD https://keploy-enterprise.s3.us-west-2.amazonaws.com/releases/latest/assets/freeze_time_arm64.so /lib/keploy/freeze_time_arm64.so
-RUN chmod +x /lib/keploy/freeze_time_arm64.so
-ENV LD_PRELOAD=/lib/keploy/freeze_time_arm64.so
-
-
-# Set working directory
-WORKDIR ${APP_PATH}
-ENTRYPOINT [""dumb-init""]
-
-# RUN echo ""hi""
-
-# Set entrypoint and command
-CMD [ ""sh"",""./scripts/migrate-and-run.sh""]
-
-# Expose port 9000
-EXPOSE 9000
-
-The script that is executed in the dockerfile
-set -a . "".env$ENVIRONMENT_NAME"" set +a
-sleep 10
-echo $BUILD_NAME
-if [ ""$BUILD_NAME"" == ""local"" ]
-then
-    npx sequelize-cli db:drop
-    npx sequelize-cli db:create
-fi
-
-npx sequelize-cli db:migrate
-
-# seed data for local builds
-if [ ""$BUILD_NAME"" == ""local"" ]
-then
-    for file in seeders/*
-    do
-        :
-        npx sequelize-cli db:seed --seed $file
-    done
-fi
-
-yarn start
-
-","1. Maybe there is a way to use the script to help resolve this problem.
-You can trap the signal in the script and send it to the running child processes and make the child wait till it is completed.
-#!/bin/sh
-
-set -a
-. "".env.$ENVIRONMENT_NAME""
-set +a
-
-# Function to handle SIGINT signal
-handle_int() {
-    echo ""SIGINT received, forwarding to child process...""
-    kill -INT ""$child"" 2>/dev/null
-    echo ""Waiting for child process to exit...""
-    wait ""$child""
-    echo ""Child process exited. Waiting for NYC coverage data to be fully written...""
-    sleep 10  # Give NYC ten more seconds to flush the coverage report
-    echo ""Exiting after delay...""
-    exit 0
-}
-
-# Trap SIGINT signal
-trap 'handle_int' INT TERM
-
-sleep 10  # Ensure services like DB are up
-
-echo $BUILD_NAME
-if [ ""$BUILD_NAME"" = ""local"" ]; then
-    npx sequelize-cli db:drop
-    npx sequelize-cli db:create
-fi
-
-echo $LD_PRELOAD
-npx sequelize-cli db:migrate
-
-# Seed data for local builds
-if [ ""$BUILD_NAME"" = ""local"" ]; then
-    for file in seeders/*; do
-        npx sequelize-cli db:seed --seed $file
-    done
-fi
-
-# Start yarn and get its PID
-yarn start &
-
-child=$!
-echo ""Started yarn with PID $child""
-
-wait ""$child""
-
-Here the wait ""$child"" command in a shell script is used to pause the execution of the script until the process identified by the variable $child terminates.
-Why This Process Is Beneficial Synchronisation:
-
-You may make sure that any cleanup or follow-up actions (such as copying NYC coverage data) only take place after the program has ceased executing by waiting for the child process to die.
-It enables appropriate signal handling in the script. For instance, the script can wait for the child process to end before performing any necessary cleanup if it receives a SIGINT.
-It makes sure that commands that come after wait ""$child"" are only run when the child process has completed, preserving the proper sequence of events.
-
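-One more practical detail: docker stop sends SIGTERM and by default waits only 10 seconds before escalating to SIGKILL, so even with the trap in place you may want to stop the container with an extended grace period, e.g. docker stop -t 60 <container>, to give NYC enough time to write the report.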
-",Keploy
-"I have a side project where I'm using Spring Boot, Liquibase and Postgres.
-I have the following sequence of tests:
-test1();
-test2();
-test3();
-test4();
-
-In those four tests, I'm creating the same entity. As I'm not removing the records from the table after each test case, I'm getting the following exception: org.springframework.dao.DataIntegrityViolationException
-I want to solve this problem with the following constraints:
-
-I don't want to use the @repository to clean the database.
-I don't want to kill the database and create it on each test case because I'm using TestContainers and doing that would increase the time it takes to complete the tests.
-
-In short: How can I remove the records from one or more tables after each test case without 1) using the @repository of each entity and 2) killing and starting the database container on each test case?
-","1. The simplest way I found to do this was the following:
-
-Inject a JdbcTemplate instance
-
-@Autowired
-private JdbcTemplate jdbcTemplate;
-
-
-Use the class JdbcTestUtils to delete the records from the tables you need to.
-
-JdbcTestUtils.deleteFromTables(jdbcTemplate, ""table1"", ""table2"", ""table3"");
-
-
-Call this line in the method annotated with @After or @AfterEach in your test class:
-
-@AfterEach
-void tearDown() throws DatabaseException {
-    JdbcTestUtils.deleteFromTables(jdbcTemplate, ""table1"", ""table2"", ""table3"");
-}
-
-I found this approach in this blog post:
-Easy Integration Testing With Testcontainers
-
-2. Annotate your test class with @DataJpaTest. From the documentation:
-
-By default, tests annotated with @DataJpaTest are transactional and roll back at the end of each test. They also use an embedded in-memory database (replacing any explicit or usually auto-configured DataSource).
-
-For example using Junit4:
-@RunWith(SpringRunner.class)
-@DataJpaTest
-public class MyTest { 
-//...
-}
-
-Using Junit5:
-@DataJpaTest
-public class MyTest { 
-//...
-}
-
-
-3. You could use @Transactional on your test methods. That way, each test method will run inside its own transaction bracket and will be rolled back before the next test method runs.
-Of course, this only works if you are not doing anything weird with manual transaction management, and it is reliant on some Spring Boot autoconfiguration magic, so it may not be possible in every use case, but it is generally a highly performant and very simple approach to isolating test cases.
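-A minimal sketch of what that looks like with JUnit 5 and Spring Boot (Customer and CustomerRepository are hypothetical stand-ins for your own entity and repository, used only to make the example concrete):
-import org.junit.jupiter.api.Test;
-import org.springframework.beans.factory.annotation.Autowired;
-import org.springframework.boot.test.context.SpringBootTest;
-import org.springframework.transaction.annotation.Transactional;
-
-@SpringBootTest
-@Transactional
-class CustomerServiceTest {
-
-    @Autowired
-    private CustomerRepository repository; // hypothetical JPA repository
-
-    @Test
-    void createsCustomer() {
-        repository.save(new Customer(""Alice"")); // hypothetical entity
-        // assertions run against the uncommitted data here ...
-        // ... and the transaction is rolled back when the test method returns,
-        // so the next test starts with empty tables without any explicit cleanup.
-    }
-}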
-",Liquibase
-"I have the following changeset and can't find any docs about if the checksum is calculated before or after the property is substituted.
-<changeSet author=""[...]"" id=""[...]"">
-    <addColumn tableName=""FOO"">
-        <column name=""BAR"" type=""${type.string.max.80}"" />
-    </addColumn>
-</changeSet>
-
-Though there might be a difference, because SQL embedded in XML or YAML is handled differently from e.g. external SQL files. But in my case I don't have any plain SQL at all.
-
-You can use property substitution with the sql and sqlFile Change Types. Note that Liquibase obtains the checksum of a sql changeset after substituting any properties you specify. However, it obtains the checksum of a sqlFile changeset before substituting the properties in the external SQL file.
-
-https://docs.liquibase.com/concepts/changelogs/property-substitution.html?_ga=2.45579489.475291501.1716446628-689967727.1694174943
-So what's the case for my XML-example?
-Thanks!
-","1. If I get your question right and although it's not specified in the docs, if we make and experiment with a spring-boot app and create a changeSet like:
-<changeSet id=""foo"" author=""bar"">
-    <createTable tableName=""${test1}"">
-        <column name=""the_name"" type=""varchar(32)""/>
-    </createTable>
-</changeSet>
-
-where in application.properties we have:
-spring.liquibase.parameters.test1=12345
-
-Liquibase will create a table '12345'. And leave a record in the databasechangelog table with some checksum.
-Now if we change the test1 value to:
-spring.liquibase.parameters.test1=123456789
-
-and redeploy the application, Liquibase will fail with checksum validation error.
-So I'd conclude that Liquibase calculates the checksum of an XML changeSet AFTER applying property substitution but BEFORE executing the changeSet.
-Otherwise (from the checksum's perspective) the changeSet should've remained ""the same"" with tableName=""${test1}"" regardless of the property change.
-",Liquibase
-"We are upgrading spring boot(2.1.6.Release to 2.7.18) which in turn upgrading liquibase from 3.6.3 to 4.9.1 resulting in executing the file which are already executed.
-Looks like the new version of liquibase is generating new MD5Sum which is different than the already existing MD5Sum for the same xml file which is resulting in rerunning the file which are executed. which is causing the issue ..Please help us to resolve the issue.
-","1. You are correct. The checksum calculation algorithm has changed.
-You can fix your issue using Liquibase CLI. Execute the following commands:
-
-Run liquibase clear-checksums. This will clear the checksums of the previous changesets. Docs are here
-
-Run liquibase update. This will run all changeSets that have not previously been run. Liquibase decides whether a changeSet has been run or not based on its ID, author and the path to the filename.  Docs are here.
-
-
-After that, all your changesets should have new checksums, and you should be able to continue writing new changesets as usual with the new Liquibase version.
-",Liquibase
-"The Liquibase install comes with an examples directory you can use to learn about different commands.  The examples use a H2 database with a web console on port 9090.  Unfortunately port 9090 is not available.
-I'm asking how can I change the web-conole port used with the example H2 database started by the script:
-
-start-h2
-
-The port appears to be specified by the Liquibase liquibase.example.StartH2Main module itself. H2 doesn't seem to be influenced by changes to $HOME/.h2.server.properties ...
-java -cp h2-1.4.200.jar:liquibase.jar liquibase.example.StartH2Main
-Starting Example H2 Database...
-NOTE: The database does not persist data, so stopping and restarting this process will reset it back to a blank database
-
-java.lang.reflect.InvocationTargetException
-    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
-    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:78)
-    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
-    at java.base/java.lang.reflect.Method.invoke(Method.java:567)
-    at liquibase.example.StartH2Main.startTcpServer(StartH2Main.java:74)
-    at liquibase.example.StartH2Main.main(StartH2Main.java:28)
-Caused by: org.h2.jdbc.JdbcSQLNonTransientConnectionException: Exception opening port ""9090"" (port may be in use), cause: ""java.net.BindException: Address already in use"" [90061-200]
-    at org.h2.message.DbException.getJdbcSQLException(DbException.java:622)
-    at org.h2.message.DbException.getJdbcSQLException(DbException.java:429)
-    at org.h2.message.DbException.get(DbException.java:194)
-    at org.h2.util.NetUtils.createServerSocketTry(NetUtils.java:180)
-    at org.h2.util.NetUtils.createServerSocket(NetUtils.java:146)
-    at org.h2.server.TcpServer.start(TcpServer.java:245)
-    at org.h2.tools.Server.start(Server.java:511)
-
-I'm hoping there's a .properties file setting or command line option that will change the web console port number for H2 to use.
-","1. I have answered my own question, taking the lead from @RobbyCornelissen recommendation, with the following updates.
-
-It is completely possible to build the StartH2Main class.
-Change the dbPort constant from 9090 to something 'available' like 8092.
-
-
-The StartH2Main app loads H2 and side-steps the .h2.server.properties file.
-
-
-Build a StartH2Main.jar for yourself.
-
-
-The 9090 is hard-coded for StartH2Main.
-Port 9090 is the database port, which means that all the examples must be updated to match the new port number given.
-
-Personally, I feel that anything such as a port used for a demo or tutorial should be something I can set on the command line or in a config file, thus avoiding time-consuming or inconvenient barriers to adoption. It just makes sense: such things can always have a default, but please allow them to be configured as well.
-
-2. I had the same issue; I resolved it by specifying the port number when executing the start-H2 command:
-
-
-liquibase init start-h2 --web-port <your_desired_port>
-
-
-
-Replace <your_desired_port> with the port you wish to use.
-I know this question was asked almost 3 years ago, but I hope it helps.
-",Liquibase
-"If we deploy both Apache Ozone and Apache Spark on kubernetes, is it possible to achieve data locality? Or will data always have to be shuffled upon read?
-","1. tl;dr Yes, Ozone Client (Used by Apache Spark) will prefer reading from local node if the block is present on the same node.
-Apache Spark uses Hadoop Filesystem Client (Which will call Ozone Client) to read data from Ozone.
-For reads, Apache Ozone will sort the block list based on the distance from the client node (if network topology is configured, the sorting will be done based on the network topology).
-If Apache Ozone and Apache Spark are co-located and there is a local copy of the block where Apache Spark is running, Ozone client will prefer reading the local copy. In case if there is no local copy, the read will go over network (if network topology is configured, the Ozone Client will prefer blocks from same rack).
-This is implemented in HDDS-1586.
-",Ozone
-"I want to pull data from Apache Ozone into my SpringBoot application.
-The authentication method for connecting to Ozone Store is Kerberos.
-I have OzoneUrl(hostIp & Port), KeyTab, Principal and ServicePrincipal and i want to use these properties for connection
-I tried using this dependency
-<!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-ozone-client -->
-<dependency>
-    <groupId>org.apache.hadoop</groupId>
-    <artifactId>hadoop-ozone-client</artifactId>
-    <version>1.1.0</version>
-</dependency>
-
-
-My Connection Code =>
- OzoneConfiguration ozoneConfiguration = new OzoneConfiguration();
-        ozoneConfiguration.set(""ozone.om.address"",ozoneUrl);
-OzoneClient oz = OzoneClientFactory.getRpcClient(ozoneConfiguration);
-
-The code tries to connect to Ozone, but I want it to connect using Kerberos.
-","1. You need to set these properties for secure cluster.
-//set om leader node
-ozoneConfiguration.set(""ozone.om.address"", ""xx:xx:xx:xx"");
- //Setting kerberos authentication
- ozoneConfiguration.set(""ozone.om.kerberos.principal.pattern"", ""*"");
- ozoneConfiguration.set(""ozone.security.enabled"", ""true"");
- ozoneConfiguration.set(""hadoop.rpc.protection"", ""privacy"");
- ozoneConfiguration.set(""hadoop.security.authentication"", ""kerberos"");
- ozoneConfiguration.set(""hadoop.security.authorization"", ""true"");
-
-//Passing keytab for Authentication
-UserGroupInformation.setConfiguration(ozoneConfiguration);
-UserGroupInformation.loginUserFromKeytab(""om pricipal"",""ozone.keytab- 
-location on-spring-boot-host"");
-
-
-Copy ozone.keytab to the Spring Boot host and refer to that path in loginUserFromKeytab (the ""ozone.keytab location"" placeholder above).
-Copy krb5.conf to your Spring Boot host under the etc directory.
-
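-For completeness, once the Kerberos login has succeeded, the client from your original snippet can be created exactly as before:
-OzoneClient oz = OzoneClientFactory.getRpcClient(ozoneConfiguration);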
-",Ozone
-"C++20 std::atomic has wait and notify_* member functions, but no wait_for/wait_until.
-The Microsoft STL implementation of std::atomic uses WaitOnAddress (when the OS is new enough to have it), and this API has a dwMilliseconds parameter that serves as a timeout value. So from a standard library writer's standpoint, I think the missing functions are easily implementable (at least on Windows 8 or newer). I just wonder why they're not in C++20.
-But as a (portable) user-code writer, I have to emulate the behavior with a standard semaphore and an atomic counter. So here's the code:
-#include <concepts>
-#include <atomic>
-#include <type_traits>
-#include <cstring>
-#include <semaphore>
-
-namespace detail
-{
-    template <size_t N>
-    struct bytes
-    {
-        unsigned char space[N];
-        auto operator<=>(bytes const &) const = default;
-    };
-
-    //Compare by value representation, as requested by C++20.
-    //The implementation is a bit awkward.
-    //Hypothetically `std::atomic<T>::compare(T, T)` would be helpful. :)
-    template <std::integral T>
-    bool compare(T a, T b) noexcept
-    {
-        static_assert(std::has_unique_object_representations_v<T>);
-        return a == b;
-    }
-    template <typename T>
-    requires(std::has_unique_object_representations_v<T> && !std::integral<T>)
-    bool compare(T a, T b) noexcept
-    {
-        bytes<sizeof(T)> aa, bb;
-        std::memcpy(aa.space, &a, sizeof(T));
-        std::memcpy(bb.space, &b, sizeof(T));
-        return aa == bb;
-    }
-    template <typename T>
-    requires(!std::has_unique_object_representations_v<T>)
-    bool compare(T a, T b) noexcept
-    {
-        std::atomic<T> aa{ a };
-        auto equal = aa.compare_exchange_strong(b, b, std::memory_order_relaxed);
-        return equal;
-    }
-
-    template <typename T>
-    class atomic_with_timed_wait
-        : public std::atomic<T>
-    {
-    private:
-        using base_atomic = std::atomic<T>;
-        std::counting_semaphore<> mutable semaph{ 0 };
-        std::atomic<std::ptrdiff_t> mutable notify_demand{ 0 };
-    public:
-        using base_atomic::base_atomic;
-    public:
-        void notify_one() /*noexcept*/
-        {
-            auto nd = notify_demand.load(std::memory_order_relaxed);
-            if (nd <= 0)
-                return;
-            notify_demand.fetch_sub(1, std::memory_order_relaxed);
-            semaph.release(1);//may throw
-        }
-        void notify_all() /*noexcept*/
-        {
-            auto nd = notify_demand.exchange(0, std::memory_order_relaxed);
-            if (nd > 0)
-            {
-                semaph.release(nd);//may throw
-            }
-            else if (nd < 0)
-            {
-                //Overly released. Put it back.
-                notify_demand.fetch_add(nd, std::memory_order_relaxed);
-            }
-        }
-        void wait(T old, std::memory_order order = std::memory_order::seq_cst) const /*noexcept*/
-        {
-            for (;;)
-            {
-                T const observed = base_atomic::load(order);
-                if (false == compare(old, observed))
-                    return;
-
-                notify_demand.fetch_add(1, std::memory_order_relaxed);
-
-                semaph.acquire();//may throw
-                //Acquired.
-            }
-        }
-        template <typename TPoint>
-        bool wait_until(int old, TPoint const & abs_time, std::memory_order order = std::memory_order::seq_cst) const /*noexcept*/
-        //Returns: true->diff; false->timeout
-        {
-            for (;;)
-            {
-                T const observed = base_atomic::load(order);
-                if (false == compare(old, observed))
-                    return true;
-
-                notify_demand.fetch_add(1, std::memory_order_relaxed);
-
-                if (semaph.try_acquire_until(abs_time))//may throw
-                {
-                    //Acquired.
-                    continue;
-                }
-                else
-                {
-                    //Not acquired and timeout.
-                    //This might happen even if semaph has positive release counter.
-                    //Just cancel demand and return.
-                    //Note that this might give notify_demand a negative value,
-                    //  which means the semaph is overly released.
-                    //Subsequent acquire on semaph would just succeed spuriously.
-                    //So it should be OK.
-                    notify_demand.fetch_sub(1, std::memory_order_relaxed);
-                    return false;
-                }
-            }
-        }
-        //TODO: bool wait_for()...
-    };
-}
-using detail::atomic_with_timed_wait;
-
-I am just not sure whether it's correct. So, is there any problem in this code?
-","1. Timed waiting APIs (try_wait, wait_for, and wait_until) for std::atomic are proposed in P2643, targeting C++26. libstdc++ has already implemented the underlying support for these operations in its internal header <bits/atomic_timed_wait.h>. Note that these facilities are also used to implement timed waits for std::counting_semaphore, which is essentially a more constrained std::atomic. Before the paper is merged, there are at least two portable ways to emulate timed operations:
-
-A pair of mutex and condition variable: These two can be combined to provide universal timed waiting functionality. For example, std::condition_variable_any could be implemented using a pair of std::mutex and std::condition_variable (N2406). A pair of std::atomic and std::counting_semaphore, as in your code, may also be feasible, but I found it a bit awkward since std::counting_semaphore doesn't have a notify_all operation, which introduces extra complexity. A straightforward prototype could be as follows (Godbolt):
-// NOTE: volatile overloads are not supported
-template <class T> struct timed_atomic : atomic<T> {
-  using atomic<T>::atomic;
-  bool try_wait(T old, memory_order order = seq_cst) const noexcept {
-    T value = this->load(order);
-    // TODO: Ignore padding bits in comparison
-    return memcmp(addressof(value), addressof(old), sizeof(T));
-  }
-  void wait(T old, memory_order order = seq_cst) const {
-    unique_lock lock(mtx);
-    cond.wait(lock, [=, this]() { return try_wait(old, relaxed); });
-  }
-  template <class Rep, class Period>
-  bool wait_for(T old, const duration<Rep, Period> &rel_time,
-                memory_order order = seq_cst) const {
-    unique_lock lock(mtx);
-    return cond.wait_for(lock, rel_time,
-                         [=, this]() { return try_wait(old, relaxed); });
-  }
-  template <class Clock, class Duration>
-  bool wait_until(T old, const time_point<Clock, Duration> &abs_time,
-                  memory_order order = seq_cst) const {
-    unique_lock lock(mtx);
-    return cond.wait_until(lock, abs_time,
-                           [=, this]() { return try_wait(old, relaxed); });
-  }
-  void notify_one() const {
-    { lock_guard _(mtx); }
-    cond.notify_one();
-  }
-  void notify_all() const {
-    { lock_guard _(mtx); }
-    cond.notify_all();
-  }
-private:
-  mutable mutex mtx;
-  mutable condition_variable cond;
-  using enum memory_order;
-};
-
-As you can see above, one downside of this approach is that volatile overloads of member functions are not supported since std::mutex and std::condition_variable themselves don't support volatile. One workaround is to store them in a separate table outside the timed_atomic and hash addresses to get the corresponding pairs. libstdc++ has implemented something similar when the platform doesn't support native atomic waiting operations (Thomas).
-A more subtle problem is that the standard requires wait to compare value representations (i.e., excluding padding bits) for equality instead of object representations ([atomics.types.operations] p30.1). For now, this can't be easily implemented in a portable way and needs compiler support (e.g., __builtin_clear_padding in GCC).
-
-Polling with timed backoff: This approach is more lightweight as it doesn't require additional synchronization facilities. The downside is that polling is usually more expensive than waiting when the notification takes a long time to arrive. One potential advantage of polling is that it honors adjustments to the user-provided Clock. An example implementation is as follows (Godbolt):
-template <class T> struct timed_atomic : atomic<T> {
-  using atomic<T>::atomic;
-  bool try_wait(T old, memory_order order = seq_cst) const noexcept {
-    T value = this->load(order);
-    // TODO: Ignore padding bits in comparison
-    return memcmp(addressof(value), addressof(old), sizeof(T));
-  }
-  template <class Rep, class Period>
-  bool wait_for(T old, const duration<Rep, Period> &rel_time,
-                memory_order order = seq_cst) const {
-    return wait_until(old, steady_clock::now() + rel_time, order);
-  }
-  template <class Clock, class Duration>
-  bool wait_until(T old, const time_point<Clock, Duration> &abs_time,
-                  memory_order order = seq_cst) const {
-    while (!try_wait(old, order)) {
-      if (Clock::now() >= abs_time)
-        return false;
-      sleep_for(100ms);
-    }
-    return true;
-  }
-  // NOTE: volatile overloads are omitted
-private:
-  using enum memory_order;
-};
-
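-For reference, usage of either sketch mirrors the API proposed in P2643; a small example (assuming the timed_atomic from the first sketch and using namespace std::chrono_literals):
-timed_atomic<int> flag{0};
-
-// consumer thread: wait up to 200 ms for the value to move away from 0
-if (flag.wait_for(0, 200ms)) {
-    // the value differs from 0 (or changed before the deadline)
-} else {
-    // timed out; no change was observed within 200 ms
-}
-
-// producer thread:
-flag.store(1);
-flag.notify_all();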
-
-
-",Semaphore
-"I am using Vulkan graphics API (via BGFX) to render. And I have been measuring how much (wall-clock) time my calls take.
-What I do not understand is that vkAcquireNextImageKHR() is always fast, and never blocks. Even though I disable the time-out and use a semaphore to wait for presentation.
-The presentation is locked to a 60Hz display rate, and I see my main-loop indeed run at 16.6 or 33.3 ms.
-Shouldn't I see the wait-time for this display rate show up in the length of the vkAcquireNextImageKHR() call?
-The profiler measures this call as 0.2ms or so, and never a substantial part of a frame.
-VkResult result = vkAcquireNextImageKHR(
-    m_device
-  , m_swapchain
-  , UINT64_MAX
-  , renderWait
-  , VK_NULL_HANDLE
-  , &m_backBufferColorIdx
-);
-
-Target hardware is a handheld console.
-","1. The whole purpose of Vulkan is to alleviate CPU bottlenecks. Making the CPU stop until the GPU is ready for something would be the opposite of that. Especially if the CPU itself isn't actually going to use the result of this operation.
-As such, all the vkAcquireNextImageKHR function does is let you know which image in the swap chain will be ready to use next. The Vulkan term for this is ""available"". This is the minimum that needs to happen in order for you to be able to use that image (for example, by building command buffers that reference the image in some way). However, an image being ""available"" doesn't mean that it is ready for use.
-This is why this function requires you to provide a semaphore and/or a fence. These will be signaled when the image can actually be used, and the image cannot be used in a batch of work submitted to the GPU (despite being ""available"") until these are signaled. You can build the command buffers that use the image, but if you submit those command buffers, you have to ensure that the commands that use them wait on the synchronization.
-If the process which consumes the image is just a bunch of commands in a command buffer (ie: something you submit with vkQueueSubmit), you can simply have that batch of work wait on the semaphore given to the acquire operation. That means all of the waiting happens in the GPU. Where it belongs.
-The fence is there if you (for some reason) want the CPU to be able to wait until the acquired image is ready for use. But Vulkan, as an explicit, low-level API, forces you to explicitly say that this is what you want (and it almost never is what you want).
-Because ""available"" is a much more loose definition than ""ready for use"", the GPU doesn't have to actually be done with the image. The system only needs to figure out which image it will be done with next. So any CPU waiting that needs to happen is minimized.
-",Semaphore
-"Env: on-prem k8s v1.28.3
-Spinnaker: v1.33.0
-Spinnaker-operator: v1.3.1
-Halyard: image: armory/halyard:operator-a6ac1d4
-I've deployed Spinnaker CD to our On-Prem Kubernetes cluster via spinnaker-operator, and set it up a little, but here is the problem:
-If you're deploying Spinnaker to an already existing cluster, you might see automatically created applications. The docs say to ignore them and create new ones instead, as I did.
-But the created application is absolutely empty. I know that I can create a pipeline and the resources created by it will be shown here, but this is not exactly what I'm looking for.
-I have a project, let's name it learning, and I have many resources in k8s that belong to our learning project, such as namespaces, ingresses, deployments etc.
-I want to create an application in Spinnaker, let's name it learning-apps, and group all existing resources associated with it into this application,
-like the namespaces learning-01, learning-02, ingress-learning, and everything inside of them, to make it visible in my app.
-However, Spinnaker does not provide such an option in the UI, and I didn't find any information on how to configure it anywhere else.
-Is there a way to configure my new application somewhere, add namespaces to it, and have it see and group the resources within those namespaces the way it does for the automatically created applications?
-Beautiful, cool and cute generated and grouped automatically application:
-
-Ugly, empty, and idk how to configure created new one:
-
-There is no option to add anything in config:
-
-And, finally, + Create Server Group provides only manifest-based setting, nothing more:
-
-So, is there a way to add namespaces to this app? To add its resources? Anything?
-Thanks.
-","1. Spinnaker uses annotations and labels to the resources that are deployed.
-If you want to add already existing in cluster resource to Spinnaker Application, you have to annotate it manually. Or, you can use the same manifests to ""deploy it again"" via pipeline. It will label and annotate all resources that will be created by those manifests, and they will appear in Spinnaker UI.
-Note: Namespaces are not shown in the list.
-Services will appear in LOADBALANCERS menu.
-Pods will appear too, according to deployment.
-To reproduce:
-Deploy nginx-demo using Spinnaker pipelines, and then use kubectl describe all -n <> to see how it's labeled/annotated.
-",Spinnaker
-"I'm trying to run some Python unit tests on a remote build server using Teamcity. They fail when attempting to execute some matplotlib code. I get the following output in the Teamcity build logs, which seems to point towards the matplotlib backend as the culprit.
-    XXXXX\stats.py:144: in PerformHypothesisTest
-        fig, ax = plt.subplots(1, 1, figsize=(10, 6))
-    .venv\lib\site-packages\matplotlib\pyplot.py:1702: in subplots
-        fig = figure(**fig_kw)
-    .venv\lib\site-packages\matplotlib\pyplot.py:1022: in figure
-        manager = new_figure_manager(
-    .venv\lib\site-packages\matplotlib\pyplot.py:545: in new_figure_manager
-        return _get_backend_mod().new_figure_manager(*args, **kwargs)
-    .venv\lib\site-packages\matplotlib\backend_bases.py:3521: in new_figure_manager
-        return cls.new_figure_manager_given_figure(num, fig)
-    .venv\lib\site-packages\matplotlib\backend_bases.py:3526: in new_figure_manager_given_figure
-        return cls.FigureCanvas.new_manager(figure, num)
-    .venv\lib\site-packages\matplotlib\backend_bases.py:1811: in new_manager
-        return cls.manager_class.create_with_canvas(cls, figure, num)
-    .venv\lib\site-packages\matplotlib\backends\_backend_tk.py:479: in create_with_canvas
-        with _restore_foreground_window_at_end():
-    C:\Python310\lib\contextlib.py:135: in __enter__
-        return next(self.gen)
-    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
-    
-        @contextmanager
-        def _restore_foreground_window_at_end():
-    >       foreground = _c_internal_utils.Win32_GetForegroundWindow()
-    E       ValueError: PyCapsule_New called with null pointer
-    
-    .venv\lib\site-packages\matplotlib\backends\_backend_tk.py:43: ValueError
-
-The tests run fine both:
-
-Locally on my PC from Pycharm
-On the build server when executed from the command line, i.e. by running python -m pytest
-
-I'm not super familiar with how Teamcity works and how to debug it, so I would appreciate any ideas as to what might be going wrong here.
-The build server is running the following versions:
-
-Python 3.10.0
-Matplotlib 3.9.0
-Pytest 8.2.1
-
-If it is useful, the build server is using the 'tkagg' backend (from matplotlib.get_backend()).
-","1. I'm having a similar issue. My python script stopped working spontaneously (source not changed). The script works fine locally, but not remotely (server). The local and remote python environments seem to be the same. Setting the backend to 'Agg' solved the issue for me. Here is what I did:
-import matplotlib
-matplotlib.use('Agg')
-# now do the funny stuff
-
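-If you prefer not to modify the test code itself, you should be able to get the same effect by setting the MPLBACKEND=Agg environment variable in the TeamCity build configuration (or on the agent), since matplotlib reads that variable when it is imported.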
-I hope that helps!
-",TeamCity
-"We have a multi target (net472 and netstandard2.0) class library referencing latest linq2db.SQLite 5.4.1 nuget. That library is referenced in net472 WPF application and net6.0 console application.
-linq2db.SQLite nuget depends on System.Data.SQLite.Core 1.0.118 nuget  that depends on Stub.System.Data.SQLite.Core.NetFramework 1.0.118 nuget which has Stub.System.Data.SQLite.Core.NetFramework.targets file.
-My local builds were working fine, while the same builds on TeamCity had a sporadic error during a clean because the files \x86\SQLite.Interop.dll and \x64\SQLite.Interop.dll were in use by another process. It failed on a clean operation defined in that targets file:
-  <Target Name=""CleanSQLiteInteropFiles""
-          Condition=""'$(CleanSQLiteInteropFiles)' != 'false' And
-                     '$(OutDir)' != '' And
-                     HasTrailingSlash('$(OutDir)') And
-                     Exists('$(OutDir)')"">
-    <!--
-        NOTE: Delete ""SQLite.Interop.dll"" and all related files, for every
-              architecture that we support, from the build output directory.
-    -->
-    <Delete Files=""@(SQLiteInteropFiles -> '$(OutDir)%(RecursiveDir)%(Filename)%(Extension)')"" />
-  </Target>
-
-According to this, that is because multi-target projects are built in parallel by Visual Studio.
-I've followed these recommendations and added the following properties to my library's .csproj:
-<PropertyGroup> 
-    <ContentSQLiteInteropFiles>true</ContentSQLiteInteropFiles>
-    <CopySQLiteInteropFiles>false</CopySQLiteInteropFiles>
-    <CleanSQLiteInteropFiles>false</CleanSQLiteInteropFiles>
-    <CollectSQLiteInteropFiles>false</CollectSQLiteInteropFiles>
-</PropertyGroup>
-
-\x86\SQLite.Interop.dll and \x64\SQLite.Interop.dll are now listed in my solution explorer as content files
-
-But the sporadic error on TeamCity still exists, and it looks like the CleanSQLiteInteropFiles target still executes.
-What is wrong with my setup?
-","1. With help of msbuild MySolution.sln /t:Clean -fl -flp:logfile=d:\clean.log;verbosity=diagnostic I've found that CleanSQLiteInteropFiles target was executed also while cleaning both applications, depending on my library. After setting <PackageReference Include=""linq2db.SQLite"" Version=""5.4.1"" PrivateAssets = ""all"" /> that target started to execute only for my library. So the final solution is combination of 4 properties
-<PropertyGroup> 
-    <ContentSQLiteInteropFiles>true</ContentSQLiteInteropFiles>
-    <CopySQLiteInteropFiles>false</CopySQLiteInteropFiles>
-    <CleanSQLiteInteropFiles>false</CleanSQLiteInteropFiles>
-    <CollectSQLiteInteropFiles>false</CollectSQLiteInteropFiles>
-</PropertyGroup>
-
-and PrivateAssets=""all"" on the linq2db.SQLite package reference.
-",TeamCity
-"This is really weird.
-I am trying a clean Teamcity 9.1.1 install but the Data Directory is nowhere to be found.
-
-if I access the Global Settings tab under Administration, it lists ""C:\Windows\System32\config\systemprofile.BuildServer"" - a folder that doesn't exist.
-if I try to browse to that folder, it shows me a range of files; uploading a specific file there instead uploads it to C:\Windows\SysWOW64\config\systemprofile.BuildServer.
-there is no teamcity-startup.properties file anywhere - I am unable to customize the location of the data directory.
-when I restore a backup, the backup files are instead restored to C:\Users\[user name]\.BuildServer rather than in the correct data directory.
-
-Does anyone have any suggestions on how to regain control of the situation? How can I tell TeamCity which data folder to use?
-","1. I no longer desire for my answers to contribute to stackoverflow, due to both their changes to the code of conduct and the unilateral use of our answers to train OpenAI.
-",TeamCity
-"I am trying to modify a parameter (e.g. configuration parameter) of a running (or finished) build named 'lock' and want to insert the value: ""true"".
-My initial thought was to send a rest api call with a service message which sets the parameter.
-    response = requests.post(f""{teamcity_build_api_endpoint}/id:{build_id}/log"", headers=header_text_plain, data=""##teamcity[setParameter name='lock' value='true']"")
-
-Even though this is not throwing any exceptions, it also does not modify the parameter.
-What am I doing wrong? Is it even possible to alter parameters via the REST API?
-Background information:
-What I need is a shared parameter for a couple of builds which are triggered from an initial one. The triggered builds should report their result into a parameter. For that reason I wanted to introduce a lock mechanism so that only one triggered build can change the parameter at a time.
-","1. 
-What am I doing wrong? Is this even possible to alter parameters via rest api?
-
-The ##teamcity[setParameter name='lock' value='true'] service message syntax is intended to be used inside a script running within the build itself which means there is a specific point in time when this change could be applied:
-
-To be processed by TeamCity, they need to be written to the standard output stream of the build, that is printed or echoed from a build step.
-
-I can't imagine how modifying the build parameter via the API would be possible for a running build on a lower level - how would TeamCity decide when exactly it will apply the new value if the build is already running and a certain process is already being executed?
-For a build configuration it might be possible, but this generally means that you don't have information on what changed the value of the parameter and why.
-
-What I need is a shared parameter for a couple of builds which are triggered from an initial one. The triggered builds should report their result into a parameter. For that reason I wanted to introduce a lock mechanism so that only one triggered build can change the parameter at the time.
-
-I'm not sure I fully understand your design idea and how it fits into the build chain concept. But in general TeamCity already has a lock mechanism in place - the Shared Resources build feature. It seems like it should solve your problem, which basically boils down to prohibiting parallel execution of those build configurations.
-",TeamCity
-"Using TeamCity 2020.2.4
-Build job has multiple VCS Root set up comprised of ""source code"" and ""utility scripts"".
-The ""utility scripts"" are always on same branch (master); however, the ""source code"" is either in master (aka: default) or release/### branch.
-The ""source code"" root has:
-Default branch: refs/heads/master
-Branch specification:
-+:refs/heads/(master)
-+:refs/heads/(release/*)
-
-Currently there are 2 builds: one for master (longer-running builds), and one for release (which does a bunch of extra steps to prep for release).
-Initially there was a desire to just hide the ""master"" default branch from being displayed, so I followed TeamCity's own docs (https://www.jetbrains.com/help/teamcity/branch-filter.html#Branch+Filter+Format) which imply I can tweak the Branch Filter:
-+:*
--:<default>
-
-(also some SO articles that mention this as an answer, but years old)
-However when doing so, end up getting error:
-Failed to collect changes, error: Builds in default branch are disabled in build configuration
-
-Looks like triggers run just fine, but it's the Manual build where things really go sideways.
-I have even tried overriding teamcity.build.branch with a default + prompt parameter - no such luck.
-I have seen workarounds that wrap it in another job, but that's a bit hacky just to do what TC says should be possible directly.
-","1. Found a solution, posting here to help the next person ...
-So, despite TeamCity mentioning ""logical branch name"" for the ""Branch Filter"" in Version Control Settings, you can actually provide the full branch name.
-Thus ""refs/heads/master"" can be used. This effectively seems to allow one of the VCS roots to continue using its default/master, while allowing other options for the 2nd root.
-For example:
-VCS Root #1 config:
-Default: refs/heads/master
-Spec: 
-+:refs/heads/(master)
-+:refs/heads/(release/*)
-
-VCS Root #2 config:
-Default: refs/heads/master
-Spec: (empty)
-
-And when doing different jobs, the Branch Filter would be set as such:
-""master only"":
-+:refs/heads/master
-+:release/*
--:master
-
-""releases only"":
-+:refs/heads/master
-+:<default>
-+:master
-
-Worth noting: even though ""master"" is the default, you still need to actually specify both. At first pass it might not seem intuitive, but it is what it is - and it works.
-
-2. I get the same behavior in TeamCity v2023.11.4.
-The Branch Filter field must include the full git path rather than just the logical branch name:
-If you have this Branch Specification in the VCS Root:
-+:refs/heads/(mybranch)
-
-
-As an aside, parentheses here result in TeamCity build parameters having ""mybranch"" instead of ""refs/heads/mybranch"".  It's useful if you generate deployment environment or dns strings based on the branch name.
-
-Then you must have this Branch Filter in the Version Control Settings of your Build Config:
-+:mybranch
-+:refs/heads/mybranch
-
-",TeamCity
-"I am trying to host a public docker image https://gallery.ecr.aws/unleashorg/unleash-edge#edge on AWS App Runner and I am getting an error on port configuration.
-
-[AppRunner] Deployment Artifact: [Repo Type: ECR-Public], [Image URL: public.ecr.aws/unleashorg/unleash-edge], [Image Tag: edge]
-
-
-[AppRunner] Pulling image public.ecr.aws/unleashorg/unleash-edge from ECR-Public repository.
-
-
-[AppRunner] Successfully pulled your application image from ECR.
-[AppRunner] Provisioning instances and deploying image for publicly accessible service.
-[AppRunner] Performing health check on protocol TCP [Port: '3063'].
-[AppRunner] Your application stopped or failed to start. See logs for more information.  Container exit code: 2
-[AppRunner] Deployment with ID : 19d4d48e1f284ed29e2d8bbde622415a failed.
-
-Below is the docker cmd that works locally.
- docker run -p 3063:3063 unleashorg/unleash-edge:latest edge
-How should I configure the above docker run cmd in App Runner?
-","1. You may need to add a start command in App Runner. Here's my setup:
-Unleash Edge AppRunner start command
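-For what it's worth, the start command here corresponds to the trailing edge argument in your local docker run command, so setting the App Runner start command to edge (with the service port set to 3063) should reproduce what already works locally.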
-",Unleash
-"Currently I have a feature flag with a rollout strategy that uses a specific stickness parameter (accountId)
-I set it to 25% rollout and I'm wondering if there's a way to fetch all the accountIds that are included in the rollout.
-I'm expecting to have an API exposed where we can provide the feature-flag and it would return the results or something
-","1. This is not possible due to the way Unleash evaluates feature flags. This data is not stored in Unleash, and instead only used for feature flag evaluation.
-This is one of Unleash's 11 principles for building and scaling feature flag systems. See: 2. Never expose PII. Follow the principle of least privilege.
-You can also learn more about how stickiness is deterministically calculated here: https://docs.getunleash.io/reference/stickiness#calculation
-Depending on your exact needs you could potentially collect this data by using Impression data in your application.
-",Unleash
-"I have my server file something like below.
-const unleash = require('unleash-server');
-unleash
-  .start({
-    db: {
-      ssl: false,
-      host: 'localhost',
-      port: 5432,
-      database: 'unleash',
-      user: 'unleash_user',
-      password: 'password',
-    },
-    server: {
-      port: 8443,
-    },
-  })
-  .then((unleash) => {
-    console.log(
-      `Unleash started on http://localhost:${unleash.app.get('port')}`,
-    );
-  });
-I have 2 questions here...
-
-I am getting secrets as /vault/secrets/cert.pem and /vault/secrets/key.pem. I want to configure these secrets for port 8443, which is HTTPS. Is there a way I can configure my secrets?
-
-I need to run my application on 2 ports, HTTP 4242 and HTTPS 8443. Is there a way I can configure Unleash with this?
-
-
-I tried to put this together,
-but it seems it is not working.
-","1. Unleash recommends setting up a proxy terminating HTTPS for you and speaking HTTP to Unleash, as does the Express docs (the web framework running Unleash).
-See http://expressjs.com/en/advanced/best-practice-security.html#use-tls
-You can use a proxy server like Nginx and configure both the SSL termination and listening on multiple ports.
-Here's an example of how your Nginx config file could look like:
-# HTTP on 4242
-server {
-    listen 4242;
-    server_name your_domain.com;
-
-    # Any other settings...
-
-    location / {
-        proxy_pass http://localhost:4242;
-        # Any other proxy settings...
-    }
-}
-
-# HTTPS on 8443
-server {
-    listen 8443 ssl;
-    server_name your_domain.com;
-
-    ssl_certificate /path/to/your/cert.pem;
-    ssl_certificate_key /path/to/your/key.pem;
-
-    # Any other settings, like recommended SSL settings...
-
-    location / {
-        proxy_pass http://localhost:4242;
-        # Any other proxy settings...
-    }
-}
-
-If you insist on having Unleash do HTTPS termination for you, you'll need to set that up yourself using
-
-http://expressjs.com/en/5x/api.html#app.listen
-https://nodejs.org/api/https.html#httpscreateserveroptions-requestlistener
-
-This would look something like:
-const https = require('node:https');
-const fs = require('node:fs');
-const options = {
-  key: fs.readFileSync('test/fixtures/keys/agent2-key.pem'),
-  cert: fs.readFileSync('test/fixtures/keys/agent2-cert.pem')
-};
-
-let app = unleash.create();
-https.createServer(options, app).listen(443);
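-
-If you also need plain HTTP on 4242 from the same process, one option (a sketch building on the snippet above, and assuming unleash.create() hands back an Express app as used there) is to attach the same app to both an HTTP and an HTTPS listener:
-const http = require('node:http');
-
-// Reuse `options` and `app` from the snippet above
-http.createServer(app).listen(4242);            // plain HTTP on 4242
-https.createServer(options, app).listen(8443);  // TLS on 8443, using the cert/key loaded into options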
-
-",Unleash
-"I'm trying to setup a feature flag for a Typescript project. The code below is from the https://www.npmjs.com/package/unleash-client. Where I create and connect to an unleash instance. This is all with a locally running unleash setup as per this documentation: https://github.com/Unleash/unleash. It seems from the error, I'm not able to connect to unleash. I have verified the instance is running locally in the docker container as per the docs. I can also see the service up and running in my browser. Would anyone know why I'm getting the error below when I try to connect?
-CODE:
-import express from 'express';
-import { Unleash } from 'unleash-client';
-
-const unleash = new Unleash({
-  url: 'http://localhost:4242/api/',
-  appName: 'default',
-  customHeaders: { Authorization: 'default:development.unleash-insecure-api-token' },
-});
-
-ERROR LOG:
-FetchError: Unleash Repository error: request to http://localhost:4242/api/client/features failed, reason: connect ECONNREFUSED 127.0.0.1:4242
-app-local-backend                  | [1]     at ClientRequest.<anonymous> (/app/node_modules/minipass-fetch/lib/index.js:130:14)
-app-local-backend                  | [1]     at ClientRequest.emit (node:events:517:28)
-app-local-backend                  | [1]     at Socket.socketErrorListener (node:_http_client:501:9)
-app-local-backend                  | [1]     at Socket.emit (node:events:517:28)
-app-local-backend                  | [1]     at emitErrorNT (node:internal/streams/destroy:151:8)
-app-local-backend                  | [1]     at emitErrorCloseNT (node:internal/streams/destroy:116:3)
-app-local-backend                  | [1]     at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
-app-local-backend                  | [1]   code: 'ECONNREFUSED',
-app-local-backend                  | [1]   errno: 'ECONNREFUSED',
-app-local-backend                  | [1]   syscall: 'connect',
-app-local-backend                  | [1]   address: '127.0.0.1',
-app-local-backend                  | [1]   port: 4242,
-app-local-backend                  | [1]   type: 'system'
-app-local-backend                  | [1] }
-
-What is weirder is that I can access the endpoint as per the documentation in postman as may be seen below:
-
-Any assistance with this would be much appreciated!
-","1. I tried reproducing the issue but was unable to. Your code looks correct to me.
-Is it possible you're running your application in isolation, e.g. in a separate Docker container, so it is unable to reach localhost:4242 on your host machine?
-If you can reach Unleash at localhost:4242 through your browser and Postman, then I would suggest you start by trying to create a new local project just to see if that works. Something like:
-import { initialize } from 'unleash-client'
-
-const TOGGLE = 'unleash-node-test'
-
-const unleash = initialize({
-  url: 'http://localhost:4242/api',
-  appName: 'unleash-node-test',
-  customHeaders: {
-    Authorization:
-      'default:development.unleash-insecure-api-token'
-  }
-})
-
-const checkToggles = () => {
-  const enabled = unleash.isEnabled(TOGGLE)
-  const variant = unleash.getVariant(TOGGLE)
-  console.log(TOGGLE)
-  console.log('isEnabled', enabled)
-  console.log('getVariant', variant)
-  setInterval(checkToggles, 5000)
-}
-
-unleash.on('ready', checkToggles)
-
-If it works, then I would look into any specificities of the environment you're running your other application in and try to address them.
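-One such specificity worth checking: if the backend itself runs inside Docker (the app-local-backend prefix in your logs suggests it might), then localhost inside that container is the container itself, not your host machine. A sketch of pointing the SDK at the host instead, assuming Docker Desktop (on Linux you may need to add host.docker.internal via --add-host=host.docker.internal:host-gateway):
-const unleash = initialize({
-  // host.docker.internal resolves to the host machine from inside the container
-  url: 'http://host.docker.internal:4242/api',
-  appName: 'unleash-node-test',
-  customHeaders: {
-    Authorization: 'default:development.unleash-insecure-api-token'
-  }
-})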
-",Unleash
-"In Flagsmith, is there a way to tie Feature Flags together in a way that if I enable Feature Flag ""A"", and that requires Feature Flag ""B"", then the user is warned that Flag ""B"" is required?
-","1. (I'm a Flagsmith founder)
-Right now, Flagsmith doesn't have dependent flags. We do have plans to implement them (as of Feb 2024), which you can track here.
-As a work around, you can achieve dependency in code by writing a helper function that checks the state of Flag A and Flag B.
-Having said that, we do generally regard flag dependencies as something of an anti-pattern; it adds complexity and can sometimes obscure the resultant application behaviour.
-We do occasionally have folks who present a use case that dependent flags would help with, hence their inclusion on the roadmap.
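-For reference, a minimal sketch of the helper-function workaround mentioned above, using the JavaScript SDK's hasFeature (the flag keys are placeholders, and this assumes flagsmith.init() has already resolved):
-import flagsmith from 'flagsmith';
-
-// Treat feature_a as effectively on only when its prerequisite feature_b is also on
-const isFeatureAOn = () =>
-  flagsmith.hasFeature('feature_a') && flagsmith.hasFeature('feature_b');
-
-// Surface the missing-dependency case, e.g. when wiring up the flags in your app
-if (flagsmith.hasFeature('feature_a') && !flagsmith.hasFeature('feature_b')) {
-  console.warn('feature_a requires feature_b to be enabled');
-}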
-",Flagsmith
-"I want to skip TLS verification for flagsmith.
-Note: I'm using flagsmithv3 go sdk.
-This is my current code:
-func InitializeFlagsmith() *flagsmith.Client {
-    apiKey := ""ser...""
-    customerAPIEndpoint := ""https://example.com/api/v1/""
-
-    // Create Flagsmith client options
-    options := []flagsmith.Option{
-        flagsmith.WithBaseURL(customerAPIEndpoint),
-        flagsmith.WithLocalEvaluation(context.Background()),
-        flagsmith.WithEnvironmentRefreshInterval(60 * time.Second),
-    }
-
-    // Initialize Flagsmith client with custom options
-    flagsmithClient := flagsmith.NewClient(apiKey, options...)
-
-    return flagsmithClient
-}
-
-I have checked flagsmith code so I found this, how can I change the client in the &Client?
-// NewClient creates instance of Client with given configuration.
-func NewClient(apiKey string, options ...Option) *Client {
-    c := &Client{
-        apiKey: apiKey,
-        config: defaultConfig(),
-        client: flaghttp.NewClient(),
-    }
-
-    c.client.SetHeaders(map[string]string{
-        ""Accept"":            ""application/json"",
-        ""X-Environment-Key"": c.apiKey,
-    })
-    c.client.SetTimeout(c.config.timeout)
-    ....
-
-
-","1. It's not really possible right now, but it's good use case, and we will implement this. I have created an issue for this here
-Full disclosure: I work for flagsmith and have written that SDK
-",Flagsmith
-"I want to run a script after a docker image has been initialized. The image in question is a node:16 with python and other stuff
-https://github.com/Flagsmith/flagsmith/blob/main/Dockerfile
-Anyway, if I run the image without commands or an entry point, it does start successfully. If I log in using docker exec -it ###### /bin/bash I can then run sh, bash or even python.
-However having:
-  flagsmith:
-      image: flagsmith/flagsmith:latest
-      environment:
-          # skipping for readability
-      ports:
-          - ""9000:8000""
-      depends_on:
-          - flotto-postgres
-      links:
-          - flotto-postgres
-      volumes: ['./init_flagsmith.py:/init_flagsmith.py', './init_flagsmith.sh:/init_flagsmith.sh']
-      command: /bin/bash '/init_flagsmith.sh'  # <-------- THIS GUY IS NOT WORKING
-
-it does not run, and the exit code is 0 with this message (depending on the tool I run in init_flagsmith.sh):
-
-ERROR: unrecognised command '/bin/bash'
-
-","1. If you look at the end of the Dockerfile you link to, it specifies
-ENTRYPOINT [""./scripts/run-docker.sh""]
-CMD [""migrate-and-serve""]
-
-In the Compose file, the command: overrides the Dockerfile CMD, but it still is passed as arguments to the ENTRYPOINT.  Looking at the run-docker.sh script, it does not accept a normal shell command as its arguments, but rather one of a specific set of command keywords (migrate, serve, ...).
-You could in principle work around this by replacing command: with entrypoint: in your Compose file.  However, you'll still run into the problem that a container only runs one process, and so your setup script runs instead of the normal container process.
-What you might do instead is set up your initialization script to run the main entrypoint script when it finishes.
-#!/bin/sh
-# init_flagsmith.sh
-
-...
-
-# at the very end
-exec ./scripts/run-docker.sh ""$@""
-
-I also might package this up into an image, rather than injecting the files using volumes:.  You can create an image FROM any base image you want to extend.
-# Dockerfile
-FROM flagsmith/flagsmith:latest
-COPY init_flagsmith.sh init_flagsmith.py ./
-ENTRYPOINT [""./init_flagsmith.sh""]  # must be JSON-array syntax
-CMD [""migrate-and-serve""]           # must repeat from the base image
-                                    # if changing ENTRYPOINT
-
-Then you can remove these options from the Compose setup (along with the obsolete links:)
-  flagsmith:
-      build: .
-      environment:
-          # skipping for readability
-      ports:
-          - ""9000:8000""
-      depends_on:
-          - flotto-postgres
-      # but no volumes:, command:, or entrypoint:
-
-",Flagsmith
-"I am trying to use this in my reactjs application: https://docs.flagsmith.com/clients/javascript/
-The way it is initialized is as follows:
-flagsmith
-  .init({
-    environmentID: Config.FLAGSMITH_ENVIRONMENT_ID
-  })
-  .then(() => {
-    flagsmith.startListening(1000);
-  })
-  .catch((error) => {
-    console.log(error)
-  });
-
-This one works well, but I want to wrap it in a function and initialize it from one component only, so I did:
-function initFlagSmith(){
-  flagsmith
-  .init({
-    environmentID: Config.FLAGSMITH_ENVIRONMENT_ID
-  })
-  .then(() => {
-    flagsmith.startListening(1000);
-  })
-  .catch((error) => {
-    console.log(error)
-  });
-   }
-
-But that doesn't work; it fails with the error u.getItem undefined. Looking at flagsmith, I see u.getItem, but AsyncStorage has the method as well.
-Any help?
-Here is a repo: https://github.com/iconicsammy/flagsmithissue
-","1. This was down to a typo, raised a PR here. https://github.com/iconicsammy/flagsmithissue/pull/1
-",Flagsmith
-"I am trying to consume the Flagsmith APIs as documented here .
-It seems some APIs like -- /flags/ need ""x-environment-key"" header, which is working.
-But for others like /environments/ ""x-environment-key"" does not work. I have tried a bearer token authorisation by obtaining the API key ( Authorization: Bearer <> ). But that doesn't work either. There is no clear documentation on the authentication mechanism ( or I have missed it ).
-Can someone throw some pointers ?
-","1. x-environment-key is for the SDK endpoints, where as /environments is an admin endpoint used in the dashboard to list a project's environments.
-Those endpoints are protected via an API token, so you'd need to send
-authorization: Token $API_TOKEN
-You can find your API token in your account settings under keys
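-For example, listing environments with that token could look like the following sketch (the base URL assumes the hosted API at api.flagsmith.com and FLAGSMITH_API_TOKEN is a placeholder; adjust both for a self-hosted instance):
-const res = await fetch('https://api.flagsmith.com/api/v1/environments/', {
-  headers: { Authorization: `Token ${process.env.FLAGSMITH_API_TOKEN}` },
-});
-const environments = await res.json();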
-
-",Flagsmith
-"Launch Darkly have an example(https://github.com/launchdarkly/react-client-sdk/blob/main/examples/async-provider/src/client/index.js) of how to use asyncWithLDProvider with a React project (as below) but  I cannot figure out how to integrate this with my Next app.
-Example
-import { asyncWithLDProvider } from 'launchdarkly-react-client-sdk';
-
-(async () => {
-  const LDProvider = await asyncWithLDProvider({
-    clientSideID: 'client-side-id-123abc',
-    user: {
-      ""key"": ""user-key-123abc"",
-      ""name"": ""Sandy Smith"",
-      ""email"": ""sandy@example.com""
-    },
-    options: { /* ... */ }
-  });
-
-  render(
-    <LDProvider>
-      <YourApp />
-    </LDProvider>,
-    document.getElementById('reactDiv'),
-  );
-})();
-
-I have tried creating a provider in the _app.tsx file and wrapping the entire app, but as asyncWithLDProvider is async and requires the await keyword, this is tricky.
-Something like this:
-const App = ({ Component }) => {
-    // but how to get this async function to work with the React lifecycle stumps me
-    const LDProvider = await asyncWithLDProvider({
-      clientSideID: 'client-side-id-123abc',
-    });
-
-    return (
-        <LDProvider>
-            <Component />
-        </LDProvider>
-    )
-}
-
-
-Here is my _app.tsx (have removed a few imports to save space)
-This is a group project and not all of this was written by me.
-import { Next, Page } from '@my/types';
-import NextHead from 'next/head';
-import { QueryClient, QueryClientProvider } from 'react-query';
-
-const App = ({
-  Component,
-  pageProps: { session, ...restProps },
-}: Next.AppPropsWithLayout) => {
-  const { pathname } = useRouter();
-  const { description, title } = Page.getMetadata(pathname, ROUTES);
-
-  const getLayout = Component.getLayout ?? ((page) => page);
-  const WithRedirectShell = withRedirect(Shell);
-
-  const queryClient = new QueryClient();
-
-  const [colorScheme, setColorScheme] = useLocalStorage<ColorScheme>({
-    key: 'mantine-color-scheme',
-    defaultValue: 'light',
-    getInitialValueInEffect: true,
-  });
-
-  const toggleColorScheme = (value?: ColorScheme) =>
-    setColorScheme(value || (colorScheme === 'dark' ? 'light' : 'dark'));
-
-  useHotkeys([['mod+J', () => toggleColorScheme()]]);
-
-  return (
-    <ColorSchemeProvider
-      colorScheme={colorScheme}
-      toggleColorScheme={toggleColorScheme}
-    >
-      <MantineProvider
-        withGlobalStyles
-        withNormalizeCSS
-        theme={{ colorScheme, ...theme }}
-      >
-        <NotificationsProvider position='top-center' zIndex={2077} limit={5}>
-          <SessionProvider session={session}>
-            <QueryClientProvider client={queryClient}>
-              <NextHead>
-                <Head description={description} title={title} />
-              </NextHead>
-              <WithRedirectShell header={<Header />}>
-                {getLayout(<Component {...restProps} />)}
-              </WithRedirectShell>
-            </QueryClientProvider>
-          </SessionProvider>
-        </NotificationsProvider>
-      </MantineProvider>
-    </ColorSchemeProvider>
-  );
-};
-
-export default App;
-
-
-Here is my index.tsx
-import { Next } from ""@my/types"";
-
-const Home: Next.Page = () => null;
-
-export default Home;
-
-
-","1. The launchdarkly-react-client-sdk also exports a component called LDProvider. You can just import that and pass in your context.
-Using app routing, you can create a Provider component and pass in your user context. At the top of this file, add 'use client' to make it a client component. Then, you can import this component into your root layout. The layout component will remain a server component, and if the children are server components, they will remain server components.
-'use client';
-
-import React from 'react';
-import { LDProvider } from 'launchdarkly-react-client-sdk';
-
-// Minimal Props shape (assumed); adjust to match your app's user model
-type Props = {
-  children: React.ReactNode;
-  user?: { key: string };
-};
-
-export const Providers = ({ children, user }: Props) => {
-  const LDContext = {
-    kind: 'user',
-    key: user?.key ?? 'user key here', // falls back to a placeholder key
-  };
-
-  return (
-    <LDProvider
-      context={LDContext}
-      clientSideID=""YOUR KEY HERE""
-    >
-      {children} 
-    </LDProvider>
-  );
-}
-
-There is also a sample app using page routing: https://github.com/tanben/sample-nextjs
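-If you specifically want asyncWithLDProvider with the pages router (as in the _app.tsx from the question), one common pattern, not from the answer above but a hedged sketch, is to resolve the provider in a useEffect and render it from state once it is ready:
-import { useEffect, useState } from 'react';
-import type { ComponentType, PropsWithChildren } from 'react';
-import type { AppProps } from 'next/app';
-import { asyncWithLDProvider } from 'launchdarkly-react-client-sdk';
-
-const App = ({ Component, pageProps }: AppProps) => {
-  const [LDProvider, setLDProvider] =
-    useState<ComponentType<PropsWithChildren> | null>(null);
-
-  useEffect(() => {
-    (async () => {
-      const Provider = await asyncWithLDProvider({
-        clientSideID: 'client-side-id-123abc', // placeholder
-      });
-      // Wrap in a function so React stores the component itself instead of treating it as an updater
-      setLDProvider(() => Provider);
-    })();
-  }, []);
-
-  if (!LDProvider) return null; // or a loading shell while flags bootstrap
-
-  return (
-    <LDProvider>
-      <Component {...pageProps} />
-    </LDProvider>
-  );
-};
-
-export default App;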
-",LaunchDarkly
-"I'm trying to integrate LaunchDarkly into my project. I'm seeing a problem where the LdClient connects in one version and doesn't in another. The code is essentially identical. I got their example running via a simple Console app in .NET Framework 4.8 using the Server SDK. You can find a version of it here. This is what my working project looks like:
-using System;
-using System.Net.Sockets;
-using System.Runtime.Remoting.Contexts;
-using LaunchDarkly.Sdk;
-using LaunchDarkly.Sdk.Server;
-using Context = LaunchDarkly.Sdk.Context;
-namespace HelloDotNet
-{
-    class Program
-    {
-        // Set SdkKey to your LaunchDarkly SDK key.
-        public const string SdkKey = ""my-sdk-key"";
-        // Set FeatureFlagKey to the feature flag key you want to evaluate.
-        public const string FeatureFlagKey = ""my-flag"";
-        private static void ShowMessage(string s)
-        {
-            Console.WriteLine(""*** "" + s);
-            Console.WriteLine();
-        }
-        static void Main(string[] args)
-        {
-            var ldConfig = Configuration.Default(SdkKey);
-            var client = new LdClient(ldConfig);
-            if (client.Initialized)
-            {
-                ShowMessage(""SDK successfully initialized!"");
-            }
-            else
-            {
-                ShowMessage(""SDK failed to initialize"");
-                Environment.Exit(1);
-            }
-            // Set up the evaluation context. This context should appear on your LaunchDarkly contexts
-            // dashboard soon after you run the demo.
-            var context = Context.Builder(""test"")
-                .Name(""Sandy"")
-                .Build();
-            var flagValue = client.BoolVariation(FeatureFlagKey, context, false);
-            ShowMessage(string.Format(""Feature flag '{0}' is {1} for this context"",
-                FeatureFlagKey, flagValue));
-            // Here we ensure that the SDK shuts down cleanly and has a chance to deliver analytics
-            // events to LaunchDarkly before the program exits. If analytics events are not delivered,
-            // the context attributes and flag usage statistics will not appear on your dashboard. In
-            // a normal long-running application, the SDK would continue running and events would be
-            // delivered automatically in the background.
-            client.Dispose();
-
-            Console.ReadKey();
-        }
-    }
-}
-
-However, if I then transfer that code to just a class in a ClassLibrary, the LdClient never connects and client.Initialized is always false. In turn, this means I can never see my flags toggle. For example, the client in this Singleton never connects:
-using LaunchDarkly.Sdk.Server;
-using System;
-using LaunchDarkly.Sdk;
-using System.Collections.Generic;
-using System.Linq;
-using System.Text;
-using System.Threading.Tasks;
-using System.Xml.Linq;
-using Context = LaunchDarkly.Sdk.Context;
-
-namespace FeatureFlags
-{ 
-    public class FeatureFlagsManager
-    {
-        //Load from something better in the future, hardcoded for now
-        private readonly string _sdkKey = ""my-sdk-key"";
-        private Configuration _ldConfig;
-        private LdClient _ldClient;
-        private Context _context;
-        private static readonly Lazy<FeatureFlagsManager> lazy = new Lazy<FeatureFlagsManager>(() => new FeatureFlagsManager());
-        public static FeatureFlagsManager Instance => lazy.Value;
-
-        private FeatureFlagsManager()
-        {
-            _ldConfig = Configuration.Default(_sdkKey);
-            _ldClient = new LdClient(_ldConfig);
-
-            //load key and name from something in the future
-            _context = Context.Builder(""test"").Name(""Sandy"").Build();
-        }
-
-        public bool GetFeatureFlagBool(string featureFlagKey)
-        {
-            bool enabled = _ldClient.BoolVariation(featureFlagKey, _context, false);
-            return enabled;
-        }
-    }
-}
-
-I also observe that if I turn the first, working example into a class library and put the contents of main into a method and try to run the method, the client does not initialize.
-","1. I had the same issue.  After much gnashing of teeth, it turned out to be a bug in the latest version and specific to .NET Framework 4.8.  I downgraded my NuGet package to 7.1.0 and it worked again.
-I submitted this issue to LaunchDarkly.
-https://github.com/launchdarkly/dotnet-server-sdk/issues/184
-Hope this helps!
-
-2. EDIT: I found the following by noticing LaunchDarkly was logging errors in the compiler console. Specifically, 2024-04-12 15:45:13.829 -05:00 [LaunchDarkly.Sdk.DataSource] ERROR: Unexpected error in stream processing: System.IO.FileLoadException: Could not load file or assembly 'System.Runtime.CompilerServices.Unsafe, Version=4.0.4.1, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)
-I finally got it working. High level:
-
-Use fuslogvw.exe to view binding errors. Saw native images threw error also.
-Use ngen.exe update to update native images.
-Added the binding redirect from my app.config to machine.config for .net framework.
-
-To run fuslogvw.exe and ngen.exe, open Visual Studio in admin mode, then Tools >> Command Line >> Developer Command Prompt.
-fuslogvw.exe
-In the GUI, Settings>>Log bind failures to disk and Enable Custom log path. Enter desired directory.
-In the Developer Command Prompt:
-ngen.exe update, or ngen.exe /? to see what options are available to you.
-You can use this to find machine.config:
-Where Is Machine.Config?
-Modified runtime to have this:
-  <runtime>
-    <assemblyBinding xmlns=""urn:schemas-microsoft-com:asm.v1"">
-      <dependentAssembly>
-        <assemblyIdentity name=""System.Runtime.CompilerServices.Unsafe"" publicKeyToken=""b03f5f7f11d50a3a"" culture=""neutral"" />
-        <bindingRedirect oldVersion=""0.0.0.0-6.0.0.0"" newVersion=""6.0.0.0"" />
-      </dependentAssembly>
-    </assemblyBinding>
-  </runtime>
-
-",LaunchDarkly
-"Currently We are evaluating LaunchDarkly to incorporate. We have a specific use case like below.
-
-App -> LD (LaunchDarkly)
-LD -> My-Own-Service  // feature flag evaluation based on context params passed to my server side.
-My-Own-Service -> LD // sends result back.
-LD -> App. //LD notifies App.
-
-Please let me know if this is possible.
-Prior evaluation I did:
-
-I have gone through the Integrations tab on the LD site after logging in, to see whether such an integration is available. I have not found one, or I may have missed or misread it.
-
-I know you may ask why do it this way when you can talk to your service directly :-) // ignore this.
-","1. You should rather set it up this way:
-1. App -> My-Own-Service // Get user's context
-2. App -> LD (LaunchDarkly) // feature flag evaluation based on the context
-received from your service
-
-That is, the first thing you need to do before getting feature flag values from LD is to get the user's context.
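-As an illustration of that flow on a web client, here is a hedged sketch using the JavaScript client SDK (the /api/ld-context endpoint on My-Own-Service, the client-side ID, and the flag key are placeholders):
-import { initialize } from 'launchdarkly-js-client-sdk';
-
-// 1. App -> My-Own-Service: fetch the user's context (endpoint name is hypothetical)
-const context = await (await fetch('/api/ld-context')).json();
-
-// 2. App -> LaunchDarkly: evaluate flags against that context
-const client = initialize('client-side-id-123abc', context);
-await client.waitForInitialization();
-const enabled = client.variation('my-flag', false);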
-",LaunchDarkly
-"I'm tring to get passed this error and can't seem to. This is the line I'm getting it on.
-const client: LDClient = LaunchDarkly.init(key);
-
-I'm using TypeScript and Next.js.
-I'm using react-client-sdk version 3.0.10 and node-server-sdk version 7.0.3 of LaunchDarkly.
-","1. This is what I did to get past this error.
-export async function getClient(): Promise<LaunchDarkly.LDClient> {
-  const client = LaunchDarkly.init(process.env.LAUNCHDARKLY_SDK_KEY!);
-  await client.waitForInitialization();
-  return client;
-}
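-
-And a small usage sketch of that helper (the flag key and context are placeholders):
-const client = await getClient();
-const context = { kind: 'user', key: 'user-key-123abc' };
-const showFeature = await client.variation('my-flag', context, false);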
-
-",LaunchDarkly
-"I want a shortest way to get 1st char of every word in a string in C#.
-what I have done is:
-string str = ""This is my style"";
-string [] output = str.Split(' ');
-foreach(string s in output)
-{
-   Console.Write(s[0]+"" "");
-}
-
-// Output
-T i m s
-
-I want to display the same output in a shorter way...
-Thanks
-","1. var firstChars = str.Split(' ').Select(s => s[0]);
-
-If the performance is critical:
-var firstChars = str.Where((ch, index) => ch != ' ' 
-                       && (index == 0 || str[index - 1] == ' '));
-
-The second solution is less readable, but loop the string once.
-
-2. string str = ""This is my style""; 
-str.Split(' ').ToList().ForEach(i => Console.Write(i[0] + "" ""));
-
-
-3. Print first letter of each word in a string
-string SampleText = ""Stack Overflow Com"";
-string ShortName = """";
-SampleText.Split(' ').ToList().ForEach(i => ShortName += i[0].ToString());
-
-Output:
-SOC
-
-",Split
-"If I run this code as a ps1-file in the console it gives the expected output, but if I 'compile' the script with PS2EXE there's no second part of the string:
-$a = ""first Part`r`n`r`nsecond Part""
-$a
-$b = $a.split(""`r`n`r`n"")
-$c = $b[0]
-$d = $b[1]
-write-host ""Part 1: $c""
-write-host ""Part 2: $d""
-
-Console:
-
-Exe:
-
-Somehow the executable seems to be unable to split the string correctly.
-Any ideas?
-","1. I've come to an explanation, finally:
-I wrote the above code with Powershell 7, whereas ps2exe uses Powershell version 5.
-These two versions differ in handling a split command, as I try to show with the following code. If you run it under PS 5, the parts of the split string will be stored in $b[0] and $b[1] only if you use a one-character-delimiter. With each added character to the delimiter the second value will be stored at one index higher. But if you run the code under PS 7, the values will always be stored in [0] and [1].
-$delimiter = ""@@""
-$a = ""first Part$($delimiter)second Part""
-write-host ""String to split: $a""
-write-host ""Delimiter: $delimiter`r`n""
-$b = $a.split($delimiter)
-write-host ""`t0: $($b[0])""
-write-host ""`t1: $($b[1])""
-write-host ""`t2: $($b[2])""
-write-host ""`t3: $($b[3])""
-write-host ""`t4: $($b[4])""
-
-Just in case someone runs into the same issue.
-Thanks for the advice, Booga Roo.
-",Split
-"I need to split an input string at contiguous spaces into a list of strings. The input may include single or double quoted strings, which must be ignored.
-How can I split a string at the spaces, but ignore quoted strings, so the result of splitting this
-me   you   ""us and them""   'everyone else' them
-
-returns this?
-me
-you
-us and them
-everyone else
-them
-
-Duplicate of this question but for needing to ignore single-quoted strings, as well.
-","1. This excellent solution has been modified to ignore single-quoted strings, as well, and remove all leading and trailing quotes from each argument.
-$people  = 'me   you   ""us and them""   ''everyone else'' them'
-
-$pattern = '(?x)
-  [ ]+              # Split on one or more spaces (greedy)
-  (?=               # if followed by one of the following:
-    (?:[^""'']|      #   any character other a double or single quote, or 
-    (?:""[^""]*"")|    #   a double-quoted string, or
-    (?:''[^'']*'')) #   a single-quoted string.
-  *$)               # zero or more times to the end of the line.
-'  
-   
-[regex]::Split($people, $pattern) -replace '^[""'']|[""'']$', ''
-
-Results in:
-me
-you
-us and them
-everyone else
-them
-
-In short, this regex matches a string of blanks so long as everything that follows is a non-quote or a quoted string--effectively treating quoted strings as single characters.
-
-2. 
-A concise solution based on PowerShell's regex-based -split and -match operators (and a verbatim here-string to provide the input):
-# Returns the following array:
-#   @('me', 'you', 'us and them', 'everyone else', 'them')
-@'
-me   you   ""us and them""   'everyone else' them
-'@ -split '""(.*?)""|''(.*?)''|(\S+)' -match '\S'
-
-Note:
-
-Tokens with escaped, embedded "" or ' characters (e.g., ""Nat """"King"""" Cole"" are not supported, and any empty tokens ('' or """") are effectively eliminated from the result array.
-
-For an explanation of the regex used with -split as well as the option to experiment with it, see this regex101.com page.
-
--match '\S' operates on -split's result array and eliminates empty or all-whitespace elements from it, by filtering in only those elements that contain at least one non-whitespace character (\S).
-
-This extra, filtering step is necessary, because -split is being slightly repurposed above: the regex passed to it normally describes the separators between elements, whereas here it describes the elements, and it is the enclosure in (...) (capture groups) that also causes what these groups matched to be included in the result array, in addition to the runs of spaces that are technically now the ""elements"", as well as an initial, empty element that precedes the first ""separator""; -match '\S' in effect eliminates all these unwanted elements.
-
-
-
-
-Alternatively, use .NET APIs directly, namely [regex]::Matches():
-$string = @'
-me   you   ""us and them""   'everyone else' them
-'@
-
-# Returns the following array:
-#   @('me', 'you', 'us and them', 'everyone else', 'them')
-[regex]::Matches($string, '""(?<a>.*?)""|''(?<a>.*?)''|(?<a>\S+)').
-  ForEach({ $_.Groups['a'].Value })
-
-
-This more directly expresses the intent of matching and extracting only the arguments embedded in the string.
-
-Named capture groups ((?<name>...) are used to capture the arguments without enclosing quotes.
-Using the same name (?<a>) for multiple groups means that whichever one captures a specific match reports it via that name in the .Groups property of the resulting [Match] instance, and the captured text can therefore be accessed via .Groups['a'].Value
-
-
-GitHub issue #7867 is a feature request for introducing a -matchall operator, which would enable a more PowerShell-idiomatic solution:
-# WISHFUL THINKING, as of PowerShell 7.4.x
-($string -matchall '""(?<a>.*?)""|''(?<a>.*?)''|(?<a>\S+)').
-  ForEach({ $_.Groups['a'].Value })
-
-
-
-
-3. An alternative approach would be to match in order one or more of double-quoted strings, single-quoted strings, or non white spaces:
-$people  = 'me   you   ""us and them""   ''everyone else'' them'
-$pattern = '(?:""(?:[^""])*""|''(?:[^''])*''|\S)+'
-([regex]::Matches($people, $pattern)).Value
-
-The order is important, you want the regex to match/grab the quoted items as a whole before trying to grab non white spaces.
-The pattern:
-(?:               #  Start a non-capturing group
-   ""(?:[^""])*""    #  match double-quoted string
-   |              #  or
-   '(?:[^'])*'    #  match single-quoted string
-   |              #  or
-   \S             #  match a non white space character
-)+                #  repeat non-capturing group 1 or more times
-
-",Split
-"df<-c(""Abc1038"")
-
-df<-strsplit(df, ""(?=[A-Za-z])(?<=[0-9])|(?=[0-9])(?<=[A-Za-z])"", perl=TRUE)
-
-[[1]]
-[1] ""Abc""  ""1038""
-
-From here, I would like to separate the one column into 2; say one is named ""text"" and another is named ""num"". How should I go about it?
-","1. Based on your output, you can easily get a data.frame:
-df <- strsplit(df, ""(?=[A-Za-z])(?<=[0-9])|(?=[0-9])(?<=[A-Za-z])"", perl=TRUE)
-
-library(dplyr)
-do.call(rbind, df)|>
-  data.frame()|>
-  rename(text = X1, num = X2)
-  text  num
-1  Abc 1038
-
-
-2. Try
-as.data.frame(
-    t(
-        setNames(
-            strsplit(df, ""(?<=\\D)(?=\\d)"", perl = TRUE)[[1]],
-            c(""text"", ""num"")
-        )
-    )
-)
-
-which gives
-  text  num
-1  Abc 1038
-
-",Split
-"I have a data frame which I want split into several elements of a named list by filtering by year and selecting a single variable.
-df <- data.frame(year = sample(rep(2010:2020, times=3)), x = runif(33))
-
-What I want to get is something similar to
-yr2010 <- df %>%
-    filter(year == 2010) %>%
-    select(x) %>%
-    as.matrix()
-
-yr2011 <- df %>%
-    filter(year == 2011) %>%
-    select(x) %>%
-    as.matrix()
-
-...
-
-yr2020 <- df %>%
-    filter(year == 2020) %>%
-    select(x) %>%
-    as.matrix()
-
-df_list <- list(yr2010 = yr2010, ..., yr2020 = yr2020)
-
-How can I do that?
-","1. 
-base
-
-split(df['x'], paste0(""yr"", df$year))
-
-
-
-dplyr
-
-df %>%
-  mutate(year = paste0(""yr"", year)) %>%
-  group_by(year) %>% {
-    setNames(group_split(., .keep = FALSE), group_keys(.)$year)
-  }
-
-Note: the documentation of group_split says
-
-group_split() is not stable because you can achieve very similar results by manipulating the nested column returned from tidyr::nest(.by =). That also retains the group keys all within a single data structure. group_split() may be deprecated in the future.
-
-
-
-tidyverse
-
-library(tidyverse)
-
-df %>%
-  mutate(year = paste0(""yr"", year)) %>%  # dplyr
-  nest(.by = year) %>%                   # tidyr
-  deframe()                              # tibble
-
-
-Output
-# $yr2010
-# # A tibble: 3 × 1
-#       x
-#   <dbl>
-# 1 0.327
-# 2 0.163
-# 3 0.939
-# 
-# $yr2011
-# # A tibble: 3 × 1
-#        x
-#    <dbl>
-# 1 0.202 
-# 2 0.0209
-# 3 0.862
-# ...
-
-
-2. A dplyr solution using group_split:
-library(dplyr)
-
-df |> 
-  group_split(year, .keep = FALSE) |> 
-  setNames(paste0(""yr"", sort(unique(df$year))))
-
-#<list_of<tbl_df<x:double>>[11]>
-#$yr2010
-# A tibble: 3 × 1
-#       x
-#   <dbl>
-#1 0.849 
-#2 0.845 
-#3 0.0853
-
-#$yr2011
-# A tibble: 3 × 1
-#      x
-#  <dbl>
-#1 0.394
-#2 0.304
-#3 0.488
-...
-
-Data:
-> dput(df)
-structure(list(year = c(2014L, 2012L, 2016L, 2020L, 2013L, 2017L, 
-2010L, 2014L, 2018L, 2013L, 2019L, 2018L, 2020L, 2012L, 2011L, 
-2020L, 2012L, 2013L, 2015L, 2019L, 2017L, 2018L, 2016L, 2010L, 
-2014L, 2016L, 2015L, 2011L, 2015L, 2017L, 2019L, 2010L, 2011L
-), x = c(0.536158735398203, 0.835086770122871, 0.37455660966225, 
-0.925604712218046, 0.872673955745995, 0.306027633370832, 0.849452404072508, 
-0.276228162227198, 0.575324499513954, 0.408695956459269, 0.980199206154794, 
-0.0974830950144678, 0.00175619078800082, 0.980167318368331, 0.394283950561658, 
-0.830085879191756, 0.651535265613347, 0.699725363403559, 0.736490845214576, 
-0.979100176133215, 0.546931945951656, 0.869536967016757, 0.548196423565969, 
-0.84485424309969, 0.358381181955338, 0.231132469605654, 0.00961211626417935, 
-0.30368754058145, 0.144809128949419, 0.401646558428183, 0.689442926784977, 
-0.0852784754242748, 0.488420734880492)), class = ""data.frame"", row.names = c(NA, 
--33L))
-
-",Split
-"I'm trying to traverse a graph (directed) where the edges have weights.
-How do I use the weights to define the order in which an edge is followed? The lower the weight (the absolute value of the weight, to be exact: edges can have negative values, but this is a detail), the higher the priority: low weights are followed first. In case two edges have the same weight (which is possible), I would follow them simultaneously, or, if not possible, at random.
-The plan is to make a change to one of the nodes' values, and then let it propagate over the graph: the propagation must follow the rule of edge weights, and each node reached will make a change to one of its values, and so on.
-I have yet to define the exact changes, and how to handle cycles, but to begin with, I am unable to propagate over the graph in a controlled manner following a rule of minimal weights.
-In order to see the propagation, I'm pushing it into Gephi.
-I did succeed to push a propagation:
-traversal = vg.V().has('Value','SomeStartNodeIChose').repeat(outE().otherV().simplePath()).until(has('Value','SomeEndNodeIChose')).path().by('Value').by('Weight');[]
-:> traversal
-
-and this works nicely: I can see the nodes lighting up like a christmas tree.
-But I cannot insert a min() here for the world...
-Can anyone explain to a newbie how this works?
-My attempts are clearly confused, so...
-traversal = vg.V().has('Value','AirBnb').repeat(outE().where(min()).otherV().simplePath()).until(has('Value','Comfort')).path().by('Value').by('Update Weight');[]
-
-throws: org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerEdge cannot be cast to java.lang.Comparable
-
-
-traversal = vg.V().has('Value','AirBnb').repeat(outE().where('Weight',is(min())).otherV().simplePath()).until(has('Value','Comfort')).path().by('Value').by('Update Weight');[]
-throws: No signature of method: org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.DefaultGraphTraversal.where() is applicable for argument types: (String, org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.DefaultGraphTraversal) values: [Weight, [IsStep(eq([MinGlobalStep]))]]
-
-and I can go on and on...
-I definitely did not understand how this works...
-","1. min() is a reducing barrier, so you're not going to be able to use that within a repeat().  Instead, you may want to use order().by().limit(1) to get the minimum weighted edge:
-g.V().has('Value','AirBnb').
-    repeat(
-        outE().order().by('weight').limit(1).
-        inV().
-        simplePath()
-        ).
-    until(outE().count().is(eq(0))).path()
-
-",Gremlin
-"I am trying to return something like
-{
-  ""label1"" : [""prop1"",""prop2""],
-  ""label2"" : [""prop3"",""prop4""],
-  ""label2"" : [""prop1"",""prop3""]
-}
-
-etc where the labels[N] are vertex label values and the props array are the properties for these vertices.
-I can get a list of labels and I can get a list of properties but I can't combine them in a single object. I could potentially do the two queries and combine the two arrays ultimately, but something like 
-g.V().valueMap().select(keys).dedup();
-
-only gets properties where there are any, so if a vertex type doesn't have any properties the array returned by this is a different size than doing 
-g.V().label().dedup();
-
-This is using gremlin syntax (TP3)
-Thanks 
-","1. I'm assuming that you're trying to get sort of a schema definition. Note that this will be a fairly expensive traversal as you have to iterate all vertices to do this:
-gremlin> g.V().
-......1>   group().
-......2>     by(label).
-......3>     by(properties().
-......4>        label().
-......5>        dedup().
-......6>        fold())
-==>[software:[name,lang],person:[name,age]]
-
-
-2. From 2019: if you are using Janus (as the OP mentioned they are in the comments to stephen mallette's answer), the default print.. commands are available:
-https://github.com/JanusGraph/janusgraph/blob/a2e3521d28de37ca01d08166777d77c0943a9db5/janusgraph-core/src/main/java/org/janusgraph/core/schema/JanusGraphManagement.java#L413
-These are:
-printEdgeLabels()     
-printIndexes()        
-printPropertyKeys()   
-printSchema()         
-printVertexLabels()   
-
-They can be used like this:
-gremlin> m = graph.openManagement()
-==>org.janusgraph.graphdb.database.management.ManagementSystem@5aa2758a
-
-gremlin> m.printVertexLabels()
-==>------------------------------------------------------------------------------------------------
-Vertex Label Name              | Partitioned | Static                                             |
----------------------------------------------------------------------------------------------------
-titan                          | false       | false                                              |
-location                       | false       | false                                              |
-god                            | false       | false                                              |
-demigod                        | false       | false                                              |
-human                          | false       | false                                              |
-monster                        | false       | false                                              |
----------------------------------------------------------------------------------------------------
-
-gremlin> m.printSchema()
-==>------------------------------------------------------------------------------------------------
-Vertex Label Name              | Partitioned | Static                                             |
----------------------------------------------------------------------------------------------------
-titan                          | false       | false                                              |
-location                       | false       | false                                              |
-god                            | false       | false                                              |
-demigod                        | false       | false                                              |
-human                          | false       | false                                              |
-monster                        | false       | false                                              |
----------------------------------------------------------------------------------------------------
-Edge Label Name                | Directed    | Unidirected | Multiplicity                         |
----------------------------------------------------------------------------------------------------
-brother                        | true        | false       | MULTI                                |
-father                         | true        | false       | MANY2ONE                             |
-mother                         | true        | false       | MANY2ONE                             |
-battled                        | true        | false       | MULTI                                |
-lives                          | true        | false       | MULTI                                |
-pet                            | true        | false       | MULTI                                |
----------------------------------------------------------------------------------------------------
-Property Key Name              | Cardinality | Data Type                                          |
----------------------------------------------------------------------------------------------------
-name                           | SINGLE      | class java.lang.String                             |
-age                            | SINGLE      | class java.lang.Integer                            |
-time                           | SINGLE      | class java.lang.Integer                            |
-reason                         | SINGLE      | class java.lang.String                             |
-place                          | SINGLE      | class org.janusgraph.core.attribute.Geoshape       |
----------------------------------------------------------------------------------------------------
-Graph Index (Vertex)           | Type        | Unique    | Backing        | Key:           Status |
----------------------------------------------------------------------------------------------------
-name                           | Composite   | true      | internalindex  | name:         ENABLED |
-vertices                       | Mixed       | false     | search         | age:          ENABLED |
----------------------------------------------------------------------------------------------------
-Graph Index (Edge)             | Type        | Unique    | Backing        | Key:           Status |
----------------------------------------------------------------------------------------------------
-edges                          | Mixed       | false     | search         | reason:       ENABLED |
-                               |             |           |                | place:        ENABLED |
----------------------------------------------------------------------------------------------------
-Relation Index (VCI)           | Type        | Direction | Sort Key       | Order    |     Status |
----------------------------------------------------------------------------------------------------
-battlesByTime                  | battled     | BOTH      | time           | desc     |    ENABLED |
----------------------------------------------------------------------------------------------------
-
-",Gremlin
-"I'm new to graphDB, I have a graph as shown in the attached image.
-
-I want to find a connected path like ""A1,E1,A2,D2,A3"" for this I wrote the following query
-g.V().hasLabel('A1').repeat(inE('edge').outV().outE().inV().cyclicPath()).times(5).path().map(unfold().not(hasLabel('edge')).fold()).count()
-Where the label of all the edges is ""edge"". This query gives me output like below
-A1,E1,A2,B2,A2
-A1,E1,A1,D1,A1
-A1,E1,A2,D2,A3
-How can I modify my query to get ""A1,E1,A2,D2,A3"" as the answer and avoid other combinations, as I'm interested only in the connections between two different A's, like A1, A2, and A3, and what connects them? I'm not interested in (A1,B1,A1,C1,A1,D1,A1,E1,A1) as those are all attributes belonging to A1. I'm interested in finding the attributes that connect different A's, like ""A1,E1,A2,D2,A3"".
-Thanks,
-","1. I would try not to use labels as unique identifiers. Labels are meant to be lower cardinality or groupings.  Instead, look to use the vertex ID or a property on each vertex to denote it's unique name.
-You could potentially use the first letter of your identifiers as the label, though.  So you could have vertices with labels of A, B, C, D, E but with IDs of A1, A2... etc.
-Once you've done that, the query you're looking for should look something like:
-g.V('A1').
-  repeat(both().simplePath()).
-  until(hasId('A3')).
-  path().
-  by(id())
-
-Returns:
-A1, E1, A2, D2, A3
-
-",Gremlin
-"I'm trying to add an edge between two vertexes using the gremlin scala framework connected to a remote JanusGraph server. While this edge is created, I still get a ""org.apache.tinkerpop.shaded.kryo.KryoException: java.lang.NegativeArraySizeException"" error exception
-The edge and vertexes do get created, but the error is still thrown and I can not catch it. 
-I'm using JanusGraph 0.3.1, and tried with different versions of scala gremlin (3.3, 3.4), all leading to the same error.
-val serializer = new GryoMessageSerializerV3d0(GryoMapper.build.addRegistry(TinkerIoRegistryV3d0.instance))
-val cluster = Cluster.build.addContactPoint(""localhost"").port(8182).serializer(serializer).create
-implicit val graph: ScalaGraph = EmptyGraph.instance.asScala.configure(_.withRemote(DriverRemoteConnection.using(cluster)))
-
-val Founded = Key[String](""founded"")
-val Distance = Key[Int](""distance"")
-
-// create labelled vertex
-val paris = graph + ""Paris""
-
-// create vertex with typed properties
-val london = graph + (""London"", Founded -> ""43 AD"")
-
-// create labelled edges
-paris --- (""OneWayRoad"",  Distance -> 495) --> london
-cluster.close()
-
-Error message thrown at runtime
-15:34:02.704 [gremlin-driver-loop-1] WARN  o.a.t.g.driver.MessageSerializer - Response [PooledUnsafeDirectByteBuf(ridx: 92, widx: 92, cap: 92)] could not be deserialized by org.apache.tinkerpop.gremlin.driver.ser.AbstractGryoMessageSerializerV3d0.
-org.apache.tinkerpop.shaded.kryo.KryoException: java.lang.NegativeArraySizeException
-Serialization trace:
-id (org.apache.tinkerpop.gremlin.structure.util.reference.ReferenceEdge)
-    at org.apache.tinkerpop.shaded.kryo.serializers.ObjectField.read(ObjectField.java:144)
-    at org.apache.tinkerpop.shaded.kryo.serializers.FieldSerializer.read(FieldSerializer.java:557)
-...
-Caused by: java.lang.NegativeArraySizeException: null
-    at org.apache.tinkerpop.shaded.kryo.io.Input.readBytes(Input.java:325)
-[...]
-15:34:02.705 [gremlin-driver-loop-1] ERROR o.a.t.g.d.Handler$GremlinResponseHandler - Could not process the response
-io.netty.handler.codec.DecoderException: org.apache.tinkerpop.gremlin.driver.ser.SerializationException: org.apache.tinkerpop.shaded.kryo.KryoException: java.lang.NegativeArraySizeException
-Serialization trace:
-id (org.apache.tinkerpop.gremlin.structure.util.reference.ReferenceEdge)
-    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:98)
-[...]
-Caused by: org.apache.tinkerpop.gremlin.driver.ser.SerializationException: org.apache.tinkerpop.shaded.kryo.KryoException: java.lang.NegativeArraySizeException
-Serialization trace:
-id (org.apache.tinkerpop.gremlin.structure.util.reference.ReferenceEdge)
-    at org.apache.tinkerpop.gremlin.driver.ser.AbstractGryoMessageSerializerV3d0.deserializeResponse(AbstractGryoMessageSerializerV3d0.java:159)
-[...]
-Caused by: org.apache.tinkerpop.shaded.kryo.KryoException: java.lang.NegativeArraySizeException
-Serialization trace:
-id (org.apache.tinkerpop.gremlin.structure.util.reference.ReferenceEdge)
-    at org.apache.tinkerpop.shaded.kryo.serializers.ObjectField.read(ObjectField.java:144)
-[...]
-Caused by: java.lang.NegativeArraySizeException: null
-    at org.apache.tinkerpop.shaded.kryo.io.Input.readBytes(Input.java:325)
-[...]
-
-The debugger shows me that the error is thrown when the edge is created. Using
-val edge = g.V(paris).as(""a"").V(london).addE(""test"").iterate()
-
-leads to the same error.
-Here's my gremlin-server.yaml configuraiton file
-host: 0.0.0.0
-port: 8182
-scriptEvaluationTimeout: 180000
-channelizer: org.apache.tinkerpop.gremlin.server.channel.WebSocketChannelizer
-graphs: {
-  graph: conf/gremlin-server/janusgraph-cql-es-server.properties,
-  ConfigurationManagementGraph: conf/janusgraph-cql-configurationgraph.properties
-}
-scriptEngines: {
-  gremlin-groovy: {
-    plugins: { org.janusgraph.graphdb.tinkerpop.plugin.JanusGraphGremlinPlugin: {},
-               org.apache.tinkerpop.gremlin.server.jsr223.GremlinServerGremlinPlugin: {},
-               org.apache.tinkerpop.gremlin.tinkergraph.jsr223.TinkerGraphGremlinPlugin: {},
-               org.apache.tinkerpop.gremlin.jsr223.ImportGremlinPlugin: {classImports: [java.lang.Math], methodImports: [java.lang.Math#*]},
-               org.apache.tinkerpop.gremlin.jsr223.ScriptFileGremlinPlugin: {files: [scripts/empty-sample.groovy]}}}}
-serializers:
-  - { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
-  - { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0, config: { serializeResultToString: true }}
-  - { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV3d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
-  # Older serialization versions for backwards compatibility:
-  - { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
-  - { className: org.apache.tinkerpop.gremlin.driver.ser.GryoLiteMessageSerializerV1d0, config: {ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
-  - { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0, config: { serializeResultToString: true }}
-  - { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV2d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
-  - { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistryV1d0] }}
-  - { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistryV1d0] }}
-processors:
-  - { className: org.apache.tinkerpop.gremlin.server.op.session.SessionOpProcessor, config: { sessionTimeout: 28800000 }}
-  - { className: org.apache.tinkerpop.gremlin.server.op.traversal.TraversalOpProcessor, config: { cacheExpirationTime: 600000, cacheMaxSize: 1000 }}
-metrics: {
-  consoleReporter: {enabled: true, interval: 180000},
-  csvReporter: {enabled: true, interval: 180000, fileName: /tmp/gremlin-server-metrics.csv},
-  jmxReporter: {enabled: true},
-  slf4jReporter: {enabled: true, interval: 180000},
-  gangliaReporter: {enabled: false, interval: 180000, addressingMode: MULTICAST},
-  graphiteReporter: {enabled: false, interval: 180000}}
-maxInitialLineLength: 4096
-maxHeaderSize: 8192
-maxChunkSize: 8192
-maxContentLength: 65536
-maxAccumulationBufferComponents: 1024
-resultIterationBatchSize: 64
-writeBufferLowWaterMark: 32768
-writeBufferHighWaterMark: 65536
-
-The error does not appear when working without using a remote server:
-implicit val graph: ScalaGraph = EmptyGraph.instance
-
-is working fine.
-","1. This issue typically points to a compatibility problem with Gryo which is usually exposed when TinkerPop versions are mixed. For the most part, Gryo tends to be backward compatible across versions, thus Gryo 1.0 from 3.3.3 will work with 3.3.4, but there are occasions where that is not always true (e.g. a bug is discovered in the core of the format and a breaking change is necessary.) 
-TinkerPop recommends that when using Gryo, that you align the TinkerPop version on the server with the client. So JanusGraph 0.3.1, uses TinkerPop 3.3.3, therefore your Gremlin Scala version should be 3.3.3.x (I'm pretty sure that Gremlin Scala binds their first three version numbers to TinkerPop's). It seems that you've tried that already, so let's next consider your configuration.
-I note that you've added the TinkerIoRegistryV3d0 but since you're using JanusGraph, you might also need to add their custom IoRegistry:
-GryoMessageSerializerV3d0(GryoMapper.build.addRegistry(JanusGraphIoRegistry.getInstance()))
-
-You could add the TinkerIoRegistryV3d0 if your use case requires it - typically only useful for returning subgraphs. If none of that fixes the problem then my only suggestion would be to simplify heavily: remove all serializer configurations from Gremlin Server configuration except for the one you are using, make sure you can connect to that with just some simple scripts configuring your driver using just Gremlin Console and take note of what that configuration is to make the connection work so that you can port the configuration to Gremlin Scala. 
-I see that you have currently isolated the problem to:
-val edge = g.V(paris).as(""a"").V(london).addE(""test"").iterate()
-
-Note that this code doesn't do exactly what I think you want for a couple of reasons:
-
-If you want the edge back you need to next() and not iterate() 
-That isn't adding an edge between ""paris"" and ""london"" - it's adding a self-referencing edge to ""london"". You need to specify the from() or to() after addE().
-
-I hope something in there helps.
-
-2. From the experience I just had:
-I [dropped the data then...] moved all upsert operations into a queue with concurrency 1.
-The NegativeArraySizeException problem disappeared.
-Hope this is going to help in your specific environment.
-",Gremlin
-"For compatibility with APIs previously exposed by a Java library for Spring Boot, which I am modifying to integrate with AWS Neptune, I need to load a ""node"" and all its children, along with their edges (from node to children), from Neptune. However, I haven't been able to accomplish this with a single Gremlin query so far; the only way I found, which I'll outline below, involves two separate queries. Obviously, this significantly impacts performance. Is there a more elegant and optimized way to achieve the same result?
-(as you can see, nodes have an attribute called entityId and a name)
-@Repository
-@Slf4j
-public class VisibilityRepositoryGremlin {
-
-    private final GraphTraversalSource g;
-
-    @Autowired
-    private Client client;
-
-    @Autowired
-    public VisibilityRepositoryGremlin(GraphTraversalSource g) {
-        this.g = g;
-    }
-
-
-    public Mono<Node> findVisibleNode(UUID originEntityId, String originLabel,
-                                      UUID targetEntityId, String targetLabel, boolean isPrivileged) {
-
-        return g.V()
-            .hasLabel(originLabel)
-            .has(Node.ENTITY_ID_PROPERTY, originEntityId.toString())
-            .repeat(outE(Node.CAN_SEE_REL_TYPE, CONTAINS_REL_TYPE)
-                .has(VisibilityGroupRelationship.VISIBILITY, isPrivileged ?
-                    within(VisibilityGroupRelationship.Visibility.PRIVILEGED.name(),
-                           VisibilityGroupRelationship.Visibility.STANDARD.name()) :
-                    within(VisibilityGroupRelationship.Visibility.STANDARD.name()))
-                .otherV().dedup())
-            .until(hasLabel(targetLabel)
-                .has(Node.ENTITY_ID_PROPERTY, targetEntityId.toString()))
-            .elementMap().fold().next()
-            .stream()
-            .map(VisibilityRepositoryGremlin::getNodeFromVertexProps)
-            .map(vg->findById(vg.getId()))
-            .findAny().orElse(Mono.error(new NotFoundException((""Entity not found""))));
-    }
-
-    
-
-    @SuppressWarnings(""unchecked"")
-    public Mono<Node> findById(String s) {
-
-        List<Map<String, Object>> result= g.V().hasId(s)
-            .project(""visibilityGroup"", ""children"")
-            .by(elementMap())
-            .by(outE().hasLabel(CONTAINS_REL_TYPE)
-                .project(""edge"", ""visibility"")
-                .by(inV().elementMap())
-                .by(""visibility"")
-                .fold())
-            .fold().next();
-
-        if (result.isEmpty()) return Mono.error(new NotFoundException(""Not found""));
-
-        Node vg = getNodeFromVertexProps((Map<Object, Object>)result.get(0).get(""visibilityGroup""));
-
-        List<Map<Object, Object>> childrenMaps = (List<Map<Object, Object>>)result.get(0).get(""children"");
-
-        childrenMaps.forEach(map -> {
-            Map<Object, Object> edgeProps = (Map<Object, Object>) map.get(""edge"");
-            Node child = getNodeFromVertexProps(edgeProps);
-            if (VisibilityGroupRelationship.Visibility.valueOf((String)map.get(""visibility"")) == 
-                    VisibilityGroupRelationship.Visibility.PRIVILEGED)
-                vg.addExplicitlyVisibleChild(child);
-            else vg.addChild(child);
-        });
-
-        return Mono.just(vg);
-    }
-
-    
-
-    private static Node getNodeFromVertexProps(Map<Object, Object> r) {
-        return Node.builder()
-            .id(r.get(T.id).toString())
-            .entityId(UUID.fromString(r.get(Node.ENTITY_ID_PROPERTY).toString()))
-            .nodeName(r.get(""nodeName"").toString())
-            .label(r.get(T.label).toString())
-            .build();
-    }
-}
-
-
-","1. You're doing a lot of extra work at the bottom of the query within the findVisibleNode method.  You could combine those two into:
-       g.V()
-            .hasLabel(originLabel)
-            .has(Node.ENTITY_ID_PROPERTY, originEntityId.toString())
-            .repeat(outE(Node.CAN_SEE_REL_TYPE, CONTAINS_REL_TYPE)
-                .has(VisibilityGroupRelationship.VISIBILITY, isPrivileged ?
-                    within(VisibilityGroupRelationship.Visibility.PRIVILEGED.name(),
-                           VisibilityGroupRelationship.Visibility.STANDARD.name()) :
-                    within(VisibilityGroupRelationship.Visibility.STANDARD.name()))
-                .otherV().dedup())
-            .until(hasLabel(targetLabel)
-                .has(Node.ENTITY_ID_PROPERTY, targetEntityId.toString()))
-            .project(""visibilityGroup"", ""children"")
-            .by(elementMap())
-            .by(outE().hasLabel(CONTAINS_REL_TYPE)
-                .project(""edge"", ""visibility"")
-                .by(inV().elementMap())
-                .by(""visibility"")
-                .fold())
-            .fold().next();
-
-That would essentially get the same results.
-",Gremlin
-"i have the following snippet of code having two images one for light and other for dark mode. media query is not working for ios gmail app. how can i show a .img-dark when applying dark mode in ios gmail app?
-@media (prefers-color-scheme: dark) {
-.img-dark {
-      display: inline-block !important;
-    }
-    .img-light {
-      display: none !important;
-    }
-}
-
-
-
-<img
-                                      class=""header-logo img-light""
-                                      src=""https://viwell.com/wp-content/uploads/2023/10/viwell-logo.png""
-                                      alt=""Viwell Logo""
-                                      style=""width: 284px; height: 64px""
-                                      width=""284""
-                                      height=""64"" /><img
-                                      class=""header-logo img-dark""
-                                      src=""https://viwell.com/wp-content/uploads/2023/10/viwell-logo-dark.png""
-                                      alt=""Viwell Logo""
-                                      style=""
-                                        width: 284px;
-                                        height: 64px;
-                                        display: none;
-                                      ""
-                                      width=""284""
-                                      height=""64""
-                                  />
-
-
-
-","1. The Gmail iOS app doesn't support media queries. To show the .img-dark image in the Gmail iOS app, you need to use a different method.
-One way is to use the data-src attribute to store the path to the image for dark mode. Then, you can use JavaScript to check the current viewing mode and display the appropriate image.
- data-src=""https://viwell.com/wp-content/uploads/2023/10/viwell-logo-dark.png""
-
-",Litmus
-"Facing white lines issue in certain email clients, such as Outlook, particularly when testing using Litmus on various Outlook and Outlook X-DPI clients. The problem is also observed during local testing, where the lines appear in different locations when zooming in or resizing the window. The white lines appear both horizontally and vertically, but the vertical lines are more prominent.
-The email HTML structure uses table tags, with images and text within them. While the issue specifically occurs in Outlook and related clients, other email clients seem to display the content correctly without any white lines. The same email, when opened in web browser using the line in the header ""View on web"", doesn't show any white lines (no lines on browsers, only specific email clients)
-Tested the email on different Outlook versions and resolutions, and the issue remains same.
-Screenshot with 2 lines at high zoom on Outlook 2019
-Single line at multiple zoom levels on Outlook 2019
-<meta name=""viewport"" content=""width=device-width, initial-scale=1"">
-<meta http-equiv=""Content-Type"" content=""text/html; charset=UTF-8"">
-
-<style type=""text/css"">
-  ReadMsgBody {
-    width: 100%;
-  }
-
-  .ExternalClass {
-    width: 100%;
-  }
-
-  table {
-    border-collapse: collapse;
-  }
-
-  .ExternalClass,
-  .ExternalClass p,
-  .ExternalClass span,
-  .ExternalClass font,
-  .ExternalClass td,
-  .ExternalClass div {
-    line-height: 100%;
-  }
-
-  body {
-    -webkit-text-size-adjust: 100%;
-    -ms-text-size-adjust: 100%;
-    margin: 0 !important;
-  }
-
-  p {
-    margin: 1em 0;
-  }
-
-  table td {
-    border-collapse: collapse;
-  }
-
-  img {
-    outline: 0;
-  }
-
-  a img {
-    border: none;
-  }
-
-  @-ms-viewport {
-    width: device-width;
-  }
-</style>
-<style type=""text/css"">
-  @media only screen and (max-width: 480px) {
-    .container {
-      width: 100% !important;
-    }
-
-    .footer {
-      width: auto !important;
-      margin-left: 0;
-    }
-
-    .mobile-hidden {
-      display: none !important;
-    }
-
-    .logo {
-      display: block !important;
-      padding: 0 !important;
-    }
-
-    img {
-      max-width: 100% !important;
-      height: auto !important;
-      max-height: auto !important;
-    }
-
-    .header img {
-      max-width: 100% !important;
-      height: auto !important;
-      max-height: auto !important;
-    }
-
-    .photo img {
-      width: 100% !important;
-      max-width: 100% !important;
-      height: auto !important;
-    }
-
-    .drop {
-      display: block !important;
-      width: 100% !important;
-      float: left;
-      clear: both;
-    }
-
-    .footerlogo {
-      display: block !important;
-      width: 100% !important;
-      padding-top: 15px;
-      float: left;
-      clear: both;
-    }
-
-    .nav4,
-    .nav5,
-    .nav6 {
-      display: none !important;
-    }
-
-    .tableBlock {
-      width: 100% !important;
-    }
-
-    .responsive-td {
-      width: 100% !important;
-      display: block !important;
-      padding: 0 !important;
-    }
-
-    .fluid,
-    .fluid-centered {
-      width: 100% !important;
-      max-width: 100% !important;
-      height: auto !important;
-      margin-left: auto !important;
-      margin-right: auto !important;
-    }
-
-    .fluid-centered {
-      margin-left: auto !important;
-      margin-right: auto !important;
-    }
-
-    /* MOBILE GLOBAL STYLES - DO NOT CHANGE */
-    body {
-      padding: 0px !important;
-      font-size: 16px !important;
-      line-height: 150% !important;
-    }
-
-    h1 {
-      font-size: 22px !important;
-      line-height: normal !important;
-    }
-
-    h2 {
-      font-size: 20px !important;
-      line-height: normal !important;
-    }
-
-    h3 {
-      font-size: 18px !important;
-      line-height: normal !important;
-    }
-
-    .buttonstyles {
-      font-family: arial, helvetica, sans-serif !important;
-      font-size: 16px !important;
-      color: #FFFFFF !important;
-      padding: 10px !important;
-    }
-
-    /* END OF MOBILE GLOBAL STYLES - DO NOT CHANGE */
-  }
-
-  @media only screen and (max-width: 640px) {
-    .container {
-      width: 100% !important;
-    }
-
-    .mobile-hidden {
-      display: none !important;
-    }
-
-    .logo {
-      display: block !important;
-      padding: 0 !important;
-    }
-
-    .photo img {
-      width: 100% !important;
-      height: auto !important;
-    }
-
-    .nav5,
-    .nav6 {
-      display: none !important;
-    }
-
-    .fluid,
-    .fluid-centered {
-      width: 100% !important;
-      max-width: 100% !important;
-      height: auto !important;
-      margin-left: auto !important;
-      margin-right: auto !important;
-    }
-
-    .fluid-centered {
-      margin-left: auto !important;
-      margin-right: auto !important;
-    }
-  }
-</style>
-<!--[if mso]>
-      <style type=""text/css"">
-          /* Begin Outlook Font Fix */
-          body, table, td {
-              font-family: Arial, Helvetica, sans-serif ;
-              font-size:16px;
-              color:#000000;
-              line-height:1;
-          }
-          /* End Outlook Font Fix */
-      </style>
-    <![endif]-->
-
-
-
-<div style=""font-size:0; line-height:0;"">
-</div>
-<table width=""100%"" border=""0"" cellpadding=""0"" cellspacing=""0"" align=""center"">
-
-  <tr>
-    <td valign=""top"">
-      <!--[if mso]>
-                    <table align=""center"" border=""0"" cellpadding=""0"" cellspacing=""0"" class=""templateColumns1""
-                        role=""presentation"" style=""width:600px;"" width=""600"">
-                <![endif]-->
-      <table align=""center"" border=""0"" cellpadding=""0"" cellspacing=""0"" class=""templateColumns1"" role=""presentation"" style=""width:600px;"" width=""600"">
-        <!-- Footer -->
-
-        <tr>
-          <td align=""center"" class=""templateColumnContainer"" style=""width:100%;"" valign=""top"">
-            <table align=""center"" bgcolor=""#c6dccf"" border=""0"" cellpadding=""0"" cellspacing=""0"" class=""footer-dm"" role=""presentation"" width=""100%"">
-
-              <tr>
-                <td>
-                  <table border=""0"" cellpadding=""0"" cellspacing=""0"" role=""presentation"" width=""100%"">
-
-                    <tr>
-                      <td class=""one-column"" style=""padding-top:0;padding-bottom:0;padding-right:0;padding-left:0;"">
-                        <table border=""0"" cellpadding=""0"" cellspacing=""0"" role=""presentation"" width=""100%"">
-                          <tr>
-                            <td align=""center"" style=""width:100%;"" valign=""middle"">
-
-
-                              <table border=""0"" cellpadding=""0"" cellspacing=""0"" role=""presentation"" width=""100%"" style=""border: 1px solid #c6dccf;"">
-                                <tr>
-                                  <td align=""left"" height=""30"" style=""font-size:1px;mso-line-height-rule:exactly;line-height:0px;border:0;height:0px;"">
-                                  </td>
-                                </tr>
-                                <tr>
-                                  <td class=""footer"" style=""color:#1a5252;text-align:center;font-family:Muli, Arial, sans-serif;font-weight:400;font-size:11px;mso-line-height-rule:exactly;line-height:20px;letter-spacing:0.07em;padding: 0px"">
-                                    <a alias="""" conversion=""true"" data-linkto=""https://"" href=""https://www.test.com"" style=""color:#1a5252;text-decoration:none;"" target=""_blank"" title=""test"">Test |
-                                      UK: 00000000 | US: 00000000
-                                  </a></td>
-                                </tr>
-                              </table>
-
-                            </td>
-                          </tr>
-                        </table>
-                      </td>
-                    </tr>
-                  </table>
-                </td>
-              </tr>
-            </table>
-          </td>
-        </tr>
-      </table>
-      <table align=""center"" border=""0"" cellpadding=""0"" cellspacing=""0"" class=""templateColumns1"" role=""presentation"" style=""width:600px; border-bottom: 1px solid #c6dccf"" width=""600"">
-        <!-- Footer -->
-        <tr>
-          <td align=""center"" class=""templateColumnContainer"" style=""width:100%;"" valign=""top"">
-            <table align=""center"" bgcolor=""#c6dccf"" border=""0"" cellpadding=""0"" cellspacing=""0"" class=""footer-dm"" role=""presentation"" width=""100%"">
-
-              <tr>
-                <td>
-                  <table border=""0"" cellpadding=""0"" cellspacing=""0"" role=""presentation"" width=""100%"">
-
-                    <tr>
-                      <td class=""one-column"" style=""padding-top:0;padding-bottom:0;padding-right:0;padding-left:0;"">
-                        <table border=""0"" cellpadding=""0"" cellspacing=""0"" role=""presentation"" width=""100%"">
-
-
-                          <tr>
-                            <td align=""center"" valign=""middle"">
-                              <table border=""0"" cellpadding=""0"" cellspacing=""0"" width=""100%"" role=""presentation"">
-
-                                <tr>
-                                  <td align=""center"" class=""templateColumnContainer"" valign=""middle"" style=""width: 250px!important;"">
-                                    <!--[if mso]>
-                                      <table border=""0"" cellpadding=""0"" cellspacing=""0"" role=""presentation"" width=""100%"">
-                                        <tr align=""center"">
-                                          <td align=""center"" width=""90"" style=""padding: 0;"">
-                                            <a href=""test"" target=""_blank"" title=""Test"">
-                                              <img src=""https://upload.wikimedia.org/wikipedia/commons/thumb/1/11/Test-Logo.svg/783px-Test-Logo.svg.png"" alt=""test"" width=""140"" height=""60"" style=""border: 0; display: block;"">
-                                            </a>
-                                          </td>
-                                        </tr>
-                                      </table>
-                                    <![endif]-->
-                                    <!--[if !mso]>-->
-                                    <a href=""test"" target=""_blank"" title=""test"">
-                                      <img src=""https://upload.wikimedia.org/wikipedia/commons/thumb/1/11/Test-Logo.svg/783px-Test-Logo.svg.png"" alt=""test"" width=""140"" height=""60"" style=""border: 0; display: block;"">
-                                    </a>
-                                    <!--<![endif]-->
-
-                                  </td>
-                                  <td align=""center"" class=""templateColumnContainer"" style=""width:50%;"" valign=""middle"">
-                                    <!--[if mso]>
-                                    <table border=""0"" cellpadding=""0"" cellspacing=""0"" role=""presentation"" width=""100%"">
-                                      <tr>
-                                        <td align=""center"" style=""font-family: Verdana, Arial, sans-serif; font-size: 14px; font-weight: 700; color: #1a5252; text-align: center; mso-line-height-rule: exactly; line-height: 20px; letter-spacing: 0.07em;"">
-                                          <span style=""color: #1a5252;"">
-                                          </span>
-                                        </td>
-                                      </tr>
-                                      <tr>
-                                        <td align=""center"" style=""font-family: Verdana, Arial, sans-serif; font-size: 14px; font-weight: 700; color: #1a5252; text-align: center; mso-line-height-rule: exactly; line-height: 20px; letter-spacing: 0.07em;"">
-                                          <span style=""color: #1a5252;"">
-                                          </span>
-                                        </td>
-                                      </tr>
-                                      <tr>
-                                        <td align=""center"" style=""font-family: Verdana, Arial, sans-serif; font-size: 14px; font-weight: 700; color: #1a5252; text-align: center; mso-line-height-rule: exactly; line-height: 20px; letter-spacing: 0.07em;"">
-                                          <span style=""color: #1a5252;"">
-                                            <a href="""" style=""color: #1a5252; text-decoration: none;"" title=""test"">test</a>
-                                          </span>
-                                        </td>
-                                      </tr>
-                                      <tr>
-                                        <td align=""center"" style=""font-family: Verdana, Arial, sans-serif; font-size: 15px; font-weight: 700; color: #1a5252; text-align: center; mso-line-height-rule: exactly; line-height: 20px; letter-spacing: 0.07em;"">
-                                          <span style=""color: #1a5252;"">
-                                            <a href=""tel:+0011223456"" style=""color: #1a5252; text-decoration: none;"" title=""+0011223456"">+0011223456</a>
-                                          </span>
-                                        </td>
-                                      </tr>
-                                      <tr>
-                                        <td align=""left"" colspan=""3"" height=""20"" style=""font-size: 1px; mso-line-height-rule: exactly; line-height: 0px; border: 0; height: 10px;""></td>
-                                      </tr>
-                                    </table>
-                                  <![endif]-->
-                                    <!--[if !mso]>-->
-                                    <table border=""0"" cellpadding=""0"" cellspacing=""0"" role=""presentation"" width=""100%"">
-                                      <tr>
-                                        <td class=""footer2"" style=""color: #1a5252; text-align: center; font-family: Verdana, Arial, sans-serif; font-size: 14px; font-weight: 700; mso-line-height-rule: exactly; line-height: 20px; letter-spacing: 0.07em;"">
-                                          <span class=""contact"">
-                                            <a href="""" style=""color: #1a5252; text-decoration: none;"" title=""Test"">Test</a>
-                                          </span>
-                                        </td>
-                                      </tr>
-                                      <tr>
-                                        <td class=""footer2"" style=""color: #1a5252; text-align: center; font-family: Verdana, Arial, sans-serif; font-size: 14px; font-weight: 700; mso-line-height-rule: exactly; line-height: 20px; letter-spacing: 0.07em;"">
-                                          <span class=""contact"">
-                                            <a href="""" style=""color: #1a5252; text-decoration: none;"" title=""test"">test</a>
-                                          </span>
-                                        </td>
-                                      </tr>
-                                      <tr>
-                                        <td class=""footer2"" style=""color: #1a5252; text-align: center; font-family: Verdana, Arial, sans-serif; font-size: 14px; font-weight: 700; mso-line-height-rule: exactly; line-height: 20px; letter-spacing: 0.07em;"">
-                                          <span class=""contact"">
-                                            <a href="""" style=""color: #1a5252; text-decoration: none;"" title=""test"">test</a>
-                                          </span>
-                                        </td>
-                                      </tr>
-                                      <tr>
-                                        <td class=""footer2"" style=""color: #1a5252; text-align: center; font-family: Verdana, Arial, sans-serif; font-size: 15px; font-weight: 700; mso-line-height-rule: exactly; line-height: 20px; letter-spacing: 0.07em;"">
-                                          <span class=""contact"">
-                                            <a href=""tel:+0011223456"" style=""color: #1a5252; text-decoration: none;"" title=""+0011223456"">+0011223456</a>
-                                          </span>
-                                        </td>
-                                      </tr>
-                                      <tr>
-                                        <td align=""left"" colspan=""3"" height=""20"" style=""font-size: 1px; mso-line-height-rule: exactly; line-height: 0px; border: 0; height: 10px;"">
-                                        </td>
-                                      </tr>
-                                    </table>
-                                    <!--<![endif]-->
-
-                                  </td>
-                                  <td align=""center"" class=""templateColumnContainer"" style=""width:25%;"" valign=""middle"">
-                                    <!--[if mso]>
-                                    <table border=""0"" cellpadding=""0"" cellspacing=""0"" role=""presentation"" width=""100%"">
-                                    <![endif]-->
-                                  </td>
-                                </tr>
-                                <tr align=""center"" width=""100%"">
-                                  <!--[if mso]>
-                                <td align=""center"" style=""width:90px;max-width:90px;"" width=""90"">
-                                <![endif]-->
-                                                              <!--[if !mso]>
-                                <td align=""center"" style=""width:100px;max-width:100px;"" width=""90"">
-                                <![endif]-->
-                                </tr>
-                              </table>
-                              <table align=""center"" border=""0"" cellpadding=""0"" cellspacing=""0"" style=""width:128px;max-width:128px;"" width=""90"">
-                                <tr align=""center"" width=""100%"">
-                                  <!--[if mso]>
-                                  <td align=""center"" style=""width:100px;max-width:100px;"" width=""90"">
-                                  <![endif]-->
-                                                          <!--[if !mso]>
-                                  <td align=""center"" style=""width:127px;max-width:127px;"" width=""90"">
-                                  <![endif]-->
-                                </tr>
-                              </table>
-                            </td>
-                          </tr>
-                        </table>
-                      </td>
-                    </tr>
-                    <!--[if mso]>
-                    </table>
-                    <![endif]-->
-
-
-
-                  </table>
-                </td>
-              </tr>
-              <tr>
-                <td align=""left"" class=""bit-smaller2"" height=""15"" style=""font-size:1px;mso-line-height-rule:exactly;line-height:0px;border:none;height:15px;"" valign=""top"">
-                </td>
-              </tr>
-            </table>
-          </td>
-        </tr>
-      </table>
-    </td>
-  </tr>
-</table>
-
-I tried changing line heights and image heights to even values, which makes the lines disappear (or moves them to outer areas) in some email clients, but then they start appearing more prominently in other clients.
-An existing StackOverflow question has an answer, but that is specifically for the case where Outlook's zoom is 100% and Windows' display scaling factor is a multiple of 25%.
-I expect the white lines not to appear at any zoom level (this affects the X-DPI clients on Litmus).
-Any idea?
-","1. When you assign a background color which is different than the ""body"" background color you have to assign that background color to almost all the child elements rather than applying it only to the container table/div to avoid that issue on Outlook.
-If you don't follow it then in some cases the body background color will overwrite the background color of some empty areas.
-Let's give an example with your email:
-There is a table cell right after ""footer2"" classes end. That table cell has a height of 10px but it is empty. I understand it is a spacer but this issue occurs in these empty areas. So you have to assign the background color to it as well to fix the issue.
-To understand it better you can give a different body background color then you will see it is that color not the ""white"" color always.
-I have coded a lot of html emails for my clients and this kind of Outlook issues are really annoying. But I have successfully overcome from this one :)
-",Litmus
-"I have this module for a responsive email design that inserts a little tiny amount of whitespace below my images and I can't figure out why. It doesn't seem to matter what the proportions or size of the image are. Always the same amount of space.
-I inlined all of my CSS - the class declarations are only for @media queries. I've added padding-bottom: 0; border:0; border-collapse:collapse; to anything I could think of that contains that image in some way, nothing seems to even change the result in any way...
-Here's a screenshot of the problem. You can see the small whitespace below her photo.
-
-
-
-<table style=""background-color:grey"">
-  <tr>
-    <td style=""padding-left: 25px; padding-right:25px; padding:bottom:0px!important; border:0!impotant;"">
-      <table class=""oneup50"" align=""center"" valign=""middle"" style=""width: 100%; vertical-align: middle; background-color: #FFFFFF; border:0; padding-bottom: 0; border-collapse: collapse!important;"" role=""presentation"" dir=""ltr"">
-        <tr>
-          <td class=""stackB"" style=""padding-bottom:0!important; display: inline-block!important; border:0!important; border-collapse:collapse!important;"">
-            <table>
-              <tr>
-                <td style=""padding: 0;"">
-                  <a href=""URL""><img width=""287"" class=""imgStack"" style=""border: 0;"" src=""https://i.imgur.com/I1d9YPY.png""></a>
-                </td>
-              </tr>
-            </table>
-          </td>
-          <td class=""stack"" valign=""middle"" align=""center"" width=""263"" style=""height:219px; border:0; padding-bottom: 0; display: inline-block; vertical-align: middle!important;"">
-            <table role=""presentation"" valign=""middle"" style=""border: 0; vertical-align: middle; display:inline-block;"">
-              <tr>
-                <td valign=""middle"" style=""width:47%; vertical-align: middle; padding:0;"" height=""218"">
-                  <table style=""display:inline-block;"">
-                    <tr>
-                      <td style=""padding: 0;"">
-                        <center>
-                          <h2 style=""padding: 0 15% 0 15%; margin:0; font-family: 'Arial'; font-size: 12pt; color: #002855; line-height: 14pt; font-weight: bold; text-align: middle;"">
-                            How mindful are you?
-                          </h2>
-                          <p style=""padding: 0 15% 10px 15%; font-family: 'Arial'; font-size: 10pt; line-height: 12pt; color: #63666a;"">
-                            See if your habits and attitudes are helping you be more present and purposeful.
-                          </p>
-                          <a class=""button"" rel=""noopener"" target=""_blank"" href=""URL"" style=""background-color: #1a7ead; font-size: 12px; font-family: Helvetica, Arial, sans-serif; font-weight: bold; text-decoration: none; padding: 14px 40px; color: #ffffff; display: inline-block; mso-padding-alt: 0;"">
-                            <!--[if mso]>
-    <i style=""letter-spacing: 25px; mso-font-width: -100%; mso-text-raise: 30pt;"">&nbsp;</i>
-    <![endif]-->
-                            <span style=""mso-text-raise: 15pt;"">Take the quiz</span>
-                            <!--[if mso]>
-    <i style=""letter-spacing: 25px; mso-font-width: -100%;"">&nbsp;</i>
-    <![endif]-->
-                          </a>
-                        </center>
-                      </td>
-                    </tr>
-                  </table>
-                </td>
-              </tr>
-            </table>
-          </td>
-        </tr>
-      </table>
-    </td>
-  </tr>
-</table>
-
-
-
-","1. I figured it out thanks to this post: Image inside div has extra space below the image
-Setting the image ITSELF to display: block fixed it!
-
-2. This is caused by the fact that an image is an inline element by default.
-You can change the image to block, <img style=""display:block;"" ... />, or, if you still want to be able to center the image, use vertical-align: <img style=""vertical-align:top;"" ... /> (top, middle, or bottom - it actually doesn't matter which).
-That would allow you to, for example, keep your image centered on mobile screens, while still having it left aligned on desktop.
-<p style=""text-align:center;""><img style=""vertical-align:top;"" ... /></p>
-
-",Litmus
-"I'm having an issue where font sizes I'm applying to my email in media queries are being inconsistently applied. In some cases, it works fine, but in others it just gets ignored. Has anybody else encountered this or can see what I might be doing wrong?
-This is the CSS:
-     .stack {
-         display:block!important;
-         width:100%!important;
-         max-width:inherit;
-         height:auto;
-    }
-     .stackB {
-         max-width:100%!important;
-         display:block!important;
-         width:100%!important;
-         margin: auto;
-    }
-     .imgStack {
-         width:100%!important;
-         padding-right:0;
-         padding-left:0;
-    }
-     .mobSp {
-         display:block!important;
-    }
-     .w100p {
-         width:100%!important;
-         min-width: 350px;
-    }
-     .imgFull {
-         width:100%!important;
-         height:auto!important;
-    }
-     .rPad-0 {
-         padding-right:0!important;
-    }
-     .lPad-0 {
-         padding-left:0!important;
-    }
-     .copy2 {
-         padding:0px 10% 0px 10%;
-         width:100% 
-    }
-     .banner {
-         width:100%;
-         padding-left:20%;
-         padding-right:20%;
-    }
-     .hero{
-         width: 90%!important;
-    }
-     .headline{
-         width:92%;
-    }
-     .oneupimg {
-         width: 92%;
-    }
-     .oneupcopy{
-         width: 92%;
-    }
-     .oneup50{
-         width: 92%;
-    }
-     h2 {
-         font-family: 'Arial';
-         font-size: 16pt!important;
-         color: #002855;
-         padding: 0;
-         line-height: 16pt!important;
-         font-weight: bold;
-    }
-     h3 {
-         font-family: 'Arial';
-         font-size: 15pt!important;
-         color: #002855;
-         padding: 0;
-         line-height: 18pt!important;
-         font-weight: bold;
-    }
-     p a {
-         color: #1a7ead;
-         font-size:12pt!important;
-    }
-     p {
-         font-size: 12pt!important;
-    }
-}
- @media screen and (max-width:400px){
-     .stack {
-         display:block!important;
-         width:100%!important;
-         max-width:inherit;
-         height:auto;
-    }
-     .stackB {
-         max-width:100%!important;
-         display:block!important;
-         width:100%!important;
-         margin: auto;
-    }
-     .imgStack {
-         width:100%!important;
-         padding-right:0;
-         padding-left:0;
-    }
-     .mobSp {
-         display:block!important;
-    }
-     .w100p {
-         width:100%!important;
-         min-width: 350px;
-    }
-     .imgFull {
-         width:100%!important;
-         height:auto!important;
-    }
-     .rPad-0 {
-         padding-right:0!important;
-    }
-     .lPad-0 {
-         padding-left:0!important;
-    }
-     .copy2 {
-         padding:0px 10% 0px 10%;
-         width:100% 
-    }
-     .banner {
-         width:100%;
-         padding-left:20%;
-         padding-right:20%;
-    }
-     .hero{
-         width: 90%!important;
-    }
-     .headline{
-         width:92%;
-    }
-     .oneupimg {
-         width: 92%;
-    }
-     .oneupcopy{
-         width: 92%;
-    }
-     .oneup50{
-         width: 92%;
-    }
-     h2 {
-         font-family: 'Arial';
-         font-size: 16pt!important;
-         color: #002855;
-         padding: 0;
-         line-height: 16pt!important;
-         font-weight: bold;
-    }
-     h3 {
-         font-family: 'Arial';
-         font-size: 15pt!important;
-         color: #002855;
-         padding: 0;
-         line-height: 18pt!important;
-         font-weight: bold;
-    }
-     p a {
-         color: #1a7ead;
-         font-size:12pt!important;
-    }
-     p {
-         font-size: 12pt!important;
-    }
-}
- table {
-     border-spacing: 0;
-     border: 0;
-}
- td {
-     padding: 0;
-}
- p {
-     font-family: 'Arial';
-     font-size: 10pt;
-     line-height: 12pt;
-     color: #63666a;
-}
- img {
-     border: 0;
-}
- h1 {
-     font-family: 'Georgia';
-     font-size: 20pt;
-     color: #002855;
-     padding: 0;
-     line-height: 20pt;
-}
- h2 {
-     font-family: 'Arial';
-     font-size: 12pt;
-     color: #002855;
-     padding: 0;
-     line-height: 12pt;
-     font-weight: bold;
-}
- h3 {
-     font-family: 'Arial';
-     font-size: 14px;
-     color: #002855;
-     padding: 0;
-     line-height: 23px;
-     font-weight: bold;
-}
- p a {
-     color: #1a7ead;
-}
-
-This module works:
-                <!-- CTA banner module -->
-                    <tr>
-                        <td>
-                            <table width=""600"" role=""presentation"" align=""center"" valign=""middle"" bgcolor=""#002855"" style=""background-color: #002855; padding: 10px 0px 10px 0px;"">
-                                <tr>
-                                                                <!--[if (gte mso 9) | (IE)]>
-                                    <tr height=""10px style=""border:0;"" ></tr>
-                                    <! [endif] -->
-                                    <td class=""banner"" width=""600px"" style=""padding: 0px 11% 20px 11%;"">
-                                        <h1 align=""center"" style=""color:#41b6e6; font-size: 20px; padding-bottom: 0px;"">
-                                            Breakfast is just the start.
-                                        </h1>
-                                        <p align=""center"" style=""color: #FFFFFF; padding-bottom: 10px;"">Copy copy copy copy copy copy</p>
-                                        <center><a rel=""noopener"" target=""_blank"" href=""URL"" style=""background-color: #FDC661; font-size: 14px; font-family: Helvetica, Arial, sans-serif; font-weight: bold; text-decoration: none; padding: 14px 40px; color: #002855; display: inline-block; mso-padding-alt: 0;"">
-                                            <!--[if mso]>
-<i style=""letter-spacing: 25px; mso-font-width: -100%; mso-text-raise: 30pt;"">&nbsp;</i>
-<![endif]-->
-                                            <span style=""mso-text-raise: 15pt;"">Connect to well-being support</span>
-                                            <!--[if mso]>
-<i style=""letter-spacing: 25px; mso-font-width: -100%;"">&nbsp;</i>
-<![endif]-->
-                                        </a></center>
-                                    </td>   
-                                </tr>
-                            </table>
-                        </td>
-                    </tr>   
-                                            <!--[if (gte mso 9) | (IE)]>
-                                    <tr height=""10px style=""border:0;"" ></tr>
-                                    <! [endif] -->
-                <!-- end CTA banner module -->  
-
-This module does NOT work:
-                <!-- half image 1-up -->
-                <tr>
-                        <td>
-                            <table class=""oneup50"" align=""center"" valign=""middle"" style=""width: 550px; vertical-align: middle; background-color: #FFFFFF; border:0; padding-bottom: 0;"" role=""presentation"" dir=""ltr"">
-                                <tr>
-                                    <td class=""stackB"" style=""padding-bottom:0; display: inline-block;"">
-                                        <table>
-                                            <tr>
-                                                <td>
-                                                    <a href=""URL""><img class=""imgStack"" style=""width:287px; border: 0;"" src=""image"" alt=""Breakfast sandwich featuring rustic English Muffin and many toppings""></a>
-                                                </td>
-                                            </tr>
-                                        </table>
-                                    </td>
-                                    <td class=""stack"" valign=""middle"" align=""center"" width=""263"" style=""height:219px; border:0; padding-bottom: 0; display: inline-block; vertical-align: middle!important;"">
-                                        <table role=""presentation"" valign=""middle"" style=""border: 0; vertical-align: middle; display:inline-block;"">
-                                            <tr>
-                                                <td valign=""middle"" style=""height: 218px; vertical-align: middle;"" height=""218"">
-                                                    <table style=""display:inline-block;"">
-                                                        <tr>
-                                                            <td>
-                                                            <center>
-                                            <h2 style=""padding: 0 20% 0 20%; line-height: 12pt; font-weight: bold;"">
-                                            Make-ahead Breakfast Sandwich
-                                            </h2>
-                                            <p style=""padding: 0 20% 10px 20%; line-height: 12pt;"">
-                                                A hearty beginning for busy weekday mornings.
-                                            </p>
-                                            <a rel=""noopener"" target=""_blank"" href=""URL"" style=""background-color: #a7a8aa; font-size: 12px; font-family: Helvetica, Arial, sans-serif; font-weight: bold; text-decoration: none; padding: 14px 40px; color: #ffffff; display: inline-block; mso-padding-alt: 0;"">
-    <!--[if mso]>
-    <i style=""letter-spacing: 25px; mso-font-width: -100%; mso-text-raise: 30pt;"">&nbsp;</i>
-    <![endif]-->
-    <span style=""mso-text-raise: 15pt;"">See the recipe</span>
-    <!--[if mso]>
-    <i style=""letter-spacing: 25px; mso-font-width: -100%;"">&nbsp;</i>
-    <![endif]-->
-</a>
-                                    </center>
-                                                            </td>
-                                                        </tr>
-                                                    </table>
-                                                </td>
-                                            </tr>
-                                        </table>
-                                    </td>
-                                </tr>
-                            </table>
-                        </td>
-                    </tr>       
-                <!-- end half image 1-up -->
-
-I thought it might be something to do with my inline styling, so I removed it on some of the p tags, but that didn't seem to do anything. Then I added !important, which had some effect but not everywhere. Renders in Litmus also show that a module may work in one email client but not another - it's not the same modules failing in every client. Let me know if any of you have dealt with this before or found a fix, thanks!
-","1. It's a very complex and specific template here so I can't solve it all for you.
-When you get this sort of issue, it's usually because there is an element that is too wide for the screen, and the email software is automatically shrinking that section to fit.
-Normally, for the best stacking/responsive behaviour, we would use inline-block rather than block on the main container.
-You would need to inline as much as possible, only relying on a <style> block where you want to progressively enhance.
-Make sure images are responsive. You can set them inline to style=""width:100%;height:auto;"" for example, or perhaps style=""max-width:100%"" will be okay for your images.
-Table widths are another to watch out for. I'm not sure what your outer structure is to comment on that, but you should be using max-width:600px or similar, and a fallback for Outlook. Inner tables should normally be using percentages.
-",Litmus
-"I'm integrating the AppDynamics Python Agent into a FastAPI project for monitoring purposes and have encountered a bit of a snag regarding log verbosity in my stdout. I'm launching my FastAPI app with the following command to include the AppDynamics agent:
-pyagent run -c appdynamics.cfg uvicorn my_app:app --reload
-
-My goal is to reduce the verbosity of the logs from both the AppDynamics agent and the proxy that are output to stdout, aiming to keep my console output clean and focused on more critical issues.
-My module versions:
-$ pip freeze | grep appdy
-appdynamics==23.10.0.6327
-appdynamics-bindeps-linux-x64==23.10.0
-appdynamics-proxysupport-linux-x64==11.64.3
-
-Here's the content of my appdynamics.cfg configuration file:
-[agent]
-app = my-app
-tier = my-tier
-node = teste-local-01
-
-[controller]
-host = my-controller.saas.appdynamics.com
-port = 443
-ssl = true
-account = my-account
-accesskey = my-key
-
-[log]
-level = warning
-debugging = off
-
-I attempted to decrease the log verbosity further by modifying the log4j.xml file for the proxy to set the logging level to WARNING. However, this change didn't have the effect I was hoping for. The log4j.xml file I adjusted is located at:
-/tmp/appd/lib/cp311-cp311-63ff661bc175896c1717899ca23edc8f5fa87629d9e3bcd02cf4303ea4836f9f/site-packages/appdynamics_bindeps/proxy/conf/logging/log4j.xml
-
-Here are the adjustments I made to the log4j.xml:
-    <appender class=""com.singularity.util.org.apache.log4j.ConsoleAppender"" name=""ConsoleAppender"">
-        <layout class=""com.singularity.util.org.apache.log4j.PatternLayout"">
-            <param name=""ConversionPattern"" value=""%d{ABSOLUTE} %5p [%t] %c{1} - %m%n"" />
-        </layout>
-        <filter class=""com.singularity.util.org.apache.log4j.varia.LevelRangeFilter"">
-            <param name=""LevelMax"" value=""FATAL"" />
-            <param name=""LevelMin"" value=""WARNING"" />
-        </filter>
-
-Despite these efforts, I'm still seeing a high volume of logs from both the agent and proxy. Could anyone provide guidance or suggestions on how to effectively lower the log output to stdout for both the AppDynamics Python Agent and its proxy? Any tips on ensuring my changes to log4j.xml are correctly applied would also be greatly appreciated.
-Thank you in advance for your help!
-Example of logging messages I would like to remove from my stdout:
-2024-03-23 11:15:28,409 [INFO] appdynamics.proxy.watchdog <22759>: Started watchdog with pid=22759
-2024-03-23 11:15:28,409 [INFO] appdynamics.proxy.watchdog <22759>: Started watchdog with pid=22759
-...
-[AD Thread Pool-ProxyControlReq0] Sat Mar 23 11:15:51 BRT 2024[DEBUG]: JavaAgent - Setting AgentClassLoader as Context ClassLoader
-[AD Thread Pool-ProxyControlReq0] Sat Mar 23 11:15:52 BRT 2024[INFO]: JavaAgent - Low Entropy Mode: Attempting to swap to non-blocking PRNG algorithm
-[AD Thread Pool-ProxyControlReq0] Sat Mar 23 11:15:52 BRT 2024[INFO]: JavaAgent - UUIDPool size is 10
-Agent conf directory set to [/home/wsl/.pyenv/versions/3.11.6/lib/python3.11/site-packages/appdynamics_bindeps/proxy/conf]
-...
-11:15:52,167  INFO [AD Thread Pool-ProxyControlReq0] BusinessTransactions - Starting BT Logs at Sat Mar 23 11:15:52 BRT 2024
-11:15:52,168  INFO [AD Thread Pool-ProxyControlReq0] BusinessTransactions - ###########################################################
-11:15:52,169  INFO [AD Thread Pool-ProxyControlReq0] BusinessTransactions - Using Proxy Version [Python Agent v23.10.0.6327 (proxy v23.10.0.35234) compatible with 4.5.0.21130 Python Version 3.11.6]
-11:15:52,169  INFO [AD Thread Pool-ProxyControlReq0] JavaAgent - Logging set up for log4j2
-...
-11:15:52,965  INFO [AD Thread Pool-ProxyControlReq0] JDBCConfiguration - Setting normalizePreparedStatements to true
-11:15:52,965  INFO [AD Thread Pool-ProxyControlReq0] CallGraphConfigHandler - Call Graph Config Changed  callgraph-granularity-in-ms  Value -null
-
-","1. I came across a workaround that was suggested in an unofficial capacity by someone at AppDynamics during their local lab explorations. While this solution isn't officially supported by AppDynamics, it has proven to be effective for adjusting the log levels for both the Proxy and the Watchdog components within my AppDynamics setup. I'd like to share the steps involved, but please proceed with caution and understand that this is not a sanctioned solution.
-I recommend changing only the log4j2.xml file, because the proxy messages appear to be responsible for almost 99% of the log messages.
-Here's a summary of the steps:
-
-Proxy Log Level: The log4j2.xml file controls this. You can find it within the appdynamics_bindeps module. For example, in my WSL setup, it's located at /home/wsl/.pyenv/versions/3.11.6/lib/python3.11/site-packages/appdynamics_bindeps/proxy/conf/logging/log4j2.xml. In the Docker image python:3.9, the path is /usr/local/lib/python3.9/site-packages/appdynamics_bindeps/proxy/conf/logging/log4j2.xml. Modify the log level of the seven <AsyncLogger> items within the <Loggers> section to one of the following: debug, info, warn, error, or fatal.
-
-Watch Dog Log Level: This can be adjusted in the proxy.py file found within the appdynamics Python module. For example, in my WSL setup, it's located at /home/wsl/.pyenv/versions/3.11.6/lib/python3.11/site-packages/appdynamics/scripts/pyagent/commands/proxy.py. In the Docker image python:3.9, the path is /usr/local/lib/python3.9/site-packages/appdynamics/scripts/pyagent/commands/proxy.py. You will need to hardcode the log level in the configure_proxy_logger and configure_watchdog_logger functions by changing the level variable.
-
-
-My versions
-$ pip freeze | grep appdynamics
-appdynamics==24.2.0.6567
-appdynamics-bindeps-linux-x64==24.2.0
-appdynamics-proxysupport-linux-x64==11.68.3
-
-Original files
-log4j2.xml
-<Loggers>
-    <!-- Modify each <AsyncLogger> level as needed -->
-        <AsyncLogger name=""com.singularity"" level=""info"" additivity=""false"">
-            <AppenderRef ref=""Default""/>
-            <AppenderRef ref=""RESTAppender""/>
-            <AppenderRef ref=""Console""/>
-        </AsyncLogger>
-</Loggers>
-
-proxy.py
-def configure_proxy_logger(debug):
-    logger = logging.getLogger('appdynamics.proxy')
-    level = logging.DEBUG if debug else logging.INFO
-    pass
-
-def configure_watchdog_logger(debug):
-    logger = logging.getLogger('appdynamics.proxy')
-    level = logging.DEBUG if debug else logging.INFO
-    pass
-
-My Script to create environment variables to log4j2.xml and proxy.py
-update_appdynamics_log_level.sh
-#!/bin/sh
-
-# Check if PYENV_ROOT is not set
-if [ -z ""$PYENV_ROOT"" ]; then
-    # If PYENV_ROOT is not set, then set it to the default value
-    export PYENV_ROOT=""/usr/local/lib""
-    echo ""PYENV_ROOT was not set. Setting it to default: $PYENV_ROOT""
-else
-    echo ""PYENV_ROOT is already set to: $PYENV_ROOT""
-fi
-
-echo ""=========================== log4j2 - appdynamics_bindeps module =========================""
-
-# Find the appdynamics_bindeps directory
-APP_APPD_BINDEPS_DIR=$(find ""$PYENV_ROOT"" -type d -name ""appdynamics_bindeps"" -print -quit)
-
-if [ -z ""$APP_APPD_BINDEPS_DIR"" ]; then
-  echo ""Error: appdynamics_bindeps directory not found.""
-  exit 1
-fi
-
-echo ""Found appdynamics_bindeps directory at $APP_APPD_BINDEPS_DIR""
-
-# Find the log4j2.xml file within the appdynamics_bindeps directory
-APP_LOG4J2_FILE=$(find ""$APP_APPD_BINDEPS_DIR"" -type f -name ""log4j2.xml"" -print -quit)
-
-if [ -z ""$APP_LOG4J2_FILE"" ]; then
-  echo ""Error: log4j2.xml file not found within the appdynamics_bindeps directory.""
-  exit 1
-fi
-
-echo ""Found log4j2.xml file at $APP_LOG4J2_FILE""
-
-# Modify the log level in the log4j2.xml file
-echo ""Modifying log level in log4j2.xml file""
-sed -i 's/level=""info""/level=""${env:APP_APPD_LOG4J2_LOG_LEVEL:-info}""/g' ""$APP_LOG4J2_FILE""
-
-echo ""log4j2.xml file modified successfully.""
-
-echo ""=========================== watchdog - appdynamics module ===============================""
-
-# Find the appdynamics directory
-APP_APPD_DIR=$(find ""$PYENV_ROOT"" -type d -name ""appdynamics"" -print -quit)
-
-if [ -z ""$APP_APPD_DIR"" ]; then
-  echo ""Error: appdynamics directory not found.""
-  exit 1
-fi
-
-echo ""Found appdynamics directory at $APP_APPD_DIR""
-
-# Find the proxy.py file within the appdynamics directory
-APP_PROXY_PY_FILE=$(find ""$APP_APPD_DIR"" -type f -name ""proxy.py"" -print -quit)
-
-if [ -z ""$APP_PROXY_PY_FILE"" ]; then
-  echo ""Error: proxy.py file not found within the appdynamics directory.""
-  exit 1
-fi
-
-echo ""Found proxy.py file at $APP_PROXY_PY_FILE""
-
-# Modify the log level in the proxy.py file
-echo ""Modifying log level in proxy.py file""
-sed -i 's/logging.DEBUG if debug else logging.INFO/os.getenv(""APP_APPD_WATCHDOG_LOG_LEVEL"", ""info"").upper()/g' ""$APP_PROXY_PY_FILE""
-
-
-echo ""proxy.py file modified successfully.""
-
-
-Dockerfile
-Dockerfile to run pyagent with FastAPI and run this script
-# Use a specific version of the python image
-FROM python:3.9
-
-# Set the working directory in the container
-WORKDIR /app
-
-# First, copy only the requirements file and install dependencies to leverage Docker cache
-COPY requirements.txt ./
-RUN python3 -m pip install --no-cache-dir -r requirements.txt
-
-# Now copy the rest of the application to the container
-COPY . .
-
-# Make the update_appdynamics_log_level.sh executable and run it
-RUN chmod +x update_appdynamics_log_level.sh && \
-    ./update_appdynamics_log_level.sh 
-
-# Set environment variables
-ENV APP_APPD_LOG4J2_LOG_LEVEL=""warn"" \
-    APP_APPD_WATCHDOG_LOG_LEVEL=""warn""
-
-EXPOSE 8000
-
-# Command to run the FastAPI application with pyagent
-CMD [""pyagent"", ""run"", ""uvicorn"", ""main:app"", ""--proxy-headers"", ""--host"",""0.0.0.0"", ""--port"",""8000""]
-
-Files changed by the script
-log4j2.xml
-<Loggers>
-    <!-- Modify each <AsyncLogger> level as needed -->
-        <AsyncLogger name=""com.singularity"" level=""${env:APP_APPD_LOG4J2_LOG_LEVEL:-info}"" additivity=""false"">
-            <AppenderRef ref=""Default""/>
-            <AppenderRef ref=""RESTAppender""/>
-            <AppenderRef ref=""Console""/>
-        </AsyncLogger>
-</Loggers>
-
-proxy.py
-def configure_proxy_logger(debug):
-    logger = logging.getLogger('appdynamics.proxy')
-    level = os.getenv(""APP_APPD_WATCHDOG_LOG_LEVEL"", ""info"").upper()
-    pass
-
-def configure_watchdog_logger(debug):
-    logger = logging.getLogger('appdynamics.proxy')
-    level = os.getenv(""APP_APPD_WATCHDOG_LOG_LEVEL"", ""info"").upper()
-    pass
-
-Warning
-Please note, these paths and methods may vary based on your AppDynamics version and environment setup. Always backup files before making changes and be aware that updates to AppDynamics may overwrite your customizations.
-I hope this helps!
-",AppDynamics
-"I was looking at a monitoring dashboard made out of app dynamics. 
-Here is the observation.
-Application calls per minute from matrix browser at a given point of time.
-    Observed 0
-    Sum 4
-    Count 288
-
-What does this mean, and how is it calculated? Could anyone please clarify?
-","1. In the metric browser, you will find summarized metrics for a given time period. In the period you are looking at there are 0 calls, but int he graph there were 4 calls, and the total calls were 288. It's hard to tell what you are looking at, maybe attach a screenshot?
-",AppDynamics
-"I want to work with IMDB datasets. Trying to load using following command:
-from torchtext.datasets import IMDB
-train_iter = IMDB(root='~/datasets', split='train')
-
-I am getting following error:
-ImportError: cannot import name 'DILL_AVAILABLE' from 'torch.utils.data.datapipes.utils.common' (/home/user/env_p3.10.12_ml/lib/python3.10/site-packages/torch/utils/data/datapipes/utils/common.py)
-
-How can I solve this?
-","1. I have met the same issue and I found this site.
-Following the GitHub issue page I modified the torch/utils/data/datapipes/utils/common.py by adding a new line:
-DILL_AVAILABLE = dill_available(). Then the issue was solved.
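-If you prefer not to edit the installed torch file directly, the same idea can be applied as a runtime monkey patch. The sketch below is only an assumption-based workaround: it assumes a recent torch release (around 2.3, where dill_available() moved to torch.utils._import_utils), and it must run before torchtext is imported.
-import torch.utils.data.datapipes.utils.common as _dp_common
-# dill_available() is assumed to live here in recent torch releases (e.g. 2.3)
-from torch.utils._import_utils import dill_available
-
-# Recreate the constant that torchtext still tries to import
-_dp_common.DILL_AVAILABLE = dill_available()
-
-from torchtext.datasets import IMDB
-train_iter = IMDB(root='~/datasets', split='train')
-
-Alternatively, pinning torch to an older version that still defines DILL_AVAILABLE should also avoid the error.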
-",DataSet
-"I am looking to fine tune a summarization model from the HuggingFace Repository of NLP models to convert an extractive summary of a scientific research paper to an abstractive one. I already have a model that generates extractive summaries from the paper but I need data for training the model that would take the extractive summary as an input and generate corresponding abstractive summary as an output.
-So far most of the datasets I have seen contain the entire paper as an input and generate an abstractive summary. Whereas I need to input the extractive summary to the model.
-The closest dataset that I found would help me is this one:
-https://huggingface.co/datasets/allenai/scitldr/viewer?row=10.
-This has the Abstract, Introduction and Conclusion as the input to the model (which is kind of like an extractive summary), but the output is only a one-line abstract, whereas I would want the abstract to be multiple lines.
-Could you kindly suggest any dataset that you know of which would help me with this? Any help is very appreciated, thanks a lot!
-","1. You can try this datasets:
-
-DialogSum is a large-scale dialogue summarization dataset, consisting
-of 13,460 (Plus 100 holdout data for topic generation) dialogues with
-corresponding manually labeled summaries and topics. ref:
-https://huggingface.co/datasets/knkarthick/dialogsum
-
-This dataset is used for long-document summarization:
-https://huggingface.co/datasets/ccdv/govreport-summarization?row=0
-
-Wiki_summary: a dataset extracted from Persian Wikipedia in the form of
-articles and highlights, cleaned into pairs of articles and highlights,
-with the articles' length reduced [https://huggingface.co/datasets/wiki_summary]
-
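-As a quick way to inspect one of these, here is a minimal loading sketch (assuming the Hugging Face datasets library is installed; the column names dialogue and summary are taken from the DialogSum dataset card and may differ for the other datasets):
-from datasets import load_dataset
-
-# DialogSum provides train/validation/test splits (per the dataset card)
-dialogsum = load_dataset('knkarthick/dialogsum', split='train')
-
-example = dialogsum[0]
-print(example['dialogue'][:200])   # source text
-print(example['summary'])          # human-written reference summary
-
-If you try ccdv/govreport-summarization instead, note that it is backed by a loading script, so depending on your version of the datasets library you may need to pass trust_remote_code=True to load_dataset.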
-
-",DataSet
-"I have two arrays of objects which I want to ""Full Outer Join"", like in SQL:
-Dataset A:
-[ { id: 1, name: ""apple"", color: ""red"" }, {id: 2, name: ""banana"", color: ""yellow""} ]
-
-Dataset B:
-[ { id: 1, name: ""apple"", color: ""blue"" }, {id: 3, name: ""mango"", color: ""green""} ]
-
-Intended result:
-[ { id: 1, dataset_a: { id: 1, name: ""apple"", color: ""red"" }
-         , dataset_b: { id: 1, name: ""apple"", color: ""blue"" }
-  }
-, { id: 2, dataset_a: { id: 2, name: ""banana"", color: ""yellow""}
-         , dataset_b: null
-  }
-, { id: 3, dataset_a: null
-         , dataset_b: { id: 3, name: ""mango"", color: ""green""}
-  }
-]
-
-
-The id's are unique.
-Lodash may be used.
-I have no restriction on ES version.
-
-Instead of null, an empty object would be OK too. The id's don't necessarily need to be repeated, as shown below. So, this would be just as good:
-[ { id: 1, dataset_a: { name: ""apple"", color: ""red"" }
-         , dataset_b: { name: ""apple"", color: ""blue"" }
-  }
-, { id: 2, dataset_a: { name: ""banana"", color: ""yellow""}
-         , dataset_b: {}
-  }
-, { id: 3, dataset_a: {}
-         , dataset_b: { name: ""mango"", color: ""green""}
-  }
-]
-
-Nina Scholz's solution, transformed into a function:
-fullOuterJoin(dataset_a_name, dataset_b_name, dataset_a, dataset_b, key) {
-    const getNullProperties = keys => Object.fromEntries(keys.map(k => [k, null]));
-    var data = { [dataset_a_name]:dataset_a, [dataset_b_name]:dataset_b },
-        result = Object
-            .entries(data)
-            .reduce((r, [table, rows]) => {
-                //forEach dynamic destructuring
-                rows.forEach(({ [key]:id, ...row }) => {
-                    if (!r[id]) r.items.push(r[id] = { [key]:id, ...getNullProperties(r.tables) });
-                    r[id][table] = row;
-                });
-                r.tables.push(table);
-                r.items.forEach(item => r.tables.forEach(t => item[t] = item[t] || null));
-                return r;
-            }, { tables: [], items: [] })
-            .items;
-        
-    return result;
-},
-
-","1. A code snippet for specifically your need:    
-const datasetA = [ { id: 1, name: ""apple"", color: ""red"" }, {id: 2, name: ""banana"", color: ""yellow""} ]
-const datasetB = [ { id: 1, name: ""apple"", color: ""blue"" }, {id: 3, name: ""mango"", color: ""green""} ]
-
-
-const joined = [];
-
-// datasetA
-for (let i = 0; i < datasetA.length; i++) {
-    let item = {
-        id: datasetA[i].id,
-        dataset_a: datasetA[i],
-    };
-    joined.push(item);
-}
-// datasetB
-for (let i = 0; i < datasetB.length; i++) {
-    const foundObject = joined.find(d => d.id === datasetB[i].id);
-    if (foundObject) {
-        foundObject['dataset_b'] = datasetB[i];
-    }
-    else {
-        let item = {
-            id: datasetB[i].id,
-            dataset_a: {},
-            dataset_b: datasetB[i],
-        };
-        joined.push(item);
-    }
-}
-
-console.log(joined);
-
-
-2. You could take a dynamic approach: store the wanted data sets in an object and iterate the entries from that object, then group by id and get all items back.
-This approach uses an object as hash table with id as key and an array as storage for the result set. If an id is not known, a new object with id and previously used keys with null value are used. Then the actual data set is added to the object.
-Finally for missing tables null values are assigned as well.
-
-
-const
-    getNullProperties = keys => Object.fromEntries(keys.map(k => [k, null]));
-
-var dataset_a = [{ id: 1, name: ""apple"", color: ""red"" }, { id: 2, name: ""banana"", color: ""yellow"" }],
-    dataset_b = [{ id: 1, name: ""apple"", color: ""blue"" }, { id: 3, name: ""mango"", color: ""green"" }],
-    data = { dataset_a, dataset_b },
-    result = Object
-        .entries(data)
-        .reduce((r, [table, rows]) => {
-            rows.forEach(({ id, ...row }) => {
-                if (!r[id]) r.items.push(r[id] = { id, ...getNullProperties(r.tables) });
-                r[id][table] = row;
-            });
-            r.tables.push(table);
-            r.items.forEach(item => r.tables.forEach(t => item[t] = item[t] || null));
-            return r;
-        }, { tables: [], items: [] })
-        .items;
-
-console.log(result);
-.as-console-wrapper { max-height: 100% !important; top: 0; }
-
-
-
-
-3. var array1 = [ { id: 1, name: ""apple"", color: ""red"" }, {id: 2, name: ""banana"", color: ""yellow""} ]
-var array2 = [ { id: 1, name: ""apple"", color: ""blue"" }, {id: 3, name: ""mango"", color: ""green""} ]
-
-var array_sum = array1.concat(array2)
-
-var array_result = []
-
-array_sum.forEach(function(candidate, index){
-  var obj_id = candidate.id;
-  // pick the output key based on which source array the item came from
-  var dataset_key = index < array1.length ? ""dataset_a"" : ""dataset_b"";
-  delete candidate.id
-  if(array_result.length == 0){
-    array_result.push({
-      ""id"": obj_id,
-      [""dataset_"" + index]: candidate
-    })
-  }else{
-    for(var i=0; i<array_result.length; i++){
-      if(array_result[i].id == obj_id){
-        array_result[i][""dataset_"" + index] = candidate
-        break;
-      }else if(i == array_result.length - 1){
-        array_result.push({
-          ""id"": obj_id,
-          [""dataset_"" + index]: candidate
-        })
-      }
-    }
-  }
-})
-console.log(array_result)
-
-",DataSet
-"I have installed Dynatrace agent classic full stack deployment by following this link
-OneAgent and ActiveGate pods are consuming more memory than our actual microservices in the AKS cluster. I have not found any Dynatrace documentation explaining the memory consumption. Is there a way to reduce it by enabling or disabling some feature of Dynatrace?
-I am not providing any configuration YAMLs here; please advise what I can configure to reduce this memory consumption.
-
-","1. You can set the resource settings for OneAgents and ActiveGates in your DynaKube yaml.
-Suggested values can be found in the classicFullStack.yaml example file.
-But assuming the values shown are in MB, they are in a normal range of what a OneAgent will consume. There isn't really a lot you can do as memory consumption of the Agent Pod doesn't scale a lot with the number of Microservices running on the node. If you had a lot more services, it still wouldn't consume significantly more RAM.
-",Dynatrace
-"I have an application running with a combination of Falcon and Gunicorn. I am trying to use OpenTelemetry to instrument it and send traces to Jaeger.
-The following is my code:
-pyproject.toml:
-opentelemetry-distro = {extras = [""otlp""], version = ""0.44b0"" }
-opentelemetry-instrumentation = ""0.44b0""
-opentelemetry-instrumentation-falcon = ""0.44b0""
-
-gunicorn_conf.py:
-from opentelemetry import trace
-from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
-from opentelemetry.sdk.resources import Resource
-from opentelemetry.sdk.trace import TracerProvider
-from opentelemetry.sdk.trace.export import BatchSpanProcessor
-
-def post_fork(server, worker):
-    from opentelemetry.instrumentation.auto_instrumentation import sitecustomize
-    server.log.info(""Worker spawned (pid: %s)"", worker.pid)
-
-    resource = Resource.create(attributes={
-        ""service.name"": ""my-app""
-    })
-
-    trace.set_tracer_provider(TracerProvider(resource=resource))
-    span_processor = BatchSpanProcessor(
-        OTLPSpanExporter(endpoint=telemetry_endpoint, insecure=telemetry_insecure)
-    )
-    trace.get_tracer_provider().add_span_processor(span_processor)
-
-And the launch command:
-OTEL_RESOURCE_ATTRIBUTES=service.name={app_name} OTEL_EXPORTER_OTLP_TRACES_ENDPOINT={telemetry_endpoint} OTEL_EXPORTER_OTLP_METRICS_ENDPOINT={telemetry_endpoint} OTEL_EXPORTER_OTLP_INSECURE={telemetry_insecure} OTEL_TRACES_EXPORTER=otlp OTEL_METRICS_EXPORTER=none opentelemetry-instrument gunicorn -c gunicorn_conf.py
-
-So the application runs and the traces appear in Jaeger, but each trace shows up as a single call rather than multiple calls, even when there is a DB call or an external API call.
-
-","1. I think you need to propagate the trace context manually in your application code. I run into a similar issue. I took the traceparentid from the header (in my case the header of a receiving kafka message):
-        value = headers_dict['traceparent']
-        value_string =value.decode('utf-8')
-        carrier ={'traceparent':value_string}
-        ctx = TraceContextTextMapPropagator().extract(carrier=carrier)
-
-The following code then makes an HTTP call, so I also made sure to pass the ctx into the generated span, something like this:
-def getLocationDetails(self, ctx, headers):
-        tracer = trace.get_tracer(__name__)
-      
-        with tracer.start_span(""get geo data"", context=ctx):
-            try:
-                response = requests.request(""GET"", self.url, headers=headers, data=payload)
-                ...
-                return location_data
-            
-            except HTTPError as http_err:
-                
-                return http_err
-
-And then also include the ctx in the tracer I use to create a span for sending out a kafka message.
-  def send(self, topic, key, message,ctx, header):
-        tracer = trace.get_tracer(__name__)
-        with tracer.start_span(""send kafka message"", context=ctx):
-            self.producer.send(topic, value=message, key=key, headers=header)
-
-For me, propagating the traceparent worked out and I am now seeing all my spans in one trace. I am not sure if that is the best solution, but I hope it helps.
-
-2. 
-So the application runs and then the traces appear on Jaeger but shows it like a single function call and not multiple calls even if there is a db call or an external api call.
-
-Looking at your pyproject.toml, the only instrumentation being installed is opentelemetry-instrumentation-falcon (and whatever dependencies that pulls in). In order to trace other components, like outgoing API requests through an HTTP client or db calls, you need to install the relevant instrumentations for those components. See Supported libraries and frameworks or you can search on PyPI for instrumentations. You can also try using the opentelemetry-bootstrap command as described here to automatically install relevant instrumentations.
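-As a rough illustration (not the asker's exact setup), once a package such as opentelemetry-instrumentation-requests is installed, outgoing HTTP calls made with the requests library can be traced either by letting opentelemetry-instrument pick the instrumentation up automatically or by enabling it explicitly, e.g. in the gunicorn post_fork hook:
-from opentelemetry.instrumentation.requests import RequestsInstrumentor
-
-# enable tracing of outgoing HTTP calls made via the requests library;
-# the same pattern applies to other instrumentations (e.g. database clients)
-RequestsInstrumentor().instrument()
-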
-",Falcon
-"I tried to record some logs for front-end nginx-based containers using fluentd docker logging driver, but failed.
-I ended up with following configuration for fluentd (located in /tmp/fluentd/fluent.conf):
-<source>
-  @type forward
-  port 24224
-  bind 0.0.0.0
-</source>
-<match **-**>
-  @type stdout
-</match>
-
-and following docker swarm manifest:
-version: '3.8'
-
-networks:
-  base:
-
-services:
-  fluentd:
-    image: fluent/fluentd:v1.17.0-1.0
-    volumes:
-      - /tmp/fluentd:/fluentd/etc
-    ports:
-      - 24224:24224
-    networks:
-      - base
-  test-curl:
-    image: alpine
-    command: sh -c ""apk add curl && while true; do curl fluentd:24224; sleep 5; done""
-    depends_on:
-      - fluentd
-    networks:
-      - base
-  test-service:
-    image: nginx:1.25
-    depends_on:
-      - fluentd
-    logging:
-      driver: fluentd
-      options:
-        # fluentd-address: host.docker.internal:24224 # this line routes queries via docker host
-        fluentd-address: fluentd:24224 # this line tries to send logs directly to fluentd container
-        tag: something
-    networks:
-      - base
-
-This configuration runs fluentd service and two test services. First (with alpine-based curl test) works fine. Second, with fluentd driver, does not. It fails to start with following message:
-> docker service ps fluentd-test_test-service --no-trunc
-...
-<...>    \_ fluentd-test_test-service.1   nginx:1.25@sha256:<...> docker-desktop   Shutdown        Failed 22 seconds ago   ""starting container failed: failed to create task for container: failed to initialize logging driver: dial tcp: lookup fluentd: i/o timeout""
-...
-
-Looks like for some reason it can't see fluentd container in the very same network.
-But when I comment out the marked line and uncomment the line with the host.docker.internal address, everything works as intended.
-Running under WSL2 (2.1.5.0, kernel 5.15.146.1-2), Docker 26.1.1, Windows 10.0.19045.4412
-What I tried:
-
-upgrading docker
-upgrading wsl
-upgrading fluent/fluentd
-diagnosing fluentd (open ports, accessibility, dns resolution -- works fine)
-inspecting all docker objects (looks absolutely normal)
-asking chatgpt
-
-Please, any ideas will be appreciated.
-","1. Looks like desired behavior is not achievable with fluentd driver, as it is written in docs:
-
-fluentd Writes log messages to fluentd (forward input). The fluentd daemon must be running on the host machine.
-
-https://docs.docker.com/config/containers/logging/configure/#supported-logging-drivers
-
-2. Docker's log drivers are implemented by Docker itself, and as such have no access to containers or container networking. So when you specify the address for Docker to talk to, you need to specify a URL the Docker daemon itself can reach. As such, if you have fluentd running in a container, or directly as a daemon on the host, you would specify localhost:24224 as the target for logs.
-docker network names will not work, and if host.docker.internal works it must be because there is an entry in /etc/hosts aliasing it to localhost.
-In the case of docker swarm, you only need 1 fluentd service as ""localhost:24224"" will be routed to the fluentd container from any swarm node.
-
-There are some caveats that mean you should never run fluentd as a container: Docker expects logging drivers to be present, and if it cannot log it will literally pause critical operations (containers that need logging will get stuck in starting states, and worse) until the log container is fixed.
-Use Docker log drivers when you have an off-swarm, highly available log target. Otherwise it's best to stick to Docker's JSON-file based logs and use tools like promtail/logstash etc. to scrape them.
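-Applied to the compose file from the question, the logging block would then point at the published port on the host rather than at the service name (a sketch, untested on the asker's setup):
-    logging:
-      driver: fluentd
-      options:
-        fluentd-address: localhost:24224 # an address the docker daemon itself can reach
-        tag: something
-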
-",Fluentd
-"I'm doing testing Fluentd for collecting log files from Apache Tomcat in Windows OS.
-So I tried installing with [Fluentd-Packages v5.x] and [Calyptia-Fluent v1.3.x], and both failed with the Windows message ""Setup Wizard ended prematurely"", like below:
-
-I tried on 5 Windows boxes and succeeded on only 1. The only differences are the Windows Update state and the installed .NET Framework:
-1. Succeeding Windows box: no updates since the initial installation
-2. Other Windows boxes: fully updated via Windows Update
-
-Finally, I couldn't find any clues to solve this.
-I need help from Windows, Fluentd and Ruby experts. If you have any knowledge, please share it.
-Thank you.
-","1. I ran into this same issue - resolved it by deleting the fluentd service that was left after another failed install
-sc delete fluentdwinsvc
-
-",Fluentd
-"I want to add a custom variable that allows me to filter the data where tag value for ""process"" equals to the variable value in grafana dashboard. I am able to add a custom variable to the dashboard with value options ""process1"", ""process2"" and ""process3"", but when I use this variable in the query as
- |> filter(fn: (r) => r[""Process ID""] == ${process})
-it is giving me error undefined identifier process2.
-Although the query works correctly and filters the data by that particular process when I replace the variable ${process} with ""process2"", it doesn't work when I use the variable.
-How can I fix this issue?
-I tried using the variable in the flux query as
- |> filter(fn: (r) => r[""Process ID""] == ${process})
-but it is not working
-","1. Try to use advanced variable format options:
-  |> filter(fn: (r) => r[""Process ID""] == ${process:doublequote})
-
-",Grafana
-"I am creating filter variables on Grafana 10 from influxdb 2.7.4 with below code. Each variable seperately.
-import ""influxdata/influxdb/schema""
-
-schema.tagValues(
-bucket: ""Vbuck"",
-tag: ""Service_Name""
-)
-
-But I need to create variables that depend on other variables.
-For example :
-I have Service_Name, Owner_Group, Group_Manager tags.
-I have GroupManager and OwnerGroup variables.
-Service_Name depends on Group_Manager.
-Group_Manager depends on Owner_Group.
-When I choose an Owner_Group in the filter variables, only the Group_Managers related to that Owner_Group should be listed. And when a Group_Manager is selected, only the Service_Names related to that Group_Manager should be listed in the Service_Names variable list.
-In the old InfluxDB version (1.8) I was creating the ""Service_Name"" variable with the below InfluxQL query:
-SHOW TAG VALUES FROM ""SELECTBOX_VALUES"" WITH KEY = ""Service_Name"" where ""Group_Manager""=~ /^$GroupManager$/ and ""Owner_Group""=~ /^$OwnerGroup$/
-
-Now how can I do the same in v2.7.4?
-I need to relate the variables to each other,
-so that when an owner group is chosen, the group managers are listed just for that owner group, and the service names are listed just for the chosen owner group and chosen manager.
-","1. Use predicate function:
-import ""influxdata/influxdb/schema""
-    schema.tagValues(
-    bucket: ""Vbuck"",
-    tag: ""Service_Name"",
-    predicate: (r) => r.Group_Manager == ${GroupManager:doublequote}
-)
-
-",Grafana
-"I am using Grafana 10 and influxdb v2.7.4.
-I am using the below Flux code to create a Grafana variable for my dashboard.
-When I test this in the InfluxDB script editor, I get 3500 Client_Name variable values.
-But when I apply this as a variable and open the variable filter on the dashboard, the values are truncated. In alphabetical order I can only get as far as variable names starting with f… Nearly 70% of the Client_Name variable values are missing in the filter. How can I display all 3500 Client_Name variable filter values in the dropdown?
-import ""influxdata/influxdb/schema""
-schema.tagValues(
-bucket: ""VFBckStrg"",
-tag: ""Client_Name""
-)
-
-The strange thing is:
-If I create the variable from InfluxDB 1.8 with InfluxQL, I can see 3500 client names in the preview-of-values section at the bottom of the variable creation page.
-If I create the variable from InfluxDB v2.7.4, I can only see 1000 client names in the preview-of-values section at the bottom of the variable creation page.
-Note: the InfluxDB 1.8 and 2.7.4 databases hold the same data. We are planning to move to 2.7.4 but couldn't do it yet because of this limitation.
-","1. Grafana has 1000 items limit for variables.
-See https://github.com/grafana/grafana/issues/59959
-",Grafana
-"I'm still new to Grafana and I'm trying to extract hourly traffic data using the nginxplus_location_zone_responses metric. Just want to know if I'm using the correct promQL query.
-Any inputs would be greatly appreciated.
-Thanks!
-I'm currently using this query:
-
-sum by(location_zone) (increase(nginxplus_location_zone_responses{code=~""2xx|4xx|5xx""}[1H]))
-
-","1. Your PromQL query is generally correct for extracting hourly traffic data for the nginxplus_location_zone_responses metric. The increase function is used to calculate the increase in the metric over the specified time window (1 hour in this case), and the sum by(location_zone) is used to aggregate the data by the location_zone label.
-Here is your query for reference:
-
-sum by(location_zone) (increase(nginxplus_location_zone_responses{code=~""2xx|4xx|5xx""}[1h]))
-
-This query will sum the increase in the nginxplus_location_zone_responses metric for the specified HTTP response codes (2xx, 4xx, 5xx) over the past hour, grouped by the location_zone.
-To ensure accuracy, double-check that:
-The nginxplus_location_zone_responses metric is available and correctly scraped by your Prometheus instance.
-The labels and their values (like code, location_zone) are correct as per your NGINX Plus Prometheus exporter configuration.
-The time window ([1h]) is appropriate for your needs.
-If you need more detailed insights or to fine-tune the query, consider the following suggestions:
-Filter Specific Codes: If you need to filter specific status codes separately, you can use multiple queries or a more specific regex. For example:
-
-sum by(location_zone) (increase(nginxplus_location_zone_responses{code=~""2xx""}[1h]))
-sum by(location_zone) (increase(nginxplus_location_zone_responses{code=~""4xx""}[1h]))
-sum by(location_zone) (increase(nginxplus_location_zone_responses{code=~""5xx""}[1h]))
-
-Visualization: Ensure that your Grafana panel is configured correctly to visualize the time series data effectively (e.g., using a time series graph or bar chart for hourly data).
-Rate Function: If you need a more continuous rate rather than just the increase over the past hour, you could use the rate function:
-
-sum by(location_zone) (rate(nginxplus_location_zone_responses{code=~""2xx|4xx|5xx""}[1h]))
-
-These tips should help you effectively monitor and visualize your hourly traffic data using Grafana and Prometheus.
-",Grafana
-"i know how to calculate number of successfull requests via
-#A:my.server.$Environment.api.*.httpServerRequests.exception.*.method.*.outcome.*.status.200.uri.{submit}.count
-#B:my.server.$Environment.api.*.httpServerRequests.exception.*.method.*.outcome.*.status.200.uri.{confirm}.count
-I'm trying to calculate the percentage of confirm to submit requests per hour.
-Will this query be correct for that purpose:
-asPecent(summarize(sumSeries(#B), '1h', 'sum', false),summarize(sumSeries(#A), '1h', 'sum', false))
-or am I doing something wrong?
-","1. summarize is not good option for such thing. I ended up using movingSum, which provides more relevant data
-",Graphite
-"I'm trying to do something in Grafana/Graphite which should be really easy, but has me baffled. I'd like to get the total by which a count has increased during the period I select in the top right corner of the Grafana UI.
-Users of our site can import content from various sources, and every time they do an import, we increment a StatsD count. We include the source type in the StatsD path, so we can break out the numbers for each type of import in Grafana, as in this example.
-
-Unfortunately, in this view of the graph Grafana is giving us crazy numbers. When we check them against another stats service we also use, we see very different data. For example, the import that Grafana shows as 96.9 in this screenshot has a value of 4,466 in our other system over the last week. We're totally sure our other system is right and Grafana's number is wrong.
-Can anyone suggest what we're doing wrong? We want a chart that'll give us the breakdown of number of imports of each type during the selected date range. Over 7 days, we'd expect the big green item to be 4,466, not 96.6. On a date range like 1 day, we'd expect it to be something like
-629, because the count gets incremented by about that much per day.
-","1. Sorry to answer your question four years after it is helpful... The reason could be because grafana consolidates the datapoints based on the width of the graph. You can set the max data points to -1 on a query to get all datapoints returned.
-",Graphite
-"I have a metric that shows the state of a server. The values are integers and if the value is 0 (zero) then the server is stable, else it is unstable. And the graph we have is at a minute level. So, I want to show an aggregated value to know how many hours the server is unstable in the selected time range. 
-Let's say I select ""Last 7 days"" as the time duration... we should get X hours of instability of the server.
-And one more thing: I have a line graph (time series graph) that shows the state of the server... but when I select ""Last 24 hours"" or ""48 hours"" I get the graph at a minute level... when I increase the duration to a quarter I get the graph at every 5 min or so... I understand it's aggregating the values... but does anybody know how Grafana does the aggregation?
-I have tried the ""scaleToSeconds"" and ""consolidateBy"" functions and many more to first get the count of non-zero-value minutes, but with no success.
-Any help would be greatly appreciated.
-Thanks in advance.
-","1. There are a few different ways to tackle this, there are 2 places that aggregation happens in this situation:
-
-When you query for a time range longer than your raw retention interval and whisper returns aggregated data.  The aggregation method used here is defined in your carbon aggregation configuration.
-When Grafana sends a query to Graphite it passes maxDataPoints=<width of graph in pixels>, and Graphite will perform aggregation to return at most that many points (because you don't have enough pixels to render more points than that).  The method used for this consolidation is controlled by the consolidateBy function.
-
-It is possible for both of these to be used in the same query if you eg have a panel that queries 3 days worth of data and you store 2 days at 1-minute and 7 days at 5-minute intervals in whisper then you'd have 72 * 60 / 5 = 864 points from the 5-minute archive in whisper, but if your graph is only 500px wide then at runtime that would be consolidated down to 10-minute intervals and return 432 points.
-So, if you want to always have access to the count then you can change your carbon configuration to use sum aggregation for those series (and remove the existing whisper files so new ones are created with the new aggregation config), and pass consolidateBy('sum') in your queries, and you'll always get the sum back for each interval.
-That said, you can also address this at query time by multiplying the average back out to get a total (assuming that your whisper aggregation config is using average).  The simplest way to do that will be to summarize the data with average into buckets that match the longest aggregation interval you'll be querying, then scale those values by that interval to calculate the total number of minutes.  Finally, you'll want to use consolidateBy('sum') so that any runtime consolidation will work properly.
-consolidateBy(scale(summarize(my.series, '10min', 'avg'), 60), 'sum')
-
-With all of that said, you may want to consider reporting uptime in terms of percentages rather than raw minutes, in which case you can use the raw averages directly.
-
-2. When you say the value is zero (0), the server is healthy - what other values are reported while the server is unhealthy/unstable? If you're only reporting zero (healthy) or one (unhealthy), for example, then you could use the sumSeries function to get a count across multiple servers.
-Some more information is needed here about the types of values the server is reporting in order to give you a better answer.
-Grafana does aggregate - or consolidate - data typically by using the average aggregation function. You can override this using the 'sum' aggregation in the consolidateBy function.
-To get a running calculation over time, you would most likely have to use the summarize function (also with the sum aggregation) and define the time period, e.g. 1 hour, 1 day, 1 week, and so on. You could take this a step further by combining this with a time template variable so that as the period grows/shrinks, the summarize period will increase/decrease accordingly.
-
-3. consolidateBy didn't work for me, so I went into ""Query Options"" and set ""max data points"" to -1 which meant that I got a value based on all datapoints
-",Graphite
-"In a Grafana dashboard with several datapoints, how can I get the difference between the last value and the previouse one for the same metric?
-Perhaps the tricky part is that the time between 2 datapoints for the same metric is not know.
-so the desired result is the <metric>.$current_value - <metric>.$previouse_value for each point in the metricstring.
-Edit:
-The metrics are stored in graphite/Carbon DB.
-thanks
-","1. You need to use the derivative function
-
-This is the opposite of the integral function. This is useful for taking a running total metric and calculating the delta between subsequent data points.
-This function does not normalize for periods of time, as a true derivative would. Instead see the perSecond() function to calculate a rate of change over time.
-
-Together with the keepLastValue
-
-Takes one metric or a wildcard seriesList, and optionally a limit to the number of ‘None’ values to skip over.
-Continues the line with the last received value when gaps (‘None’ values) appear in your data, rather than breaking your line.
-
-Like this
-derivative(keepLastValue(your_metric))
-
-A good example can be found here http://www.perehospital.cat/blog/graphite-getting-derivative-to-work-with-empty-data-points
-",Graphite
-"I have logs which contains in full_message;
-
--EndPoint:example/example/abc
--EndPoint:example/example/qfdsf
--EndPoint:example/example
-.. and so on
-
-I am trying to write a search query to just get -EndPoint:example/example.
-""-EndPoint:example/example"" is not working
-
-and I can't use ""and"" or ""or"" because there are hundreds of versions.
-Why can't I, and how can I, get only -EndPoint:example/example entries which don't have / at the end?
-","1. You can use the following regex:
-""-EndPoint:example/example$""
-
-It searches for the string while making sure it is at the end of the string.
-",Graylog
-"When using tomcat within Eclipse why would I ever not want to use the tomcat installation as checked in the attached image. I always use the ""Tomcat Installation"" Are there advantages/disadvantages of using the other Tomcat server locations.
-
-","1. In fact, I always Use Workspace Metadata. When you say Use workspace metadata, Eclipse copies your files (class files, jsps, server.xml, context.xml) to /.metadata/.plugins/org.eclipse.wst.server.core/tmp0. It then starts Tomcat using these files. It does not change the Tomcat installation directory at all. Note that this doesn't copy the tomcat files, just the files which come from your project.
-If you choose Use Tomcat Installation, then it copies your files into the Tomcat installation directory, and boots it from there.
-If, like me, you're developing multiple projects from multiple workspaces, then this makes a big difference. With Use Workspace Metadata you will never get any interference between workspaces. For instance, it's possible that when rebooting Tomcat, one project will be in a bad state and your logs will be filled with stuff from another project. It's better to have two separate locations, and the workspace is a good place for this. 
-
-2. Always try to use Use workspace metadata ...
-This option deploys the web app in the workspace directory
-~WORKSPACE\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps
-
-So if you have different workspaces for different projects, the applications are deployed in different places, which resolves ambiguity in deployment.
-Even if you have a single application, this is recommended.
-if you select 
-use tomcat installation... ,
- you will have to be careful while dealing with multiple applications as the old application will be overridden by the newly deployed web application.
-when you select 
-use custom location ...
- then you need to be more careful while handling multiple applications as you manually give the locations for deployment
-
-3. I'll add a little bit to Matthew Farwell's explanation.
-If you see the Server Locations area grayed out, you must first remove all applications you've added to the server configuration and clean the working directory. It seems that if you don't clean the working directory, you'll get some startup error when you Start or Debug the server.  Once you do that, your will then be able to change Server Locations.  You then add the application you removed back to the server. You can add then either before or after starting the server.
-When you use an Eclipse Tomcat server I observed that the ""server.xml"" file gets slightly modified.  It gets some ""Context"" elements added to the server.xml.  If you later switch back to using the metadata, those added ""Context"" elements get removed. So it's not exactly true to say that it just makes a copy. It makes a copy with some minor changes to the configuration files.
-And lastly, if you right click on you server configuration and select properties, you get a dialog box that lets you ""switch location"".  Making the change there, does not seem to be the same thing as changing ""server location"". Your still using the metadata location (with a different tmp directory). I was only able to  get it to switch to the Tomcat installation by double clicking on the server configuration.  That brings a the configuration page and where you can change server locations.  It's probably because I really don't understand what the ""switch location"" in the property page does. It confused me and I thought it might confuse others.
-",Helios
-"The code written below is to convert audio to text using CMU Sphinx in Java 1.6 and Eclipse Helios.   
-import java.io.FileInputStream;
-import java.io.IOException;
-import java.io.FileNotFoundException;
-
-import edu.cmu.sphinx.api.Configuration;
-import edu.cmu.sphinx.api.SpeechResult;
-import edu.cmu.sphinx.api.StreamSpeechRecognizer;
-
-public class AudioToText {
-    public static void main(String [] args) throws FileNotFoundException,IOException{
-        Configuration configuration = new Configuration();
-
-        // Set path to acoustic model.
-        configuration.setAcousticModelPath(""C:/Program Files/eclipse/sphinx4-5prealpha/models/acoustic"");
-        // Set path to dictionary.
-        configuration.setDictionaryPath(""C:/Program Files/eclipse/sphinx4-5prealpha/models/acoustic/wsj/dict/cmudict.0.6d"");
-        // Set language model.
-        configuration.setLanguageModelPath(""C:/Program Files/eclipse/sphinx4-5prealpha/models/language/en-us.lm.dmp"");
-
-        StreamSpeechRecognizer recognizer = new StreamSpeechRecognizer(configuration);
-        //recognizer.startRecognition(new File(""D:/audio.mp3"").toURI().toURL());
-        recognizer.startRecognition(new FileInputStream(""D:/audio.mp3""));
-        SpeechResult result;
-        while ((result = recognizer.getResult()) != null) {
-            System.out.println(result.getHypothesis());
-        }
-        recognizer.stopRecognition();
-    }
-}
-
-Exceptions are arising because of not setting the path of acoustic model correctly as mentioned below:
-   Exception in thread ""main"" Property exception component:'acousticModelLoader' property:'location' - Bad URL C:/Program Files/eclipse/sphinx4-5prealpha/models/acousticunknown protocol: c
-edu.cmu.sphinx.util.props.InternalConfigurationException: Bad URL C:/Program Files/eclipse/sphinx4-5prealpha/models/acousticunknown protocol: c
-    at edu.cmu.sphinx.util.props.ConfigurationManagerUtils.getResource(ConfigurationManagerUtils.java:479)
-    at edu.cmu.sphinx.linguist.acoustic.tiedstate.Sphinx3Loader.newProperties(Sphinx3Loader.java:246)
-    at edu.cmu.sphinx.util.props.PropertySheet.getOwner(PropertySheet.java:508)
-    at edu.cmu.sphinx.util.props.PropertySheet.getComponent(PropertySheet.java:290)
-    at edu.cmu.sphinx.linguist.acoustic.tiedstate.TiedStateAcousticModel.newProperties(TiedStateAcousticModel.java:102)
-    at edu.cmu.sphinx.util.props.PropertySheet.getOwner(PropertySheet.java:508)
-    at edu.cmu.sphinx.util.props.PropertySheet.getComponent(PropertySheet.java:290)
-    at edu.cmu.sphinx.linguist.lextree.LexTreeLinguist.newProperties(LexTreeLinguist.java:301)
-    at edu.cmu.sphinx.util.props.PropertySheet.getOwner(PropertySheet.java:508)
-    at edu.cmu.sphinx.util.props.PropertySheet.getComponent(PropertySheet.java:290)
-    at edu.cmu.sphinx.decoder.search.WordPruningBreadthFirstSearchManager.newProperties(WordPruningBreadthFirstSearchManager.java:199)
-    at edu.cmu.sphinx.util.props.PropertySheet.getOwner(PropertySheet.java:508)
-    at edu.cmu.sphinx.util.props.PropertySheet.getComponent(PropertySheet.java:290)
-    at edu.cmu.sphinx.decoder.AbstractDecoder.newProperties(AbstractDecoder.java:71)
-    at edu.cmu.sphinx.decoder.Decoder.newProperties(Decoder.java:37)
-    at edu.cmu.sphinx.util.props.PropertySheet.getOwner(PropertySheet.java:508)
-    at edu.cmu.sphinx.util.props.PropertySheet.getComponent(PropertySheet.java:290)
-    at edu.cmu.sphinx.recognizer.Recognizer.newProperties(Recognizer.java:90)
-    at edu.cmu.sphinx.util.props.PropertySheet.getOwner(PropertySheet.java:508)
-    at edu.cmu.sphinx.util.props.ConfigurationManager.lookup(ConfigurationManager.java:161)
-    at edu.cmu.sphinx.api.Context.<init>(Context.java:77)
-    at edu.cmu.sphinx.api.Context.<init>(Context.java:49)
-    at edu.cmu.sphinx.api.AbstractSpeechRecognizer.<init>(AbstractSpeechRecognizer.java:37)
-    at edu.cmu.sphinx.api.StreamSpeechRecognizer.<init>(StreamSpeechRecognizer.java:33)
-    at AudioToText.main(AudioToText.java:21)
-Caused by: java.net.MalformedURLException: unknown protocol: c
-    at java.net.URL.<init>(URL.java:574)
-    at java.net.URL.<init>(URL.java:464)
-    at java.net.URL.<init>(URL.java:413)
-    at edu.cmu.sphinx.util.props.ConfigurationManagerUtils.resourceToURL(ConfigurationManagerUtils.java:495)
-    at edu.cmu.sphinx.util.props.ConfigurationManagerUtils.getResource(ConfigurationManagerUtils.java:472)
-
-I have specified the path to the acoustic folder. How do I specify the correct path?
-","1. 
-Change configuration.setAcousticModelPath(""C:/Program
-  Files/eclipse/sphinx4-5prealpha/models/acoustic"");
-
-to 
-
-configuration.setAcousticModelPath(""file:C:\Program
-  Files\eclips\sphinx4-5prealpha\models\\acoustic"");
-
-It should work then.
-
-2. private static final String ACOUSTIC_MODEL_PATH = 
-        TextAligner.class.getResource(""/resources/models/acoustic/wsj"").toString();
-configuration = new Configuration();
-configuration.setAcousticModelPath(ACOUSTIC_MODEL_PATH);
-
-I guess you can do it like this if you have added acoustic model to a resources folder in your project folder.
-
-3. private static String ACOUSTIC_MODEL = ""file:///C:/zero/zero_ru.cd_cont_4000"";
-private static String LANGUAGE_MODEL = ""file:///C:/zero/ru.lm"";
-private static String DICTIONARY     = ""file:///C:/zero/ru.dic"";
-
-",Helios
-"Java Decompiler (JD) is generally recommended as a good, well, Java Decompiler. JD-Eclipse is the Eclipse plugin for JD.
-I had problems on several different machines to get the plugin running. Whenever I tried to open a .class file, the standard ""Source not found"" editor would show, displaying lowlevel bytecode disassembly, not the Java source output you'd expect from a decompiler.
-Installation docs in http://java.decompiler.free.fr/?q=jdeclipse are not bad but quite vague when it comes to troubleshooting. 
-Opening this question to collect additional information: What problems did you encounter before JD was running in Eclipse Helios? What was the solution?
-","1. Here's the stuff I ran into:
-1) RTFM and install the ""Microsoft Visual C++ 2008 SP1 Redistributable Package"" mentioned 
-at top of the installation docs. I missed this at first because the Helios instructions are at the end.
-2) Close all open editor tabs before opening a class file. Otherwise it's easy to get an outdated editor tab from a previous attempt.
-3) Open the class file in the ""Java Class File Editor"" (not ""Java Class File Viewer""). Use ""Open With"" in the context menu to get the right editor. If pleased with results, make it the default editor in the File Association settings, in Window/Preference General/Editors/File Associations select *.class to open with ""Java Class File Editor"".
-4) This guy recommends installing the Equinox SDK from the Helios update site. I did, but I'm not sure if this was really necessary. Anyone know?
-5) If the class files you are trying to view are in an Eclipse Java project, they need to be in the project's build path. Otherwise, an exception (""Not in the build path"") will show up in the Eclipse error log, and decompile will fail. I added the class files as a library / class file folder to the build path.
-6) Drag/dropping a class file from Windows Explorer or opening it with File/Open File... will not work. In my tests, it gives a ""Could not open the editor: The Class File Viewer cannot handle the given input ('org.eclipse.ui.ide.FileStoreEditorInput')."" error. That is probably the wrong editor anyway, see 3).
-7) After getting the plugin basically running, some files would still not decompile for an unknown reason. This disappeared after closing all tabs, restarting Helios, and trying again.
-
-2. To Make it work in Eclipse Juno - I had to do some additional steps.
-In General -> Editors -> File Association
-
-Select ""*.class"" and mark ""Class File Editor"" as default
-Select ""*.class without source"" -> Add -> ""Class File Editor"" -> Make it as default
-Restart eclipse
-
-
-3. The JD-eclipse plugin 0.1.3 can only decompile .class files that are visible from the classpath/Build Path.
-If your class resides in a .jar, you may simply add this jar to the Build Path as another library. From the Package Explorer browse your new library and open the class in the Class File Editor.
-If you want to decompile any class on the file system, it has to reside in the appropriate folder hierachy, and the root folder has to be included in the build path. Here is an example:
-
-Class is foo.bar.MyClass in .../someDir/foo/bar/MyClass.class
-In your Eclipse project, add a folder with arbitrary name aClassDir, which links to .../someDir. 
-Add that linked folder to the Build Path of the project.
-Use the Navigator View to navigate and open the .class file in the Class File Editor. (Note: Plain .class files on the file system are hidden in the Package Explorer view.)
-
-Note: If someDir is a subfolder of your project, you might be able to skip step 2 (link folder) and add it directly to the Build Path. But that does not work, if it is the compiler output folder of the Eclipse project.
-P.S. I wish I could just double click any .class file in any project subfolder without the need to have it in the classpath...
-",Helios
-"I have some query regarding helios framework.
-Q1 What is Helios in Asp.net?
-Q2 Can we use asp extension like(razor pages and aspx pages) using them. 
-Some articles says...
-
-One of the core reasons is the performance-factor. Helios will be able
-  to achieve 2x-3x more throughput than standard ASP.Net application. In
-  terms of memory consumption, Helios is much better than System.Web
-  dll. In a taken benchmark Helios architecture allowed a sample
-  application to achieve 50000 concurrent requests with approximately
-  1GB less overhead compare to a standard ASP.Net application.
-
-So is it possible to use it in an ASP.NET application?
-","1. For those who stumbled upon this question like me, Helios was an IIS component used during development of ASP.NET Core (known then as ASP.NET 5 or ASP.NET vNext).
-Helios was replaced by HttpPlatformHandler Change to IIS hosting model #69
-which was subsequently replaced by ASP.NET Core Module HttpPlatformHandler has been replaced by ASP.NET Core Module #164.
-At the time of writing, this is the currently used solution for ASP.NET Core - see Microsoft Docs.
-",Helios
-"I deleted my ./bin folder in an Eclipse Indigo (super similar to Helios), and now I am wondering how to rebuild my Java project. I just cannot find a button like we can see in  Netbeans.
-","1. For Eclipse you can find the rebuild option under Project > Clean and then select the project you want to clean up... that's all.
-
-This will build your project and create a new bin folder.
-
-2. In Eclipse there is an ""Auto Build"" option, which is checked by default. When it is checked, you don't need to build your project, this happens automatically. If this behaviour is unwanted, uncheck this option and click build project whenever you want.
-To clean a project, select Clean Project. This will delete the bin folder, however if Auto build is checked, it will be immediatelly regenerated.
-
-3. In case you are unable to find a file in Eclipse after pulling code from git or creating a file in IntelliJ separately (my case), you can do the following:
-Right click on the 'src' folder and, in the menu that appears, click the 'Refresh' button.
-",Helios
-"I'm using humio (https://www.humio.com) to aggregate logs sended by kuberntes pods.
-In some pod's a annotated the logs with humio-parser=json-for-action or humio-parser=json
-The pod logs are correctly json objects like:
-{""@timestamp"":""2021-11-16T08:46:32.557Z"",""@version"":""1"",""message"":""HikariPool-1 - Failed to validate connection org.postgresql.jdbc.PgConnection@47ce61b9 (This connection has been closed.). Possibly consider using a shorter maxLifetime value."",""logger_name"":""com.zaxxer.hikari.pool.PoolBase"",""thread_name"":""http-nio-8080-exec-3"",""level"":""WARN"",""level_value"":30000}
-
-
-The problem is that in the Humio console I can see the pod logs, but they all have a datetime, stdout and F prefix before the start of the JSON, which causes a parser error, as seen in the figure below:
-
-The Humio Kubernetes setup uses the official Helm chart (https://github.com/humio/humio-helm-charts), which in turn uses Fluent Bit for log discovery and parsing.
-I suspect that I need to tweak the Fluent Bit configuration, but how?
-","1. I found an answer in https://github.com/microsoft/fluentbit-containerd-cri-o-json-log the problem is my container runtime is containerd, which requires a different parser than the default docker parser.
-To fix the issue in humio helm chart we need the following:
-humio-fluentbit:
- parserConfig: |-
-   [PARSER]
-       Name apache
-       Format regex
-       Regex  ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] ""(?<method>\S+)(?: +(?<path>[^\""]*?)(?: +\S*)?)?"" (?<code>[^ ]*) (?<size>[^ ]*)(?: ""(?<referer>[^\""]*)"" ""(?<agent>[^\""]*)"")?$
-       Time_Key time
-       Time_Format %d/%b/%Y:%H:%M:%S %z
-   [PARSER]
-       Name apache2
-       Format regex
-       Regex  ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] ""(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?"" (?<code>[^ ]*) (?<size>[^ ]*)(?: ""(?<referer>[^\""]*)"" ""(?<agent>[^\""]*)"")?$
-       Time_Key time
-       Time_Format %d/%b/%Y:%H:%M:%S %z
-   [PARSER]
-       Name apache_error
-       Format regex
-       Regex  ^\[[^ ]* (?<time>[^\]]*)\] \[(?<level>[^\]]*)\](?: \[pid (?<pid>[^\]]*)\])?( \[client (?<client>[^\]]*)\])? (?<message>.*)$
-   [PARSER]
-       Name nginx
-       Format regex
-       Regex ^(?<remote>[^ ]*) (?<host>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] ""(?<method>\S+)(?: +(?<path>[^\""]*?)(?: +\S*)?)?"" (?<code>[^ ]*) (?<size>[^ ]*)(?: ""(?<referer>[^\""]*)"" ""(?<agent>[^\""]*)"")
-       Time_Key time
-       Time_Format %d/%b/%Y:%H:%M:%S %z
-   [PARSER]
-       Name json
-       Format json
-       Time_Key time
-       Time_Format %d/%b/%Y:%H:%M:%S %z
-   [PARSER]
-       Name docker
-       Format json
-       Time_Key time
-       Time_Format %Y-%m-%dT%H:%M:%S.%L
-       Time_Keep   On
-   [PARSER]
-       Name syslog
-       Format regex
-       Regex ^\<(?<pri>[0-9]+)\>(?<time>[^ ]* {1,2}[^ ]* [^ ]*) (?<host>[^ ]*) (?<ident>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<pid>[0-9]+)\])?(?:[^\:]*\:)? *(?<message>.*)$
-       Time_Key time
-       Time_Format %b %d %H:%M:%S
-   [PARSER]
-       Name cri
-       Format regex
-       Regex ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<log>.*)$
-       Time_Key time
-       Time_Format %Y-%m-%dT%H:%M:%S.%L%z
- inputConfig: |-
-   [INPUT]
-     Name             tail
-     Path             /var/log/containers/*.log
-     Parser           cri
-     Tag              kube.*
-     Refresh_Interval 5
-     Mem_Buf_Limit    5MB
-     Skip_Long_Lines  On
-
-This adds the cri parser and overrides the parser used in the input config.
-",Humio
-"I would like to extract some data from the icinga monitoring tool DB.
-the tables:
-icinga_objects
-+---------------+---------------------+------+-----+---------+----------------+
-| Field         | Type                | Null | Key | Default | Extra          |
-+---------------+---------------------+------+-----+---------+----------------+
-| object_id     | bigint(20) unsigned | NO   | PRI | NULL    | auto_increment |
-| name1         | varchar(255)        | YES  | MUL |         |                |
-| name2         | varchar(255)        | YES  | MUL | NULL    |                |
-| is_active     | smallint(6)         | YES  |     | 0       |                |
-...
-+---------------+---------------------+------+-----+---------+----------------+
-
-(for information name1 contains hostnames and name2 monitoring services)
-icinga_statehistory
-+-----------------------+---------------------+------+-----+---------+----------------+
-| Field                 | Type                | Null | Key | Default | Extra          |
-+-----------------------+---------------------+------+-----+---------+----------------+
-| state_time            | timestamp           | YES  |     | NULL    |                |
-| object_id             | bigint(20) unsigned | YES  | MUL | 0       |                |
-| state                 | smallint(6)         | YES  |     | 0       |                |
-| output                | text                | YES  |     | NULL    |                |
-...
-+-----------------------+---------------------+------+-----+---------+----------------+
-
-I need to extract (I hope I'm clear enough):
-name1, name2, output and only the most recent state_time for each couple name1/name2
-where object_id are common in both tables
-and name2 = 'xxx' and is_active = '1' and state = '0'
-for exampmle, if icinga_objects contains:
-object_id | name1    | name2    | is_active |
-5         | groot    | os_info  | 1
-
-and icinga_statehistory contains:
-state_time          | object_id | state | output   |
-2023-01-16 16:40:07 | 5         | 0     | RHEL 8.7 |
-2023-01-14 12:47:52 | 5         | 0     | RHEL 8.7 |
-2023-01-17 05:12:27 | 5         | 0     | RHEL 8.7 |
-
-for the couple groot/os_info I want only one answer containing :
-name1    | name2    | output   | state_time          |
-groot    | os_info  | RHEL 8.7 | 2023-01-17 05:12:27 |
-
-I tried to use inner join that way:
-select name1, name2, output, state_time
-from icinga_objects cs
-inner join icinga_statehistory s on cs.object_id = s.object_id
-where name2 = 'xxx' and is_active = '1' and state = '0'
-GROUP BY name2, name1, state_time;
-
-which seems OK but gives me more information than I need: I obtain all the recorded times for each name1/name2 pair. I now need to keep only the maximum value of state_time for each name1/name2 pair; unfortunately my SQL knowledge is way too low to do that.
-Do you have any idea how to do that?
-Thanks for your help
-","1. For the sake of this answer I have assumed that object_id uniquely identifies a name1/name2 pair.
-The MySQL < 8.0 method is to find the max(state_time) per object_id and join back to icinga_statehistory on both object_id and max(state_time) -
-select max.name1, max.name2, st.output, st.state_time
-from (
-    select cs.object_id, cs.name1, cs.name2, max(s.state_time) max_state_time
-    from icinga_objects cs
-    inner join icinga_statehistory s on cs.object_id = s.object_id
-    where cs.name2 = 'os_info' and cs.is_active = 1 and s.state = 0
-    group by cs.object_id
-) max
-inner join icinga_statehistory st on max.object_id = st.object_id and max.max_state_time = st.state_time;
-
-For MySQL >= 8.0 you can use the ROW_NUMBER() window function -
-select name1, name2, output, state_time
-from (
-    select cs.name1, cs.name2, s.output, s.state_time, row_number() over (partition by cs.object_id order by s.state_time desc) rn
-    from icinga_objects cs
-    inner join icinga_statehistory s on cs.object_id = s.object_id
-    where cs.name2 = 'os_info' and cs.is_active = 1 and s.state = 0    
-) t
-where rn = 1;
-
-",Icinga
-"what I've found in the internet regarding instrumenting application for prometheus monitoring is, people are instrumenting their app (python, go) by hard coding the application. So, they are all developers. From the devops perspective, how to instrument a huge application?
-I am a DevOps engineer not a developer. I want to instrument company's current app which back end is C++ and 200+ developers are working on it. I have to monitor SOAP, GUI, DWM, TENENT, IFS etc. How can I plan my workflow?
-Note: Currently it is monitored by Icinga, planned to move to prometheus, Grafana.
-","1. As a DevOps engineer you need to write exporters to fetching data from your services. So you should have a little development knowledge about Python or Golang.
-You can check exporters list in this page.
-If no one has developed the exporter you need, you have to develop it yourself.
-Finally, you should be able to extract metrics from your services and monitor them using Prometheus and Grafana.
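-As a rough, hypothetical sketch of what such an exporter can look like with the official prometheus_client library (the metric name and the probe function are placeholders, not part of your stack):
-import time
-from prometheus_client import start_http_server, Gauge
-
-# hypothetical gauge reporting whether a backend service responds (1 = up, 0 = down)
-service_up = Gauge('myapp_service_up', 'Whether the service responds', ['service'])
-
-def probe(name):
-    # placeholder health check; replace with a real SOAP/HTTP/etc. probe
-    return 1
-
-if __name__ == '__main__':
-    start_http_server(9101)  # Prometheus scrapes metrics from this port
-    while True:
-        service_up.labels(service='soap').set(probe('soap'))
-        time.sleep(30)
-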
-",Icinga
-"Dears,
-I have applications that run in a docker which are from https://github.com/REANNZ/etcbd-public.
-The certificate of the Icinga tool has expired, and when I tried to install a new SSL certificate from a Certificate Authority, the system regenerated another self-signed certificate with the same name (server.crt) in the same directory, which created a conflict and stopped the container. Whenever I delete it, add the new certificate and reload apache2, it comes back again.
-I would like to know where this certificate comes from and how to prevent it.
-","1. I solved the issue, it was because the certificate that I am trying to install had an issue, that is why the system generates another OpenSSL cert to operate. then I generate another SSL, and it works.
-",Icinga
-"I am stuck on instlling Instana using help to monitor my K8s.
-Here is the commands that I am using for this:
-
-kubectl create namespace instana-agent
-
-helm install instana-agent    --repo https://agents.instana.io/helm    --namespace instana-agent    --create-namespace    --set agent.key=KEY    --set agent.downloadKey=KEY    --set agent.endpointHost=ingress-coral-saas.instana.io    --set agent.endpointPort=443    --set cluster.name='matt-helm-1212'    --set zone.name='US-1'    instana-agent
-
-
-After that, I run this command:
-kubectl get all -n instana-agent
-At this stage I am stuck: the Instana agent pod is stuck being created and its status remains ""Pending"" the whole time.
-NAME                      READY   STATUS    RESTARTS   AGE
-pod/instana-agent-6qw8d   0/2     Pending   0          17m
-I am using the following helm version:
-version.BuildInfo{Version:""v3.11.0"", GitCommit:""472c5736ab01133de504a826bd9ee12cbe4e7904"", GitTreeState:""clean"", GoVersion:""go1.18.10""}
-I waited a long time for my Instana agent to appear and it never happened. I tried a Linux agent and it works; however, this problem is related only to K8s clusters.
-I would also like to add that I am using an EC2 machine and microk8s to run the K8s cluster.
-Please let me know if you require further information.
-I tried installing Instana using Helm and I was expecting to see my Instana agent in the Instana dashboard. It never appeared. Instead, this is what I see when I check my Instana namespace:
-NAME                      READY   STATUS    RESTARTS   AGE
-pod/instana-agent-6qw8d   0/2     Pending   0          17m
-","1. This sounds like with your k8s cluster more than the instana-agent deployment.
-Try to get more logs by  describe the pending pod or describe the events in instana-agent naspace.
-kubectl get event --sort-by .metadata.creationTimestamp -n instana-agent
-
-kubectl describe pod -l app.kubernetes.io/name=instana-agent -n instana-agent
-
-Once the instana-agent pods are up and running, get the agent logs with:
-kubectl logs -l app.kubernetes.io/name=instana-agent -n instana-agent -c instana-agent
-
-",Instana
-"I am trying to install Instana on docker, but when configuring the repository for RHEL there is a variable called DOWNLOAD_KEY=<download_key>
-How can I get the value for that ""DOWNLOAD_KEY"" variable?
-export DOWNLOAD_KEY=<download_key>
-
-cat << EOF > /etc/yum.repos.d/Instana-Product.repo
-[instana-product]
-name=Instana-Product
-baseurl=https://_:$DOWNLOAD_KEY@artifact-public.instana.io/artifactory/rel-rpm-public-virtual/
-enabled=1
-gpgcheck=0
-gpgkey=https://_:$DOWNLOAD_KEY@artifact-public.instana.io/artifactory/api/security/keypair/public/repositories/rel-rpm-public-virtual
-repo_gpgcheck=1
-EOF
-
-","1. Your agent key (available from the Instana UI) can be used as a download key.
-
-2. Instead of localhost:42699, try the IP assigned to the k8s nodes:
-kubectl get node -o wide
-The provided link for the Helm procedure is no longer accessible; follow the latest instructions in the Instana docs.
-Good luck!
-",Instana
-"I'm using Instana to deliver view stats on my site, each daily file looks like this:
-{
-  ""items"" : [ {
-    ""name"" : ""page1.htm"",
-    ""earliestTimestamp"" : 1675222177839,
-    ""cursor"" : {
-      ""@class"" : "".IngestionOffsetCursor"",
-      ""ingestionTime"" : 1675292168217,
-      ""offset"" : 1
-    },
-    ""metrics"" : {
-      ""uniqueSessions.distinct_count"" : [ [ 1675292400000, 4.0 ] ]
-    }
-  }, {
-    ""name"" : ""page2.htm"",
-    ""earliestTimestamp"" : 1675260035165,
-    ""cursor"" : {
-      ""@class"" : "".IngestionOffsetCursor"",
-      ""ingestionTime"" : 1675292168217,
-      ""offset"" : 2
-    },
-    ""metrics"" : {
-      ""uniqueSessions.distinct_count"" : [ [ 1675292400000, 1.0 ] ]
-    }
-  }, {
-    ""name"" : ""page3.htm"",
-    ""earliestTimestamp"" : 1675228447118,
-    ""cursor"" : {
-      ""@class"" : "".IngestionOffsetCursor"",
-      ""ingestionTime"" : 1675292168217,
-      ""offset"" : 3
-    },
-    ""metrics"" : {
-      ""uniqueSessions.distinct_count"" : [ [ 1675292400000, 7.0 ] ]
-    }
-  } ],
-  ""canLoadMore"" : false,
-  ""totalHits"" : 12,
-  ""totalRepresentedItemCount"" : 12,
-  ""totalRetainedItemCount"" : 12,
-  ""adjustedTimeframe"" : {
-    ""windowSize"" : 86400000,
-    ""to"" : 1675292400000
-  }
-}
-
-These daily files should be merged into one json after filtering for the necessary info:
-
-url (from name)
-
-date (first value in ""uniqueSessions.distinct_count"")
-
-number of page visits: (second value in ""uniqueSessions.distinct_count"")
-It is important that this has to be done in CMD, since I have to use a batch file: the target user is not allowed to run PowerShell scripts, nor has access to any other CLI tool.
-
-
-So far, I managed to boil down the files to the needed data elements as separate JSON objects using:  type *.json | jq "".items[] | {url: .name, date: .metrics[][0][0], load: .metrics[][0][1]}""
-the result looks like:
-{
-  ""url"": ""page1.htm"",
-  ""date"": 1675292400000,
-  ""load"": 4
-}
-{
-  ""url"": ""page1.htm"",
-  ""date"": 1675292400000,
-  ""load"": 1
-}
-{
-  ""url"": ""page1.htm"",
-  ""date"": 1675292400000,
-  ""load"": 7
-}
-
-however, if I try to wrap it in square brackets (as tutorials suggest) to get a valid JSON,  I get one file with a bunch of arrays starting and ending where they did in the original files.
-I did the homework and am aware of this: combining multiple json files into a single json file with jq filters. Actually, I played around with this for a while before asking. I was thinking that adding curly brackets and a root node again would help, but I haven't found a way where jq wouldn't fail to do anything, noting that most probably the error comes from Windows cmd's quotation-mark usage.
-How can I make this into one JSON instead of as many arrays as many source files?  Thanks!
-","1. For multiple input files, you can create another array around all of them using the --slurp (or -s option), then use map on that:
-jq -s 'map(.items[] | {…})' *.json
-
-Demo
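-With the fields from the question filled in, the slurp variant would look roughly like this (an untested sketch; on Windows CMD the whole filter can sit in plain double quotes because it contains no inner quotes, and you can also pipe type *.json into it as in the question):
-jq -s ""map(.items[] | {url: .name, date: .metrics[][0][0], load: .metrics[][0][1]})"" *.json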
-Or programmatically iterate (e.g.using reduce) over each input (using inputs in combination with the --null-input (or -n) flag):
-jq -n 'reduce inputs as {$items} ([]; . + [$items[] | {…}])' *.json
-
-Demo
-
-2. I'm sorry, I'm afraid I don't know Instana or jq well enough to understand exactly what you need... You have not shown your desired final output file either... However, I do know Batch files well enough!
-The pure Batch file below processes all *.json files and extracts your ""needed data elements"" as shown above. This is a first step towards the right solution, because this Batch file can be modified in any way you need.
-@echo off
-setlocal
-
-for %%f in (*.json) do (
-   set ""url=""
-   for /F ""tokens=2,3 delims=[:,] "" %%a in ('findstr ""name uniqueSessions"" ""%%f""') do (
-      if not defined url (
-         echo ""url"": %%a
-         set ""url=%%a""
-      ) else (
-         echo ""date"": %%a
-         echo ""load"": %%~Nb
-         set ""url=""
-      )
-   )
-)
-
-Output example:
-""url"": ""page1.htm""
-""date"": 1675292400000
-""load"": 4
-""url"": ""page2.htm""
-""date"": 1675292400000
-""load"": 1
-""url"": ""page3.htm""
-""date"": 1675292400000
-""load"": 7
-""url"": ""page4.htm""
-""date"": 1675292400000
-""load"": 3
-""url"": ""page5.htm""
-""date"": 1675292400000
-""load"": 6
-""url"": ""page6.htm""
-""date"": 1675292400000
-""load"": 2
-
-Perhaps if you show us the desired output file, I could complete the solution
-
-3. Adding |jq -s to what you already have should work:
-type *.json | 
-jq "".items[] | {url: .name, date: .metrics[][0][0], load: .metrics[][0][1]}"" |
-jq -s 
-
-A trailing jq -s can do the array wrapping for you if you have a list of json objects like so:
-§ cat input-malformed.json 
-{ ""a"" : 1,
-  ""b"" : 2 }
-{ ""a"" : 11,
-  ""b"" : 22 }
-
-§ cat input-malformed.json | jq -s
-[
-  {
-    ""a"": 1,
-    ""b"": 2
-  },
-  {
-    ""a"": 11,
-    ""b"": 22
-  }
-]
-
-I don't have a Windows machine handy, but the bash equivalent on jq version 1.6 works (where a.json and b.json are copies of your input JSON documents):
-cat a.json b.json | 
-jq "".items[] | {url: .name, date: .metrics[][0][0], load: .metrics[][0][1]}"" |
-jq -s 
-
-",Instana
-"kubectl get ns gives the following namespaces
-communication-prod   Active   69d
-custom-metrics       Active   164d
-default              Active   218d
-kube-node-lease      Active   218d
-kube-public          Active   218d
-kube-system          Active   218d
-notification         Active   191d
-notification-stock   Active   118d
-
-However when I am running the following helm command
-helm install instana-agent \
-  --repo https://agents.instana.io/helm \
-  --namespace instana-agent \
-  --create-namespace \
-  --set agent.key=foo\
-  --set agent.downloadKey=bar \
-  --set agent.endpointHost=ingress-green-saas.instana.io \
-  --set agent.endpointPort=443 \
-  --set cluster.name='communication-engine-prod' \
-  --set zone.name='asia-south1' \
-  instana-agent
-
-I am getting the following error
-Error: INSTALLATION FAILED: rendered manifests contain a resource that already 
-exists. Unable to continue with install: ClusterRole ""instana-agent"" in namespace """" 
-exists and cannot be imported into the current release: invalid ownership metadata; label 
-validation error: missing key ""app.kubernetes.io/managed-by"": must be set to ""Helm""; 
-annotation validation error: missing key ""meta.helm.sh/release-name"": must be set to
-""instana-agent""; annotation validation error: missing key ""meta.helm.sh/release-namespace"": must be set to ""instana-agent""
-
-can anyone point me to the reason why I might be getting the error?
-","1. This looks like you installed the agent using kubectl and then deleted the namespace before trying helm. You need to delete all of the resources that were created in your first installation attempt. The easiest way to do this is to do:
-kubectl delete -f configuration.yaml
-using the same configuration file you used to install the agent (or you can generate a new one from the Instana dashboard if you didn't modify it). You could also use kubectl get clusterroles and kubectl get clusterrolebindings to get the entities that you will need to delete with kubectl delete
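-For example, removing the leftover cluster-scoped resources by name could look like this (a sketch; the resource names come from the error message above, and your cluster may have additional leftovers):
-kubectl delete clusterrole instana-agent
-kubectl delete clusterrolebinding instana-agent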
-
-2. I had a similar issue when trying to install using the Instana operator.
-I had to clean up custom resources such as instana-agent, as well as the clusterrole and clusterrolebinding.
-Then the agent was successfully instantiated.
-",Instana
-"I've been working on implementing distributed tracing in my .NET 8 application using OpenTelemetry. I've referred to the article : https://www.milanjovanovic.tech/blog/introduction-to-distributed-tracing-with-opentelemetry-in-dotnet and I've been successful in viewing the metrics and tracing information using the Jaeger UI.
-However, I'm looking to extend this concept and use Azure Application Insights instead of Jaeger UI. My goal is to interpret the tracing and metrics information in Azure AppInsights.
-Here is the code I've been using to configure OpenTelemetry:
-private static void ConfigureOpenTelemetry(WebApplicationBuilder builder, ConfigurationManager config)
-{
-    var appConfig = builder.Services.BindValidateReturn<DemoCloudServiceOptions>(config);
-    builder.Services.AddOpenTelemetry().UseAzureMonitor();
-    builder.Services.AddOpenTelemetry(appConfig.AppInsightsConnString);
-
-    builder.Services.AddOpenTelemetry()
-    .ConfigureResource(resource => resource.AddService(""MyApp""))
-    .WithMetrics(metrics =>
-    {
-        metrics
-            .AddAspNetCoreInstrumentation()
-            .AddHttpClientInstrumentation();
-
-        metrics.AddMeter(""MyApp"");
-
-        metrics.AddOtlpExporter();
-    })
-    .WithTracing(tracing =>
-    {
-        tracing
-            .AddAspNetCoreInstrumentation()
-            .AddHttpClientInstrumentation()
-            .AddEntityFrameworkCoreInstrumentation();
-
-        tracing.AddOtlpExporter();
-    });
-
-    builder.Logging.AddOpenTelemetry(logging => logging.AddOtlpExporter());
-}
-
-Can anyone please help me here by providing their guidance.Any help would be greatly appreciated.
-","1. I am able to log Open telemetry to Application Insights.
-I have followed this MSDoc to configure Opentelemetry in .NET Core 8 Application.
-Use the configuration in appsettings.json file as mentioned here in my SOThread
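-For reference, a minimal appsettings.json entry that the code below reads could look like this (the connection-string value is a placeholder; use the one from your Application Insights resource):
-{
-  ""APPLICATIONINSIGHTS_CONNECTION_STRING"": ""InstrumentationKey=<your-key>;IngestionEndpoint=https://<region>.in.applicationinsights.azure.com/""
-}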
-My .csproj file:
-<Project Sdk=""Microsoft.NET.Sdk.Web"">
-  <PropertyGroup>
-    <TargetFramework>net8.0</TargetFramework>
-    <Nullable>enable</Nullable>
-    <ImplicitUsings>enable</ImplicitUsings>
-    <ApplicationInsightsResourceId>/subscriptions/b83c1ed3-c5b6-44fb-b5ba-2b83a074c23f/resourceGroups/****/providers/microsoft.insights/components/SampleAppInsights</ApplicationInsightsResourceId>
-    <UserSecretsId>****</UserSecretsId>
-  </PropertyGroup>
-
-  <ItemGroup>
-    <PackageReference Include=""Azure.Monitor.OpenTelemetry.Exporter"" Version=""1.2.0"" />
-    <PackageReference Include=""Microsoft.ApplicationInsights.AspNetCore"" Version=""2.21.0"" />
-    <PackageReference Include=""OpenTelemetry"" Version=""1.8.1"" />
-    <PackageReference Include=""OpenTelemetry.Exporter.Console"" Version=""1.8.1"" />
-    <PackageReference Include=""OpenTelemetry.Extensions.Hosting"" Version=""1.8.1"" />
-    <PackageReference Include=""OpenTelemetry.Instrumentation.AspNetCore"" Version=""1.8.1"" />
-  </ItemGroup>
-</Project>
-
-Thanks @Rahul Rai for the clear explanation.
-My Program.cs file:
-using Azure.Monitor.OpenTelemetry.Exporter;
-using OpenTelemetry;
-using OpenTelemetry.Logs;
-using OpenTelemetry.Resources;
-using OpenTelemetry.Trace;
-using System.Diagnostics;
-
-var builder = WebApplication.CreateBuilder(args);
-
-builder.Services.AddRazorPages();
-var conn = builder.Configuration[""APPLICATIONINSIGHTS_CONNECTION_STRING""];
-
-builder.Logging.ClearProviders()
-    .AddOpenTelemetry(loggerOptions =>
-    {
-        loggerOptions
-            .SetResourceBuilder(ResourceBuilder.CreateDefault().AddService(""MyApp""))         
-            .AddAzureMonitorLogExporter(options =>
-                options.ConnectionString = conn)     
-            .AddConsoleExporter();
-
-        loggerOptions.IncludeFormattedMessage = true;
-        loggerOptions.IncludeScopes = true;
-        loggerOptions.ParseStateValues = true;
-    });
-
-builder.Services.AddApplicationInsightsTelemetry(new Microsoft.ApplicationInsights.AspNetCore.Extensions.ApplicationInsightsServiceOptions
-{
-    ConnectionString = builder.Configuration[""APPLICATIONINSIGHTS_CONNECTION_STRING""]
-});
-
-var app = builder.Build();
-if (!app.Environment.IsDevelopment())
-{
-    app.UseExceptionHandler(""/Error"");
- 
-    app.UseHsts();
-}
-
-app.UseHttpsRedirection();
-app.UseStaticFiles();
-app.UseRouting();
-app.UseAuthorization();
-app.MapRazorPages();
-
-app.Run();
-
-
-Even the below code worked for me.
-
- builder.Services.AddOpenTelemetry()
-    .WithTracing(builder =>
-    {
-        builder.AddAspNetCoreInstrumentation();
-        builder.AddConsoleExporter();
-        builder.SetResourceBuilder(ResourceBuilder.CreateDefault().AddService(""MyApp""));
-        builder.AddAzureMonitorTraceExporter(options =>
-           {
-               options.ConnectionString = conn;
-           });
-        builder.AddConsoleExporter();
-    });
-
-Local Traces:
-Activity.TraceId:            729cc92ac67fe948aaeeaaa250af5431
-Activity.SpanId:             2b2e1eb570dbfe44
-Activity.TraceFlags:         Recorded
-Activity.ActivitySourceName: Microsoft.AspNetCore
-Activity.DisplayName:        GET
-Activity.Kind:               Server
-Activity.StartTime:          2024-05-24T14:51:20.6775264Z
-Activity.Duration:           00:00:00.0011907
-Activity.Tags:
-    server.address: localhost
-    server.port: 7285
-    http.request.method: GET
-    url.scheme: https
-    url.path: /_framework/aspnetcore-browser-refresh.js
-    network.protocol.version: 2
-    user_agent.original: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36 Edg/125.0.0.0
-    http.response.status_code: 200
-
-Application Insights Transaction Search:
-
-Logs:
-
-
-You can see the local telemetry TraceID  is shown in ApplicationInsights as Operation ID.
-
-
-",Jaeger
-"I'm here once again because I can't figure this out. I'm building an orbit simulator and currently working on placing the ship on a hyperbolic trajectory upon entering the SoI of a body. (I'm using patched conics in 2D.) I'm having an issue with the math though, where the orbit is calculating all the correct parameters, but the ship is ending up in the wrong spot.
-I tracked the problem down to the point where the current hyperbolic anomaly (F) is calculated from the current mean anomaly (M). Based on a previous question I asked, I'm using the Newton-Raphson method (based on this) to find F:
-
-for(int i = 0; i < 10; i++) {F = F - (((e * sinh(F)) - F - M) / ((e * cosh(F)) -1));}
-
-The problem is that I'm not getting a symmetrical result from M -> F as from F -> M. A ship that has
-nu0 = -0.5346949277282228
-F0 = 0.04402263120230271
-M0 = 5.793100753021599E-4
-gets
-M = 5.793100753021599E-4 (good)
-F = 0.01027520200339216 (wrong)
-nu = 0.1276522417546593 (also wrong)
-
-It ends up at the wrong point on the orbit and nothing else is right.
-To try to narrow down the problem, I graphed the equations to visualize what they were doing. In this graph, I have both the hyperbolic equation and my method of solving it. The first graph, M from E, does what I would expect: smoothly curves from (-pi, -infinity) to (+pi, +infinity). That's what the proper shape of a hyperbolic orbit is. I would expect the Newton method to give a perfect inverse, going from (-infinity, -pi) to (+pi, +infinity). But that's not what it does, it has a couple weird humps near 0,0, but otherwise goes from (-infinity, -infinity) to (+infinity, +infinity). The asymptote is sloped, not horizontal.
-I also did the same thing with the elliptical case, and it produced a perfect inverse equation, exactly as I expected. But the hyperbolic equivalent does not, and I am completely lost as to why. I've tried numerous different forms of the equation, but they all give this same shape. I've tried different starting guesses and parameters, but none give me the mirrored function I want.
-Am I doing something wrong? Is this actually right and I'm just misleading myself? I've taken calculus but not enough to diagnose this myself. Hopefully it's something simple that I'm doing wrong.
-","1. The Newton-Raphson solver you wrote for Kepler's equation is correct. Still,
-I recommend using a fixed-point verification procedure as
-a test of convergence. This is a very common pattern in
-the implementation of approximate methods.
-The idea is that we run the loop until the solution
-doesn't change anymore. That means we found a fixed point.
-The iterated function applied to the current value
-produces the same value.
-Note that a fixed point might not necessarily be the
-solution - it requires a mathematical proof of about
-the relation between the two, but: a) it is so in a very large
-set of cases; b) it's the best we can do, and c) we can
-verify the equation and make sure it is indeed a solution
-what we found.
-Also, it is impractical and dangerous to check if the
-value is exactly equal to the next value, as in general
-comparison between floating point numbers should not be
-done by x == y but abs(x-y) < eps where eps is
-a very small number. The value of eps is related to
-the number of decimals we want the solution to be
-computed precisely: if eps = 5e-7 it means that x and
-y are equal up to 6 decimals, if eps = 5e-11, they
-agree up to 10 decimals, etc.
-We also need to set an NMAX - the maximum number of
-iterations after which we give up. It can be a large number,
-as in practice if the equation is well-behaved (as this is),
-the number of iterations never gets close to that value.
-With these, your code for the Newton-Raphson method applied
-to Kepler's equation in the hyperbolic case can be written in
-java:
-public static double solveKepler(double e, double M, 
-        double epsFixedPoint, int NMAX, double epsEquation) 
-        throws SolverFailedExeption{
-    double FNext = M,
-        F = FNext + 1000 * epsFixedPoint, // make sure the initially F-FNext > eps
-        i = 0;
-    while(abs(FNext - F) > epsFixedPoint && i < NMAX){
-        F = FNext;
-        FNext = F - (e * sinh(F) - F - M) / (e * cosh(F) -1);
-        i++;
-    }
-    if(abs(M + F - e *sinh(F)) > epsEquation){ // verify the equation, not the fixed point
-        throw new SolverFailedExeption(""Newton-Raphson method for Kepler equation failed"");
-    }
-    return FNext;
-}
-
-public static double solveKepler(double e, double M) throws SolverFailedExeption{
-    return Main.solveKepler(e, M, 5e-7, 10000, 5e-7);
-}
-
-Some might prefer the equivalent implementation of the loop by a for and a break.
-Here's the link for the full code of this example.
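-For illustration, a call with the mean anomaly from the question might look like this (a sketch; the eccentricity e is not given in the question and must come from your orbit, and the F-to-true-anomaly conversion shown is the standard hyperbolic relation tan(nu/2) = sqrt((e+1)/(e-1)) * tanh(F/2)):
-// inside a method that declares throws SolverFailedExeption (the answer's exception type)
-double e = 1.2;                      // example eccentricity; use your orbit's actual value (e > 1)
-double M = 5.793100753021599E-4;     // mean anomaly from the question
-double F = solveKepler(e, M);        // hyperbolic anomaly
-double nu = 2 * Math.atan(Math.sqrt((e + 1) / (e - 1)) * Math.tanh(F / 2)); // true anomaly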
-",Kepler
-"I installed Kepler.g like this:
-npm i kepler.gl
-
-it got added to my package.json:
-""kepler.gl"": ""^2.1.2""
-
-However, if I try to import:
-import keplerGlReducer from ""kepler.gl/reducers"";
-
-I get an error that
-Could not find a declaration file for module 'kepler.gl/reducers'. '/Users/grannyandsmith/web/web-admin/node_modules/kepler.gl/reducers.js' implicitly has an 'any' type.
-  Try `npm install @types/kepler.gl` if it exists or add a new declaration (.d.ts) file containing `declare module 'kepler.gl/reducers';`ts(7016)
-
-I also tried 
-npm install @types/kepler.gl
-
-but as expected it gives me npm ERR! code E404
-npm ERR! 404 Not Found - GET https://registry.npmjs.org/@types%2fkepler.gl - Not found
-How can I fix this?
-Edit:
-tsconfig file:
-{
-  ""compilerOptions"": {
-    ""target"": ""es5"",
-    ""lib"": [
-      ""dom"",
-      ""dom.iterable"",
-      ""esnext""
-    ],
-    ""allowJs"": true,
-    ""skipLibCheck"": true,
-    ""esModuleInterop"": true,
-    ""allowSyntheticDefaultImports"": true,
-    ""strict"": true,
-    ""forceConsistentCasingInFileNames"": true,
-    ""module"": ""esnext"",
-    ""moduleResolution"": ""node"",
-    ""resolveJsonModule"": true,
-    ""isolatedModules"": true,
-    ""noEmit"": true,
-    ""jsx"": ""preserve"",
-    ""typeRoots"": [""types/global.d.ts""]
-  },
-  ""include"": [
-    ""src""
-  ]
-}
-
-","1. If the types are not available for the particular library that you are pulling in. Here are the 2 options.
-
-Add the typings yourself ( this can be a pain as you will have to understand the complete API ). The advantage of this is all the typings will be available. 
-The other option is to create a declarations file *d.ts file where you let TS know, hey I know what I am doing. The disadvantage is that you won't have the typings available and the autocomplete won't work.
-
-@types/index.d.ts ( need to let TS know to find the types here )
-declare module 'kepler.gl/reducers';
-
-
-2. I used a similar approach to the one Sushanth suggested. I used a *.d.ts file and added the following to deal with the TypeScript issues:
-declare module ""kepler.gl"";
-declare module ""kepler.gl/processors"";
-declare module ""kepler.gl/actions"";
-declare module ""kepler.gl/reducers"";
-
-and added ""typeRoots"": [""src/types"", ""node_modules/@types""] in my tsconfig.json file to allow TS look for my custom declarations
-",Kepler
-"We are reviewing Managed Anthos Service Mesh(istio) in GCP, their is no straight forward setup for Lightstep, so we are trying to push traces from envoy to otel collector process and export it to lightstep, the otel deployment config is as below
----
-
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: otel-collector-conf
-  labels:
-    app: opentelemetry
-    component: otel-collector-conf
-data:
-  otel-collector-config: |
-    receivers:
-      zipkin:
-        endpoint: 
-    processors:
-      batch:
-      memory_limiter:
-        # 80% of maximum memory up to 2G
-        limit_mib: 400
-        # 25% of limit up to 2G
-        spike_limit_mib: 100
-        check_interval: 5s
-    extensions:
-      zpages: {}
-      memory_ballast:
-        # Memory Ballast size should be max 1/3 to 1/2 of memory.
-        size_mib: 165
-    exporters:
-      logging:
-        loglevel: debug
-
-      otlp:
-        endpoint: 10.x.x.19:8184
-        insecure: true
-        headers:
-          ""lightstep-access-token"": ""xxx""
-    service:
-      extensions: [zpages, memory_ballast]
-      pipelines:
-        traces:
-          receivers: [zipkin]
-          processors: [memory_limiter, batch]
-          exporters: [otlp]
-
----
-apiVersion: v1
-kind: Service
-metadata:
-  name: otel-collector
-  labels:
-    app: opentelemetry
-    component: otel-collector
-spec:
-  ports:
-  - name: otlp-grpc # Default endpoint for OpenTelemetry gRPC receiver.
-    port: 4317
-    protocol: TCP
-    targetPort: 4317
-  - name: otlp-http # Default endpoint for OpenTelemetry HTTP receiver.
-    port: 4318
-    protocol: TCP
-    targetPort: 4318
-  - name: metrics # Default endpoint for querying metrics.
-    port: 8888
-  - name: zipkin # Default endpoint for OpenTelemetry HTTP receiver.
-    port: 9411
-    protocol: TCP
-    targetPort: 9411
-  selector:
-    component: otel-collector
----
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: otel-collector
-  labels:
-    app: opentelemetry
-    component: otel-collector
-spec:
-  selector:
-    matchLabels:
-      app: opentelemetry
-      component: otel-collector
-  minReadySeconds: 5
-  progressDeadlineSeconds: 120
-  replicas: 1 #TODO - adjust this to your own requirements
-  template:
-    metadata:
-      labels:
-        app: opentelemetry
-        component: otel-collector
-    spec:
-      containers:
-      - command:
-          - ""/otelcol""
-          - ""--config=/conf/otel-collector-config.yaml""
-        image: otel/opentelemetry-collector:latest
-        name: otel-collector
-        resources:
-          limits:
-            cpu: 1
-            memory: 2Gi
-          requests:
-            cpu: 200m
-            memory: 400Mi
-        ports:
-        - containerPort: 55679 # Default endpoint for ZPages.
-        - containerPort: 4317 # Default endpoint for OpenTelemetry receiver.
-        - containerPort: 14250 # Default endpoint for Jaeger gRPC receiver.
-        - containerPort: 14268 # Default endpoint for Jaeger HTTP receiver.
-        - containerPort: 9411 # Default endpoint for Zipkin receiver.
-        - containerPort: 8888  # Default endpoint for querying metrics.
-        volumeMounts:
-        - name: otel-collector-config-vol
-          mountPath: /conf
-      volumes:
-        - configMap:
-            name: otel-collector-conf
-            items:
-              - key: otel-collector-config
-                path: otel-collector-config.yaml
-          name: otel-collector-config-vol
-
-Exposing the otel collector service on 9411 and configuring Anthos Mesh to send traces to the service and export them to Lightstep: the otel pod is up, but I don't see any traces in Lightstep. In fact, I'm not certain whether the input from Envoy is reaching otel at all, as the otel logs are empty.
-apiVersion: v1
-data:
-  mesh: |-
-    extensionProviders:
-    - name: jaeger
-      zipkin:
-        service: zipkin.istio-system.svc.cluster.local
-        port: 9411
-    - name: otel
-      zipkin:
-        service: otel-collector.otel.svc.cluster.local
-        port: 9411
-
-I also deployed a Jaeger all-in-one deployment and sent traces to it, which works fine and I can view the traces in the Jaeger UI. I'm not certain about the otel part.
-Kindly assist.
-","1. Take a look at this link, https://istio.io/latest/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig-ExtensionProvider
-I think you should have your config as follows, otel doesn't listen on port 9411 by default:
-apiVersion: v1
-data:
-  mesh: |-
-    extensionProviders:
-    - name: jaeger
-      zipkin:
-        service: zipkin.istio-system.svc.cluster.local
-        port: 9411
-    - name: otel
-      opencensus:
-        service: otel-collector.otel.svc.cluster.local
-        port: 55678
-
-Tried this out on my cluster today and it works. However, you can only have one tracing tool configured in the Telemetry resource, so I'm only able to use Jaeger or Otel. That config looks like:
-apiVersion: telemetry.istio.io/v1alpha1
-kind: Telemetry
-metadata:
-  name: mesh-default
-  namespace: istio-system
-spec:
-  tracing:
-  - providers:
-    - name: otel
-    randomSamplingPercentage: 100
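-As a side note, the zipkin receiver in the question's collector config is declared with an empty endpoint; if you keep the Zipkin path instead, a typical value would be (a minimal sketch):
-receivers:
-  zipkin:
-    endpoint: 0.0.0.0:9411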
-",LightStep
-"With Opentelemetry becoming the new standard of tracing, and it being vendor-agnostic, how do we then choose a backend vendor for opentelemetry?
-For example, there are currently many vendors that support OpenTelemetry, like GCP Cloud Trace, Datadog, Dynatrace, Lightstep, and Instana. How do you choose a vendor for just OpenTelemetry? Or does it not matter at all, since OpenTelemetry is cloud-agnostic and we can just choose the cheapest one to store our traces?
-","1. I guess the decision would depend on what you already use (or plan to use) for observability. If nothing, then you would probably want to compare existing solutions by parameters (cost would be one of them) important for your business.
-
-2. Disclosure: I'm a developer at Aspecto.
-Some parameters I would consider:
-
-Cost and plan compared to the scale
-Search capabilities: can I search my traces easily and accurately
-Sampling capabilities
-Works well with my infrastructue
-Supported languages
-Can handle high scale
-Good support, responsive
-Smooth UX, clear visualizations
-Data retention
-
-What qualifies a vendor? (list)
-A vendor can be considered ""Support OpenTelemetry"" or ""Implements OpenTelemetry"".
-Support OpenTelemetry:
-
-The vendor must accept the output of the default SDK through one of two mechanisms:
-
-By providing an exporter for the OpenTelemetry Collector and/or the OpenTelemetry SDKs
-By building a receiver for the OpenTelemetry protocol
-
-
-Implements OpenTelemetry:
-
-A vendor with a custom SDK implementation will be listed as ""Implements OpenTelemetry"". If the custom SDK is optional, the vendor can be listed as ""Supports OpenTelemetry"".
-
-",LightStep
-"Before I was using lightstep/opentelemetry-exporter-js, I can use my own exporters and Lightstep exporter at same time.
-import { CollectorTraceExporter } from '@opentelemetry/exporter-collector';
-import { NodeTracerProvider } from '@opentelemetry/node';
-import { BatchSpanProcessor, ConsoleSpanExporter } from '@opentelemetry/tracing';
-import { LightstepExporter } from 'lightstep-opentelemetry-exporter';
-
-const initTracer = () => {
-  const serviceName = 'server-trace-service';
-  const tracerProvider = new NodeTracerProvider({
-    plugins: {
-      http: {
-        enabled: true,
-        path: '@opentelemetry/plugin-http',
-      },
-    },
-  });
-
-  tracerProvider.addSpanProcessor(new BatchSpanProcessor(new ConsoleSpanExporter()));
-  tracerProvider.addSpanProcessor(
-    new BatchSpanProcessor(
-      new CollectorTraceExporter({
-        serviceName,
-      })
-    )
-  );
-  tracerProvider.addSpanProcessor(
-    new BatchSpanProcessor(
-      new LightstepExporter({
-        serviceName,
-        token: 'myToken',
-      })
-    )
-  );
-
-  tracerProvider.register();
-};
-
-However, I just saw that lightstep/opentelemetry-exporter-js is deprecated and replaced by lightstep/otel-launcher-node.
-I checked its source code and the demo; it looks like it is a ""framework"" on top of OpenTelemetry.
-const {
-  lightstep,
-  opentelemetry,
-} = require('lightstep-opentelemetry-launcher-node');
-
-const sdk = lightstep.configureOpenTelemetry({
-  accessToken: 'YOUR ACCESS TOKEN',
-  serviceName: 'locl-ex',
-});
-
-sdk.start().then(() => {
-  const tracer = opentelemetry.trace.getTracer('otel-node-example');
-  const span = tracer.startSpan('test-span');
-  span.end();
-
-  opentelemetry.trace.getTracerProvider().getActiveSpanProcessor().shutdown();
-});
-
-Is it possible to simply use it as one of the OpenTelemetry exporters?
-","1. lightstep-opentelemetry-launcher-node basically bundles the required things for you for easier configuration so this is not an exporter. If you were to simply replace the ""LightstepExporter"" with ""OpenTelemetry Collector Exporter"" in your code you can simply do this
-  import { CollectorTraceExporter } from '@opentelemetry/exporter-collector';
-
-  tracerProvider.addSpanProcessor(
-    new BatchSpanProcessor(
-      new CollectorTraceExporter({
-        url: 'YOUR_DIGEST_URL',
-        headers: {
-          'Lightstep-Access-Token': 'YOUR_TOKEN'
-        }
-      })
-    )
-  );
-
-The default YOUR_DIGEST_URL from lightstep/otel-launcher-node is https://ingest.lightstep.com:443/api/v2/otel/trace
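-Putting that together with the setup from the question, the old LightstepExporter block could be replaced roughly like this (a sketch reusing the question's tracerProvider, serviceName, and token):
-tracerProvider.addSpanProcessor(
-  new BatchSpanProcessor(
-    new CollectorTraceExporter({
-      serviceName,
-      url: 'https://ingest.lightstep.com:443/api/v2/otel/trace',
-      headers: { 'Lightstep-Access-Token': 'myToken' }
-    })
-  )
-);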
-",LightStep
-"I want to send log events to Loggly as JSON objects with parameterized string messages. Our project currently has a lot of code that looks like this:
-String someParameter = ""1234"";
-logger.log(""This is a log message with a parameter {}"", someParameter);
-
-We're currently using Logback as our SLF4J backend, and Logback's JsonLayout to serialize our ILoggingEvent objects into JSON. Consequently, by the time our log events are shipped to Loggly, they look like this:
-{
-    ""message"": ""This is a log message with a parameter 1234"",
-    ""level"": INFO,
-    ....
-}
-
-While this does work, it sends a different message string for every value of someParameter, which renders Loggly's automatic filters next to useless.
-Instead, I'd like to have a Layout that creates JSON that looks like this:
-{
-    ""message"": ""This is a log message with a parameter {}"",
-    ""level"": INFO,
-    ""parameters"": [
-        ""1234""
-    ]
-}
-
-This format would allow Loggly to group all log events with the message This is a log message with a parameter together, regardless of the value of someParameter.
-It looks like Logstash's KV filter does something like this - is there any way to accomplish this task with Logback, short of writing my own layout that performs custom serialization of the ILoggingEvent object?
-","1. There is a JSON logstash encoder for Logback, logstash-logback-encoder
-
-2. In my case I was trying to log execution times, so I created a POJO called ExecutionTime with name, method, class, and duration fields.
-I was then able to create it:
-ExecutionTime time = new ExecutionTime(""Controller Hit"", methodName, className, sw.getTotalTimeMillis());
-
-For logging I then used:
-private final Logger logger = LoggerFactory.getLogger(this.getClass());
-logger.info(append(""metric"", time), time.toString());
-
-Make sure you have: 
-import static net.logstash.logback.marker.Markers.append;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-This will log something like this:
-{  
-   ""ts"":""2017-02-16T07:41:36.680-08:00"",
-   ""msg"":""ExecutionTime [name=Controller Hit, method=setupSession, className=class com.xxx.services.controllers.SessionController, duration=3225]"",
-   ""logger"":""com.xxx.services.metrics.ExecutionTimeLogger"",
-   ""level"":""INFO"",
-   ""metric"":{  
-      ""name"":""Controller Hit"",
-      ""method"":""setupSession"",
-      ""className"":""class com.xxx.services.controllers.SessionController"",
-      ""duration"":3225
-   }
-}
-
-Might be a different set up as I was using logback-spring.xml to output my logs to json:
-<?xml version=""1.0"" encoding=""UTF-8""?>
-<configuration>
-    <include resource=""org/springframework/boot/logging/logback/base.xml""/>
-    <property name=""PROJECT_ID"" value=""my_service""/>
-    <appender name=""FILE"" class=""ch.qos.logback.core.rolling.RollingFileAppender"">
-        <File>app/logs/${PROJECT_ID}.json.log</File>
-        <encoder class=""net.logstash.logback.encoder.LogstashEncoder"">
-            <fieldNames>
-                <timestamp>ts</timestamp>
-                <message>msg</message>
-                <thread>[ignore]</thread>
-                <levelValue>[ignore]</levelValue>
-                <logger>logger</logger>
-                <version>[ignore]</version>
-            </fieldNames>
-        </encoder>
-        <rollingPolicy class=""ch.qos.logback.core.rolling.FixedWindowRollingPolicy"">
-            <maxIndex>10</maxIndex>
-            <FileNamePattern>app/logs/${PROJECT_ID}.json.log.%i</FileNamePattern>
-        </rollingPolicy>
-        <triggeringPolicy class=""ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy"">
-            <MaxFileSize>20MB</MaxFileSize>
-        </triggeringPolicy>
-    </appender>
-    <logger name=""com.xxx"" additivity=""false"" level=""DEBUG"">
-        <appender-ref ref=""FILE""/>
-        <appender-ref ref=""CONSOLE""/>
-    </logger>
-    <root level=""WARN"">
-        <appender-ref ref=""FILE""/>
-    </root>
-</configuration>
-
-
-3. You could use a Mapped Diagnostic Context to set a stamp for each of those types of log messages that you could then filter on once in Loggly.
-According to the source of JsonLayout, the stamp is stored as a separate value in the JSON.
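-A rough sketch of that approach (the key name is arbitrary):
-import org.slf4j.MDC;
-
-MDC.put(""msgTemplate"", ""log message with a parameter"");   // stable stamp to filter on in Loggly
-logger.info(""This is a log message with a parameter {}"", someParameter);
-MDC.remove(""msgTemplate"");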
-",Loggly
-"I'm using fluent-bit 2.1.4 in an AWS EKS cluster to ship container logs to loggly. When given properly formatted json in the 'log' field, loggly will parse it out so the fields can be easily used to filter, search, generate metrics, and some other nice things.
-The trouble is, when shipping logs w/ fluent-bit, it prepends some values by default that are just raw text - they're not json key/value pairs. So now the 'log' field looks like this:
-""log"":""2023-05-31T12:11:40.459220575Z stdout F {<properly-formatted json>}
-I have confirmed that when I manually tail the container logs myself, those values before the properly formatted JSON aren't there. In reading about inputs, outputs, parsers, and filters in fluent-bit, everything I might use to remove these values seems to assume you're working on the JSON-formatted part of the log line (i.e. it expects the field(s) you want to alter or remove to be addressable with a key).
-How can I get rid of the parts of the log line here that are not json?
-Here's my configuration, taken from a running fluent-bit deployment using kubectl describe configmap:
-====
-custom_parsers.conf:
-----
-[PARSER]
-    Name        docker-local
-    Format      json
-    Time_Key    asctime
-    Time_Format %FT%T.%L%Z
-    Time_Keep   Off
-
-fluent-bit.conf:
-----
-[SERVICE]
-    Flush         1
-    Log_Level     info
-    Daemon        off
-    Parsers_File  custom_parsers.conf
-    HTTP_Server   Off
-
-[INPUT]
-    Name              tail
-    Tag               kube.*
-    Path              /var/log/containers/*.log
-    Parser            docker-local
-    multiline.parser  docker
-    DB                /var/log/flb_kube.db
-    Mem_Buf_Limit     512MB
-    Skip_Long_Lines   On
-    Refresh_Interval  10
-    Ignore_Older      10m
-
-[FILTER]
-    Name                kubernetes
-    Match               kube.*
-    Kube_URL            https://kubernetes.default.svc.cluster.local:443
-    Merge_Log           Off
-    Keep_Log            Off
-    K8S-Logging.Exclude Off
-    K8S-Logging.Parser  Off
-
-[OUTPUT]
-    Name             http
-    Match            *
-    Host             logs-01.loggly.com
-    Port             443
-    Tls              On
-    URI              /bulk/<token>/tag/testing/
-    Format           json_lines
-    Json_Date_Key    timestamp
-    Json_Date_Format iso8601
-    Retry_Limit      False
-
-","1. After reading more docs & a lot of trial and error, I came up with the following solution, which works perfectly:
-    [SERVICE]
-        Flush         1
-        Log_Level     info
-        Daemon        off
-        Parsers_File  custom_parsers.conf
-        HTTP_Server   On
-        HTTP_Listen   0.0.0.0
-        HTTP_Port     2020
-        Health_Check  On
-    [INPUT]
-        Name              tail
-        Path              /var/log/containers/*.log
-        Exclude_Path      /var/log/containers/fluent*
-        DB                /var/log/flb_kube.db
-        Tag               kube.*
-        Mem_Buf_Limit     512MB
-        Skip_Long_Lines   On
-        Refresh_Interval  10
-        Ignore_Older      10m
-    [FILTER]
-        Name parser
-        Match *
-        Key_name log
-        Parser custom-parser
-    [FILTER]
-        Name         parser
-        Parser       docker
-        Match        *
-        Key_Name     log
-        Reserve_Data Off
-        Preserve_Key Off
-    [FILTER]
-        Name                kubernetes
-        Match               kube.*
-        Kube_URL            https://kubernetes.default.svc.cluster.local:443
-        Merge_Log           On
-        Keep_Log            On
-        Merge_Log_Trim      On
-        K8S-Logging.Exclude On
-        K8S-Logging.Parser  On
-    [OUTPUT]
-        Name             http
-        Match            *
-        Host             ${loggly_host}
-        Port             443
-        Tls              On
-        URI              ${loggly_uri}
-        Format           json_lines
-        Json_Date_Key    timestamp
-        Json_Date_Format iso8601
-        Retry_Limit      False
-    [PARSER]
-        Name custom-parser
-        Format regex
-        Regex ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<log>.*)$
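-        # the regex above splits the containerd/CRI log prefix (<time> <stream> <logtag>) from the JSON payload, captured as <log>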
-        Time_Key    time
-        Time_Format %Y-%m-%dT%H:%M:%S.%L%z
-    [PARSER]
-        Name docker
-        Format json
-        Time_key time
-        Time_Format %Y-%m-%dT%H:%M:%S.%L %z
-
-As a bonus, this also resolved another issue where livenessProbe and readinessProbe were failing, causing the fluent-bit pods to be restarted when they timed out.
-",Loggly
-"I'm having trouble getting my Spark Application to ignore Log4j, in order to use Logback.  One of the reasons i'm trying to use logback, is for the loggly appender it supports.
-I have the following dependencies and exclusions in my pom file.  (versions are in my dependency manager in main pom library.)
-<dependency>
-        <groupId>org.apache.spark</groupId>
-        <artifactId>spark-core_2.12</artifactId>
-        <version>${spark.version}</version>
-        <scope>provided</scope>
-        <exclusions>
-            <exclusion>
-                <groupId>org.slf4j</groupId>
-                <artifactId>slf4j-log4j12</artifactId>
-            </exclusion>
-            <exclusion>
-                <groupId>log4j</groupId>
-                <artifactId>log4j</artifactId>
-            </exclusion>
-        </exclusions>            
-    </dependency>
-    
-    <dependency>
-        <groupId>ch.qos.logback</groupId>
-        <artifactId>logback-classic</artifactId>
-        <scope>test</scope>
-    </dependency>
-    
-    <dependency>
-        <groupId>ch.qos.logback</groupId>
-        <artifactId>logback-core</artifactId>           
-    </dependency>
-    
-    <dependency>
-        <groupId>org.logback-extensions</groupId>
-        <artifactId>logback-ext-loggly</artifactId>         
-    </dependency>
-    
-    <dependency>
-        <groupId>org.slf4j</groupId>
-        <artifactId>log4j-over-slf4j</artifactId>           
-    </dependency>    
-
-I have referenced these two articles:
-Separating application logs in Logback from Spark Logs in log4j
-Configuring Apache Spark Logging with Scala and logback
-I've first tried using (when running spark-submit):
---conf ""spark.driver.userClassPathFirst=true"" 
---conf ""spark.executor.userClassPathFirst=true""
-but receive the error
-    Exception in thread ""main"" java.lang.LinkageError: loader constraint violation: when resolving method ""org.slf4j.impl.StaticLoggerBinder.ge
-tLoggerFactory()Lorg/slf4j/ILoggerFactory;"" the class loader (instance of org/apache/spark/util/ChildFirstURLClassLoader) of the current cl
-ass, org/slf4j/LoggerFactory, and the class loader (instance of sun/misc/Launcher$AppClassLoader) for the method's defining class, org/slf4
-j/impl/StaticLoggerBinder, have different Class objects for the type org/slf4j/ILoggerFactory used in the signature      
-
-I would like to get it working with the above, but then I also looked at trying the below:
---conf ""spark.driver.extraClassPath=$libs"" 
---conf ""spark.executor.extraClassPath=$libs""
-but since I'm passing my uber jar to spark-submit both locally AND on an Amazon EMR cluster, I really can't specify a library file location that is local to my machine. Since the uber jar contains the files, is there a way for it to use those files? Am I forced to copy these libraries to the master/nodes on the EMR cluster when the Spark app finally runs from there?
-The first approach, using userClassPathFirst, seems like the best route though.
-","1. So I solved the issue and had several problems going on.
-So in order to get Spark to allow logback to work, the solution that worked for me was from a combination of items from the articles i posted above, and in addition a cert file problem.
-The cert file i was using to pass into spark-submit was incomplete and overriding the base truststore certs. This was causing a problem SENDING Https messages to Loggly.
-Part 1 change:
-Update maven to shade org.slf4j (as stated in an answer by @matemaciek)
-      </dependencies>
-         ...
-         <dependency>
-            <groupId>ch.qos.logback</groupId>
-            <artifactId>logback-classic</artifactId>
-            <version>1.2.3</version>                
-        </dependency>
-                
-        <dependency>
-            <groupId>ch.qos.logback</groupId>
-            <artifactId>logback-core</artifactId>
-            <version>1.2.3</version>
-        </dependency>
-        
-        <dependency>
-            <groupId>org.logback-extensions</groupId>
-            <artifactId>logback-ext-loggly</artifactId>
-            <version>0.1.5</version>
-            <scope>runtime</scope>
-        </dependency>
-
-        <dependency>
-            <groupId>org.slf4j</groupId>
-            <artifactId>log4j-over-slf4j</artifactId>
-            <version>1.7.30</version>
-        </dependency>
-    </dependencies>
-
-    <build>
-        <plugins>
-            <plugin>
-                <groupId>org.apache.maven.plugins</groupId>
-                <artifactId>maven-shade-plugin</artifactId>
-                <version>3.2.1</version>
-                <executions>
-                    <execution>
-                        <phase>package</phase>
-                        <goals>
-                            <goal>shade</goal>
-                        </goals>
-                    </execution>
-                </executions>
-                <configuration>
-                    <transformers>
-                        <transformer
-                                implementation=""org.apache.maven.plugins.shade.resource.ManifestResourceTransformer"">
-                            <manifestEntries>
-                                <Main-Class>com.TestClass</Main-Class>
-                            </manifestEntries>
-                        </transformer>
-                    </transformers>
-                    <relocations>
-                        <relocation>
-                            <pattern>org.slf4j</pattern>
-                            <shadedPattern>com.shaded.slf4j</shadedPattern>
-                        </relocation>
-                    </relocations>
-                </configuration>
-            </plugin>
-        </plugins>
-    </build>
-
-Part 1a: the logback.xml
-<configuration debug=""true"">
-    <appender name=""logglyAppender"" class=""ch.qos.logback.ext.loggly.LogglyAppender"">
-        <endpointUrl>https://logs-01.loggly.com/bulk/TOKEN/tag/TAGS/</endpointUrl>
-        <pattern>${hostName} %d{yyyy-MM-dd HH:mm:ss,SSS}{GMT} %p %t %c %M - %m%n</pattern>
-    </appender>
-    <appender name=""STDOUT"" class=""ch.qos.logback.core.ConsoleAppender"">
-        <encoder>
-          <pattern>${hostName} %d{yyyy-MM-dd HH:mm:ss,SSS}{GMT} %p %t %c %M - %m%n</pattern>
-        </encoder>
-    </appender>
-    <root level=""info"">
-        <appender-ref ref=""logglyAppender"" />
-        <appender-ref ref=""STDOUT"" />
-    </root>
-</configuration> 
-
-Part 2 change: The MainClass
-import org.slf4j.*;
-
-public class TestClass {
-
-    static final Logger log = LoggerFactory.getLogger(TestClass.class);
-
-    public static void main(String[] args) throws Exception {
-        
-        log.info(""this is a test message"");
-    }
-}
-
-Part 3 change:
-I was submitting the Spark application as follows (example):
-spark-submit --deploy-mode client --class com.TestClass --conf ""spark.executor.extraJavaOptions=-Djavax.net.ssl.trustStore=c:/src/testproject/rds-truststore.jks -Djavax.net.ssl.trustStorePassword=changeit"" --conf ""spark.driver.extraJavaOptions=-Djavax.net.ssl.trustStore=c:/src/testproject/rds-truststore.jks -Djavax.net.ssl.trustStorePassword=changeit"" com/target/testproject-0.0.1.jar 
-
-So the above spark-submit failed on an HTTPS certification problem (when Loggly was being contacted to send the message to the Loggly service) because the rds-truststore.jks overrode the base certs without containing all of them. I changed this to use the cacerts store, which had all the certs it needed.
-There were no more errors at the Loggly step when submitting like this:
-spark-submit --deploy-mode client --class com.TestClass --conf ""spark.executor.extraJavaOptions=-Djavax.net.ssl.trustStore=c:/src/testproject/cacerts -Djavax.net.ssl.trustStorePassword=changeit"" --conf ""spark.driver.extraJavaOptions=-Djavax.net.ssl.trustStore=c:/src/testproject/cacerts -Djavax.net.ssl.trustStorePassword=changeit"" com/target/testproject-0.0.1.jar 
-
-
-2. You have to use, in the Spark opts, -Dspark.executor.extraJavaOptions=-Dlogback.configurationFile=/spark/logback/logback.xml
-In logback.xml you should have settings for logback.
-",Loggly
-"I'm trying to use data streams and index templates in logstash v7.17
-What is the right elasticsearch output configuration to achieve this?
-Option 1 Using data_stream in the template -> FAILS
-    output {
-      elasticsearch {
-        hosts => [""https://elasticsearch-master:9200""]
-        index => ""microservice-%{+YYYY.MM.dd}""
-        template => ""/usr/share/logstash/templates/microservices.json""
-        # template_overwrite => false
-        template_name => ""microservices""
-      }    
-    }
-
-Content of /usr/share/logstash/templates/microservices.json:
-    {
-      ""index_patterns"": ""microservice-*"",
-      ""template"": {
-        ""settings"" : {
-            ""index"" : {
-              ""number_of_shards"" : ""1"",
-              ""number_of_replicas"" : ""1""
-            }
-          },
-        ""mappings"" : {
-            ""properties"" : {
-              ""@timestamp"" : {
-                ""type"" : ""date""
-              },
-              ""@version"" : {
-                ""type"" : ""keyword""
-              },
-              ""host"" : {
-                ""type"" : ""keyword""
-              },
-              ""level"" : {
-                ""type"" : ""keyword""
-              },
-              ""service"" : {
-                ""type"" : ""keyword""
-              },
-              ""type"" : {
-                ""type"" : ""keyword""
-              }
-            }
-          }
-      },
-      ""data_stream"" : {
-          ""hidden"" : false
-      }
-    }
-
-Logstash logs (debug mode):
-18:37:34.459 [[.monitoring-logstash]-pipeline-manager] INFO  logstash.outputs.elasticsearchmonitoring - Config is not compliant with data streams. `data_stream => auto` resolved to `false`
-18:37:34.470 [Ruby-0-Thread-14: :1] INFO  logstash.outputs.elasticsearchmonitoring - Config is not compliant with data streams. `data_stream => auto` resolved to `false`
-18:37:34.564 [[.monitoring-logstash]-pipeline-manager] WARN  logstash.javapipeline - 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary
-18:37:34.666 [[main]-pipeline-manager] WARN  logstash.outputs.elasticsearch - Restored connection to ES instance {:url=>""https://elastic:xxxxxx@elasticsearch-master:9200/""}
-18:37:34.678 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Elasticsearch version determined (7.16.3) {:es_version=>7}
-18:37:34.737 [[main]-pipeline-manager] WARN  logstash.outputs.elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
-18:37:34.850 [[main]-pipeline-manager] DEBUG logstash.outputs.elasticsearch - Not eligible for data streams because ecs_compatibility is not enabled. Elasticsearch data streams require that events adhere to the Elastic Common Schema. While `ecs_compatibility` can be set for this individual Elasticsearch output plugin, doing so will not fix schema conflicts caused by upstream plugins in your pipeline. To avoid mapping conflicts, you will need to use ECS-compatible field names and datatypes throughout your pipeline. Many plugins support an `ecs_compatibility` mode, and the `pipeline.ecs_compatibility` setting can be used to opt-in for all plugins in a pipeline. 
-18:37:34.850 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Config is not compliant with data streams. `data_stream => auto` resolved to `false`
-18:37:35.059 [Ruby-0-Thread-17: :1] INFO  logstash.outputs.elasticsearch - Using mapping template from {:path=>""/usr/share/logstash/templates/microservices.json""}
-18:37:35.067 [Ruby-0-Thread-17: :1] DEBUG logstash.outputs.elasticsearch - Attempting to install template {:template=>{""index_patterns""=>""microservice-*"", ""template""=>{""settings""=>{""index""=>{""number_of_shards""=>""1"", ""number_of_replicas""=>""1"", ""refresh_interval""=>""5s""}}, ""mappings""=>{""properties""=>{""@timestamp""=>{""type""=>""date""}, ""@version""=>{""type""=>""keyword""}, ""host""=>{""type""=>""text"", ""fields""=>{""keyword""=>{""type""=>""keyword"", ""ignore_above""=>256}}}, ""level""=>{""type""=>""keyword""}, ""service""=>{""type""=>""keyword""}, ""type""=>{""type""=>""keyword""}}}}, ""data_stream""=>{""hidden""=>false}}}
-18:37:35.135 [[main]-pipeline-manager] INFO  logstash.javapipeline - Starting pipeline {:pipeline_id=>""main"", ""pipeline.workers""=>1, ""pipeline.batch.size""=>125, ""pipeline.batch.delay""=>50, ""pipeline.max_inflight""=>125, ""pipeline.sources""=>[""/usr/share/logstash/pipeline/logstash.conf"", ""/usr/share/logstash/pipeline/uptime.conf""], :thread=>""#<Thread:0x38ca1ed0@/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:54 run>""}
-18:37:35.136 [[.monitoring-logstash]-pipeline-manager] INFO  logstash.javapipeline - Starting pipeline {:pipeline_id=>"".monitoring-logstash"", ""pipeline.workers""=>1, ""pipeline.batch.size""=>2, ""pipeline.batch.delay""=>50, ""pipeline.max_inflight""=>2, ""pipeline.sources""=>[""monitoring pipeline""], :thread=>""#<Thread:0x3d81d305 run>""}
-18:37:35.346 [Ruby-0-Thread-17: :1] INFO  logstash.outputs.elasticsearch - Installing Elasticsearch template {:name=>""microservices""}
-18:37:35.677 [Ruby-0-Thread-17: :1] ERROR logstash.outputs.elasticsearch - Failed to install template {:message=>""Got response code '400' contacting Elasticsearch at URL 'https://elasticsearch-master:9200/_template/microservices'"", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :backtrace=>[""/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client/manticore_adapter.rb:84:in `perform_request'"", ""/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:324:in `perform_request_to_url'"", ""/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:311:in `block in perform_request'"", ""/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:398:in `with_connection'"", ""/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:310:in `perform_request'"", ""/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:318:in `block in Pool'"", ""/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:408:in `template_put'"", ""/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:85:in `template_install'"", ""/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/template_manager.rb:29:in `install'"", ""/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/template_manager.rb:17:in `install_template'"", ""/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch.rb:494:in `install_template'"", ""/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch.rb:318:in `finish_register'"", ""/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch.rb:283:in `block in register'"", ""/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/plugin_mixins/elasticsearch/common.rb:149:in `block in after_successful_connection'""]}
-
-
-Option 2 Using data_streams in the output -> FAILS
-    output {
-      elasticsearch {
-        hosts => [""https://elasticsearch-master:9200""]
-        data_stream => true
-        #data_stream_type => ""logs""
-        #data_stream_dataset => ""microservices""
-        #data_stream_namespace => """"    
-        index => ""microservice-%{+YYYY.MM.dd}""
-        template => ""/usr/share/logstash/templates/microservices.json""
-        # template_overwrite => false
-        # ecs_compatibility => ""v1""
-        template_name => ""microservices""
-      }    
-    }
-
-The logs are:
-18:41:47.356 [[main]-pipeline-manager] ERROR logstash.outputs.elasticsearch - Invalid data stream configuration, following parameters are not supported: {""template""=>""/usr/share/logstash/templates/microservices.json"", ""template_name""=>""microservices"", ""index""=>""microservice-%{+YYYY.MM.dd}""}
-18:41:47.357 [Ruby-0-Thread-16: :1] ERROR logstash.outputs.elasticsearch - Invalid data stream configuration, following parameters are not supported: {""template""=>""/usr/share/logstash/templates/microservices.json"", ""template_name""=>""microservices"", ""index""=>""microservice-%{+YYYY.MM.dd}""}
-18:41:47.400 [[main]-pipeline-manager] ERROR logstash.javapipeline - Pipeline error {:pipeline_id=>""main"", :exception=>#<LogStash::ConfigurationError: Invalid data stream configuration: [""template"", ""template_name"", ""index""]>, :backtrace=>[""/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/data_stream_support.rb:68:in `check_data_stream_config!'"", ""/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/data_stream_support.rb:33:in `data_stream_config?'"", ""/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch.rb:296:in `register'"", ""org/logstash/config/ir/compiler/OutputStrategyExt.java:131:in `register'"", ""org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:68:in `register'"", ""/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:232:in `block in register_plugins'"", ""org/jruby/RubyArray.java:1821:in `each'"", ""/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:231:in `register_plugins'"", ""/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:589:in `maybe_setup_out_plugins'"", ""/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:244:in `start_workers'"", ""/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:189:in `run'"", ""/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:141:in `block in start'""], ""pipeline.sources""=>[""/usr/share/logstash/pipeline/logstash.conf"", ""/usr/share/logstash/pipeline/uptime.conf""], :thread=>""#<Thread:0x98bdee3 run>""}
-
-
-Option 3) Manually -> Works
-First, create a index template manually via API call:
-PUT _index_template/microservices using /usr/share/logstash/templates/microservices.json
-Then:
-    output {
-      elasticsearch {
-        hosts => [""https://elasticsearch-master:9200""]
-        index => ""microservice-test""
-        action => ""create""
-      }    
-    }
-
-But I don't want to do this manual step. I want to use the logstash output options to manage data_stream + index names + index templates.
-","1. When using data_stream in your elasticsearch output, you cannot specify any of index, template or template_name since data stream have a specific naming scheme composed of a type, a dataset and a namespace.
-In your case, the type seems to be microservice (if not specified it's logs by default), the default dataset is generic and the default namespace is default.
-So if your elasticsearch output looks like this...
-output {
-  elasticsearch {
-    hosts => [""https://elasticsearch-master:9200""]
-    data_stream => true
-    data_stream_type => ""microservice""
-  }    
-}
-
-...your data is going to be stored in a data stream called microservice-generic-default and that's going to match your index template matching microservice-*, so you'd be good to go.
-You just need to make sure to create the index template in advance with the following command, because Logstash only supports the legacy index templates, which don't support data streams:
-PUT _index_template/microservices
-{
-  ""index_patterns"": ""microservice-*"",
-  ""template"": {
-    ""settings"" : {
-        ""index"" : {
-          ""number_of_shards"" : ""1"",
-          ""number_of_replicas"" : ""1""
-        }
-      },
-    ""mappings"" : {
-        ""properties"" : {
-          ""@timestamp"" : {
-            ""type"" : ""date""
-          },
-          ""@version"" : {
-            ""type"" : ""keyword""
-          },
-          ""host"" : {
-            ""type"" : ""keyword""
-          },
-          ""level"" : {
-            ""type"" : ""keyword""
-          },
-          ""service"" : {
-            ""type"" : ""keyword""
-          },
-          ""type"" : {
-            ""type"" : ""keyword""
-          }
-        }
-      }
-  },
-  ""data_stream"" : {
-      ""hidden"" : false
-  }
-}
-
-After creating this index template, you can run your Logstash pipeline and it will work.
-",Logstash
-"When the uniface product creates a log line it looks as follows:
-{""application"":""space_ship"",""platform"":""MSW"",""version"":""10.4.02.045"",""user"":""pluto"",""hostname"":""sun.universe.com"",""pid"":""9316"",""timestamp"":""2024-05-27T09:32:47.07"",""level"":""info"",""message"":""Hello Moon""}
-
-This shows all possible fields. The fields which are redundant can be removed by changing the settings in uniface. The only field which cannot be filtered out is ""message"".
-When I use logstash to push the data forward to elastic services the ""message"" field of the logstash output contains the complete JSON, making it impossible to filter in Kibana on the different fields of the uniface output.
-When using FileBeat the same problem can be observed.
-I'm looking for a way to pass through the JSON in a way that Kibana shows the fields in the JSON as separate fields to allow filtering.
-How can I achieve this?
-Thanks for help
-Jasper de Keijzer
-I used logstash with the plain text output of uniface but was missing the severity level, which is available in the JSON logging output of uniface. So the JSON output is needed; however, filtering in Kibana became impossible since the complete JSON log line is put into the message field.
-FileBeat shows the same issue.
-","1. The problem has been solved by adding a filter to the LogStash.conf file.
-The complete configuration file looks as follows:
-
-
-input {
-  file {
-    path => ""C:/usys91/uniface4gl/log/ide.json""
-  }
-}
-
-filter {
-  json {
-    source => ""message""
-  }
-}
-
-output {
-  elasticsearch {
-    hosts => [""http://localhost:9200""]
-  }
-}
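-
-Since the uniface payload itself contains a message field, it is also worth knowing about the json filter's target option, which puts the parsed fields under a sub-object instead of the event root and avoids field collisions; an untested variation of the filter above:
-filter {
-  json {
-    source => ""message""
-    target => ""uniface""
-  }
-}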
-
-
-
-",Logstash
-"I need to use and in logstash filter I'm doing something like this
-if ""domain"" in [message] and ""[100]"" in [message] { drop { } } but its not working getting configuration error 
-
-I also tried doing if ""domain"" and ""[100]"" in [message] { drop { } } but its not dropping any logs.
-What am I doing wrong?
-","1. you have to use ( ) when use and.or
-if ([message] == ""domain"" and [message] == ""100"") {
-   drop { }
-}
-
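-If the intent is a substring match on the raw message (as in the question) rather than exact equality, Logstash conditionals also support combining the in operator with and; as an untested sketch, something along these lines should be accepted too:
-if (""domain"" in [message] and ""[100]"" in [message]) {
-  drop { }
-}
-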
-
-2. How to use an and condition in a logstash if statement:
-filter {
-  mutate {
-    remove_field => [""@timestamp"",""@version"",""tags""]
-  }
-  if ![customerId] {
-    drop { }
-  }
-}
-
-",Logstash
-"Here is my issue concerning the pipelines.yml file. Firstly, i am using Elasticsearch 6.6 with Logstash 6.2.2. Both are installed in a VM into my own Google Cloud account (not this that ELK provides, but just in my own hosting in my GCP account). There i have 3 folders where log files from IoT devices come and just want to injest them simultaneusly in 3 corresponding indexes, so i 've made a pipelines.yml file inside the logstash/config path, with the following content:
--pipeline.id: pipeline1
- path.config: ""/config/p1/logstash-learning.conf""
- pipeline.workers: 1
--pipeline.id: pipeline2
- path.config: ""/config/p2/logstash-groundtruth.conf""
- pipeline.workers: 1
--pipeline.id: pipeline3
- path.config: ""/config/p3/logstash-fieldtest.conf""
- pipeline.workers: 1
-
-So, when I run Logstash with the command ./bin/logstash (with this command we tell Logstash to load the default pipelines.yml file, right?), I get the error message below and I cannot figure out why this happens. Note that pipelines.yml is fully accessible.
-jruby: warning: unknown property jruby.regexp.interruptible
-Sending Logstash's logs to /home/evangelos/logstash-6.2.2/logs which is now configured via log4j2.properties
-[2019-12-17T16:36:43,877][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>""netflow"", :directory=>""/home/evangelos/logstash-6.2.2/modules/netflow/configuration""}
-[2019-12-17T16:36:43,933][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>""fb_apache"", :directory=>""/home/evangelos/logstash-6.2.2/modules/fb_apache/configuration""}
-ERROR: Failed to read pipelines yaml file. Location: /home/evangelos/logstash-6.2.2/config/pipelines.yml
-usage:
-  bin/logstash -f CONFIG_PATH [-t] [-r] [] [-w COUNT] [-l LOG]
-  bin/logstash --modules MODULE_NAME [-M ""MODULE_NAME.var.PLUGIN_TYPE.PLUGIN_NAME.VARIABLE_NAME=VALUE""] [-t] [-w COUNT] [-l LOG]
-  bin/logstash -e CONFIG_STR [-t] [--log.level fatal|error|warn|info|debug|trace] [-w COUNT] [-l LOG]
-  bin/logstash -i SHELL [--log.level fatal|error|warn|info|debug|trace]
-  bin/logstash -V [--log.level fatal|error|warn|info|debug|trace]
-  bin/logstash --help
-[2019-12-17T16:36:45,347][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateException: org.jruby.exceptions.RaiseException: (SystemExit) exit
-
-","1. Finally, with the use of http://www.yamllint.com/ online yaml tester that the right syntax of the pipelines.yml was the following:
--
-pipeline.id: pipeline1
- path.config: ""/config/p1/logstash-learning.conf""
- pipeline.workers: 1
-
--
-pipeline.id: pipeline2
- path.config: ""/config/p2/logstash-groundtruth.conf""
- pipeline.workers: 1
-
--
-pipeline.id: pipeline3
- path.config: ""/config/p3/logstash-fieldtest.conf""
- pipeline.workers: 1
-
-
-2. Did you change the path in path.config?
-- pipeline.id: pipeline1
-  path.config: ""/home/evangelos/logstash-6.2.2/pipelines/logstash-learning.conf""
-  pipeline.workers: 1
-- pipeline.id: pipeline2
-  path.config: ""/home/evangelos/logstash-6.2.2/pipelines/logstash-groundtruth.conf""
-  pipeline.workers: 1
-- pipeline.id: pipeline3
-  path.config: ""/home/evangelos/logstash-6.2.2/pipelines/logstash-fieldtest.conf""
-  pipeline.workers: 1
-
-After setting the above in the pipelines.yml file,
-run the command below to start the pipelines:
-bin/logstash --path.settings config/
-
-Write the full path instead:
-sudo /usr/share/logstash/bin/logstash --path.settings ""/etc/logstash""
-
-3. Please keep a blank line between each pipeline, as below:
-- pipeline.id: pipeline1
-  path.config: ""/config/p1/logstash-learning.conf""
-  pipeline.workers: 1
-
-- pipeline.id: pipeline2
-  path.config: ""/config/p2/logstash-groundtruth.conf""
-  pipeline.workers: 1
-
-- pipeline.id: pipeline3
-  path.config: ""/config/p3/logstash-fieldtest.conf""
-  pipeline.workers: 1
-
-",Logstash
-"Using spring boot 2.1.1.RELEASE one can seemingly format logs as JSON by providing a logback-spring.xml file as follows:
-
-<appender name=""stdout"" class=""ch.qos.logback.core.ConsoleAppender"">
-    <encoder class=""ch.qos.logback.core.encoder.LayoutWrappingEncoder"">
-        <layout class=""ch.qos.logback.contrib.json.classic.JsonLayout"">
-            <timestampFormat>yyyy-MM-dd'T'HH:mm:ss.SSSX</timestampFormat>
-            <timestampFormatTimezoneId>Etc/UTC</timestampFormatTimezoneId>
-            <jsonFormatter class=""ch.qos.logback.contrib.jackson.JacksonJsonFormatter"">
-                <prettyPrint>true</prettyPrint>
-            </jsonFormatter>
-        </layout>
-    </encoder>
-</appender>
-
-<root level=""INFO"">
-    <appender-ref ref=""stdout"" />
-</root>
-
-
-and adding to the pom.xml
-<dependency>
-            <groupId>ch.qos.logback.contrib</groupId>
-            <artifactId>logback-json-classic</artifactId>
-            <version>0.1.5</version>
-        </dependency>
-        <dependency>
-            <groupId>ch.qos.logback.contrib</groupId>
-            <artifactId>logback-jackson</artifactId>
-            <version>0.1.5</version>
-        </dependency>
-
-indeed leading to messages like:
-{
-  ""timestamp"" : ""2018-12-11T18:20:25.641Z"",
-  ""level"" : ""INFO"",
-  ""thread"" : ""main"",
-  ""logger"" : ""com.netflix.config.sources.URLConfigurationSource"",
-  ""message"" : ""To enable URLs as dynamic configuration sources, define System property archaius.configurationSource.additionalUrls or make config.properties available on classpath."",
-  ""context"" : ""default""
-}
-
-Why?
-I'm trialling logz.io, which appears to behave more favourably when logs are JSON formatted: some of the shippers struggle with multiline logs like Java stack traces, while with JSON formatting it can automatically parse fields like level and message, and if there is MDC data it automatically gets that too.
-I had some not so great experiences with a few of the methods of shipping logs to logzio, like their docker image and using rsyslog without using JSON formatted log messages.
-Issues With This Approach
-It works OK for console appending, but Spring Boot provides properties like logging.file=test.log, logging.level.com.example=WARN and logging.pattern.console. I can indeed import the managed configuration from spring-boot-2.1.1.RELEASE.jar!/org/springframework/boot/logging/logback/base.xml, which in turn imports a console-appender.xml and a file-appender.xml.
-An example of the console-appender
-<included>
-    <appender name=""CONSOLE"" class=""ch.qos.logback.core.ConsoleAppender"">
-        <encoder>
-            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
-        </encoder>
-    </appender>
-</included>
-
-An example of the file appender
-<included>
-    <appender name=""FILE""
-        class=""ch.qos.logback.core.rolling.RollingFileAppender"">
-        <encoder>
-            <pattern>${FILE_LOG_PATTERN}</pattern>
-        </encoder>
-        <file>${LOG_FILE}</file>
-        <rollingPolicy class=""ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy"">
-            <fileNamePattern>${LOG_FILE}.%d{yyyy-MM-dd}.%i.gz</fileNamePattern>
-            <maxFileSize>${LOG_FILE_MAX_SIZE:-10MB}</maxFileSize>
-            <maxHistory>${LOG_FILE_MAX_HISTORY:-0}</maxHistory>
-        </rollingPolicy>
-    </appender>
-</included>
-
-These two are exactly what I need to support spring configuration of the properties, but they don't include the encoder/layout I'd need. 
-It appears in my initial tests that I can't simply name my appender the same as those and provide my own layouts. For example:
-<configuration>
-
-    <include resource=""org/springframework/boot/logging/logback/base.xml""/>
-
-    <appender name=""CONSOLE"" class=""ch.qos.logback.core.ConsoleAppender"">
-        <encoder class=""ch.qos.logback.core.encoder.LayoutWrappingEncoder"">
-            <layout class=""ch.qos.logback.contrib.json.classic.JsonLayout"">
-                <timestampFormat>yyyy-MM-dd'T'HH:mm:ss.SSSX</timestampFormat>
-                <timestampFormatTimezoneId>Etc/UTC</timestampFormatTimezoneId>
-                <jsonFormatter class=""ch.qos.logback.contrib.jackson.JacksonJsonFormatter"">
-                    <prettyPrint>true</prettyPrint>
-                </jsonFormatter>
-            </layout>
-        </encoder>
-    </appender>
-
-
-
-    <root level=""INFO"">
-        <appender-ref ref=""CONSOLE"" />
-    </root>
-</configuration>
-
-leads to the message being logged in both JSON and plain text format.
-I can indeed just copy and paste the contents of these 3 files into my custom config rather than import them at all. Then I may override what I want to customise.
-However, as spring evolves and new releases are made which may add features, I'd be forever forcing myself to keep up, copy and paste the new files and make my changes and test them.
-Is there any better way that I can either:
-
-Just make additive changes to the appenders rather than entirely redefine them, e.g. keep the config from spring but provide my own encoder or layout to be used by those appenders.
-Configure spring to JSON log via properties entirely without any config - I doubt this :S
-
-
-Footnote: logz.io do provide a dependency one can import, but I dislike the idea of coupling the logging provider into the code directly. I feel that if the service happens to produce JSON logs to stdout or a file, it's easy for any provider to process those and ship them to some destination.
-","1. I am not using any dependency.
-Simply, doing it via application.yml, that's all.
-This solution solves, multiline log issue, too.
-logging:
-  pattern:
-    console: ""{\""time\"": \""%d\"", \""level\"": \""%p\"", \""correlation-id\"": \""%X{X-Correlation-Id}\"", \""source\"": \""%logger{63}:%L\"", \""message\"": \""%replace(%m%wEx{6}){'[\r\n]+', '\\n'}%nopex\""}%n""
-
-
-2. I use something like the following, which has always worked fine.
-Spring Boot's recommendation is to name the file logback-spring.xml and place it under src/main/resources/; this enables us to use Spring profiles in logback.
-So in the file below you will see that for the LOCAL profile you can log in the standard fashion, but for deployments on a server or in a container you can use a different logging strategy.
-<?xml version=""1.0"" encoding=""UTF-8""?>
-<configuration>
-    <appender name=""stdout"" class=""ch.qos.logback.core.ConsoleAppender"">
-        <encoder>
-            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} %5p [YourApp:%thread:%X{X-B3-TraceId}:%X{X-B3-SpanId}] %logger{40} - %msg%n
-            </pattern>
-        </encoder>
-    </appender>
-    <appender name=""jsonstdout"" class=""ch.qos.logback.core.ConsoleAppender"">
-        <encoder class=""net.logstash.logback.encoder.LogstashEncoder"">
-            <providers>
-                <timestamp>
-                    <timeZone>EST</timeZone>
-                </timestamp>
-                <pattern>
-                    <pattern>
-                        {
-                        ""level"": ""%level"",
-                        ""service"": ""YourApp"",
-                        ""traceId"": ""%X{X-B3-TraceId:-}"",
-                        ""spanId"": ""%X{X-B3-SpanId:-}"",
-                        ""thread"": ""%thread"",
-                        ""class"": ""%logger{40}"",
-                        ""message"": ""%message""
-                        }
-                    </pattern>
-                </pattern>
-                <stackTrace>
-                    <throwableConverter class=""net.logstash.logback.stacktrace.ShortenedThrowableConverter"">
-                        <maxDepthPerThrowable>30</maxDepthPerThrowable>
-                        <maxLength>2048</maxLength>
-                        <shortenedClassNameLength>20</shortenedClassNameLength>
-                        <rootCauseFirst>true</rootCauseFirst>
-                    </throwableConverter>
-                </stackTrace>
-            </providers>
-        </encoder>
-    </appender>
-    <root level=""info"">
-    <springProfile name=""LOCAL"">
-      <appender-ref ref=""stdout"" />
-    </springProfile>
-    <springProfile name=""!LOCAL"">
-      <appender-ref ref=""jsonstdout"" />
-    </springProfile>
-  </root>
-
-</configuration>
-
-
-3. Sounds like you need to copy-paste, with modifications, 3 out of 4 files from here https://github.com/spring-projects/spring-boot/tree/v2.1.1.RELEASE/spring-boot-project/spring-boot/src/main/resources/org/springframework/boot/logging/logback into your configuration.
-The good news is that you don't need to copy-paste https://github.com/spring-projects/spring-boot/blob/v2.1.1.RELEASE/spring-boot-project/spring-boot/src/main/resources/org/springframework/boot/logging/logback/defaults.xml
-That can be included like so: <include resource=""org/springframework/boot/logging/logback/defaults.xml""/>
-
-That will get you some of Spring's default config without copying and pasting.
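-As a hedged sketch putting the pieces together (untested, using the JsonLayout dependencies from the question), a logback-spring.xml along these lines reuses Spring's defaults.xml while defining the console appender only once, which avoids the double plain-text/JSON output seen when base.xml is also included:
-<configuration>
-    <include resource=""org/springframework/boot/logging/logback/defaults.xml""/>
-    <appender name=""CONSOLE"" class=""ch.qos.logback.core.ConsoleAppender"">
-        <encoder class=""ch.qos.logback.core.encoder.LayoutWrappingEncoder"">
-            <layout class=""ch.qos.logback.contrib.json.classic.JsonLayout"">
-                <jsonFormatter class=""ch.qos.logback.contrib.jackson.JacksonJsonFormatter""/>
-            </layout>
-        </encoder>
-    </appender>
-    <root level=""INFO"">
-        <appender-ref ref=""CONSOLE""/>
-    </root>
-</configuration>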
-",Logz.io
-"App logs are stored on logz.io
-I'm trying to aggregate the error logs from my app: for each version, I would like to aggregate the error messages.
-I tried using a sub aggregate query:
-curl -X POST https://api.logz.io/v1/search \
-  -H 'Content-Type: application/json' \
-  -H 'X-API-TOKEN: xxxxxxxxxx' \
-  -d '{
-  ""query"": {
-    ""bool"": {
-      ""must"": [
-        {
-          ""range"": {""@timestamp"": { ""gte"": ""now-2w"", ""lte"": ""now""}}
-        }
-      ],
-      ""filter"": [{""terms"": {""log_level"": [""ERROR"",""CRITICAL"",""FATAL""]}}]
-    }
-  },
-  ""size"": 0,
-  ""aggs"": {
-    ""app_version_agg"": {
-      ""terms"": {
-        ""field"": ""app_version"",
-        ""size"": 1000
-      },
-      ""aggs"": {
-          ""error_message_agg"": {
-              ""terms"": {
-              ""field"": ""error_message"",
-              ""size"": 1000
-              }
-          }
-      }
-    }
-  }
-}'
-
-but I get this error:
-{""errorCode"":""LogzElasticsearchAPI/INVALID_QUERY"",""message"":""This search can't be executed: [Bad Request]. Please contact customer support for more details"",""requestId"":""xxxx"",""parameters"":{""reason"":""Bad Request""}}
-
-I will note that when I use multiple aggregations on the same level, I do get results (but the results are separate aggregations, not an aggregation over a combination of fields):
-curl -X POST https://api.logz.io/v1/search \
-  -H 'Content-Type: application/json' \
-  -H 'X-API-TOKEN: xxxxxxxxxx' \
-  -d '{
-  ""query"": {
-    ""bool"": {
-      ""must"": [
-        {
-          ""range"": {""@timestamp"": { ""gte"": ""now-2w"", ""lte"": ""now""}}
-        }
-      ],
-      ""filter"": [{""terms"": {""log_level"": [""ERROR"",""CRITICAL"",""FATAL""]}}]
-    }
-  },
-  ""size"": 0,
-  ""aggs"": {
-    ""app_version_agg"": {
-      ""terms"": {
-        ""field"": ""app_version"",
-        ""size"": 1000
-      }
-    },
-    ""error_message_agg"": {
-      ""terms"": {
-        ""field"": ""error_message"",
-        ""size"": 1000
-      }
-    }
-  }
-}'
-
-","1. According to the Logz.io documentation for the search endpoint, there's a limitation for aggregations:
-
-Can't nest 2 or more bucket aggregations of these types: date_histogram, geohash_grid, histogram, ip_ranges, significant_terms, terms
-
-So that probably explains the issue you're encountering.
-",Logz.io
-"I've been asked to add to my current Serilog sinks also Logz.io(I never used it), I normally use Seq from DataLust.
-Here's my configuration part
-  {
-        ""Name"": ""LogzIo"",
-        ""Args"": {
-          ""authToken"": ""sometoken"",
-          ""dataCenterSubDomain"": ""listener"",
-          ""dataCenter"": {
-            ""subDomain"": ""listener"",
-            ""useHttps"": true
-          },
-          ""logEventsInBatchLimit"": 5000,
-          ""period"": ""00:00:02"",
-          ""restrictedToMinimumLevel"": ""Debug"",
-          ""lowercaseLevel"": false,
-          ""environment"": """",
-          ""serviceName"": """"
-        }
-      },
-
-But I've not seen these logs being pushed. What I do see is that in my MVC application I've got a file called Buffer-20230412.txt which contains the log information.
-How can I set it up so that logs are sent almost immediately?
-Thanks
-","1. In my experience, one error in a request will cause the entire batch to fail, and then subsequent batches as the buffer is continually retried.
-To prove this, you should see a build up of Buffer-... files, as they fill up.
-To triage, you'll need to enable the SelfLog and then check the file it produces.
-If you see a line like below, then there's a malformed line which will need to be cleared before the rest will go through.
-2023-06-26T00:14:17.9440519Z Received failed HTTP shipping result BadRequest: {""malformedLines"":1,""oversizedLines"":0,""successfulLines"":999}
-
-If you don't care too much about missing logs for a short period, just delete all the buffer files and then they'll flow through again.
-Ideally of course, we need to fix the root issue, if you have a low enough volume of logs, you should be able to see in the buffer file which is the offending line and fix it in source.
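-For reference, enabling the SelfLog mentioned above is a one-liner at application startup; a minimal sketch (the file path is just an example):
-// route Serilog's own diagnostics to a file so failed shipping attempts become visible
-Serilog.Debugging.SelfLog.Enable(msg =>
-    System.IO.File.AppendAllText(@""C:\temp\serilog-selflog.txt"", msg));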
-",Logz.io
-"I'm just trying to ship some error logs from my ASP.NET MVC 5 app to Logz.io
-I'm using NLog to ship my logs.
-I've installed NLog and NLog.Web packages
-I have the following nlog.config file :
-<?xml version=""1.0"" encoding=""utf-8"" ?>
-<nlog xmlns=""http://www.nlog-project.org/schemas/NLog.xsd""
-      xmlns:xsi=""http://www.w3.org/2001/XMLSchema-instance""
-            autoReload=""true""
-            throwExceptions=""true""
-            internalLogLevel=""ERROR""
-            internalLogFile=""C:\Temp\nlog-internal.log"">
-
-  <extensions>
-    <add assembly=""Logzio.DotNet.NLog""/>
-  </extensions>
-
-  <targets async=""true"">
-    <target name=""file"" type=""File""
-            fileName=""<pathToFileName>""
-            archiveFileName=""<pathToArchiveFileName>""
-            keepFileOpen=""false""
-            layout=""<long layout patten>""/>
-
-    <target name=""logzio""
-                    type=""Logzio""
-                    token=""LRK......""
-                    logzioType=""nlog""
-                    listenerUrl=""https://listener.logz.io:8071""
-                    bufferSize=""1""
-                    bufferTimeout=""00:00:05""
-                    retriesMaxAttempts=""3""
-                    retriesInterval=""00:00:02""
-                    debug=""false"" />
-  </targets>
-  <rules>
-    <logger name=""*"" minlevel=""Debug"" writeTo=""logzio"" />
-  </rules>
-</nlog>
-
-Then, each of my C# controller have this line :
-private static Logger logger = LogManager.GetCurrentClassLogger();
-
-and then I try to ship my logs using something like :
-logger.Fatal(""Something bad happens"");
-
-When I use writeTo=""file"" in the nlog.config file, I can find a log file on my local disk containing ""Something bad happens"", so everything is fine.
-However, nothing appears in my Logz.io web interface when I use writeTo=""logzio""; no logs are shipped there.
-What did I miss?
-","1. Answering my own question after I found how to solve this.
-Actually, my whole project use HTTPS.
-In internal Nlog logs, I had this error
-
-Error : System.Net.WebException : The request was aborted: Could not create SSL/TLS secure channel
-
-I've just added this line of code at the very beginning of ApplicationStart in Global.asax.cs
-ServicePointManager.SecurityProtocol |= SecurityProtocolType.Tls12;
-
-After testing the whole project for some days, it seems it doesn't affect the other parts of the project.
-However, just be careful, as it is a global setting.
-
-2. I had the same issue, and it turned out that in my published app the logzio dlls were missing. I added them and it resolved the issue.
-Check if you're missing these files in your bin folder:
-Logzio.DotNet.NLog.dll
-Logzio.DotNet.Core.dll
-",Logz.io
-"I want to use Logz.io in .Net as a singleton service. The current documentation is not covering this for now. 
-This is some code I made until now while trying to understand how can I achive my goal... 
-I added ILogzioService as a dependency to one of my REST API endpoints and tried to push some data, but I can't see any log in Kibana and I don't get any error neither...
-    public class LogzioService : ILogzioService
-    {
-        private readonly string APPNAME = ""MyApp0"";
-
-        public ILogger logger;
-        private ILoggerRepository loggerRepository;
-        private Hierarchy hierarchy;
-        //private LogzioAppender logzioAppender;
-
-        public LogzioService(IConfiguration config)
-        {
-            hierarchy = (Hierarchy)LogManager.GetRepository();
-
-            LogzioAppender logzioAppender = new LogzioAppender();
-            logzioAppender.AddToken(config[""logzio_key""]);
-            logzioAppender.AddType(""log4net"");
-            logzioAppender.AddListenerUrl(""listener-nl.logz.io:8071""); // Azure - West Europe
-            logzioAppender.AddBufferSize(100);
-            logzioAppender.AddBufferTimeout(TimeSpan.FromSeconds(5));
-            logzioAppender.AddRetriesMaxAttempts(3);
-            logzioAppender.AddRetriesInterval(TimeSpan.FromSeconds(2));
-            logzioAppender.AddDebug(false);
-            logzioAppender.AddGzip(true);
-            logzioAppender.JsonKeysCamelCase(false);
-            // <-- Uncomment and edit this line to enable proxy routing: --> 
-            // logzioAppender.AddProxyAddress(""http://your.proxy.com:port"");
-            // <-- Uncomment this to prase messages as Json -->  
-            logzioAppender.ParseJsonMessage(true);
-            hierarchy.Root.AddAppender(logzioAppender);
-            hierarchy.Configured = true;
-
-            LogzioAppenderCustomField sourceField = new LogzioAppenderCustomField();
-            sourceField.Key = ""source"";
-            sourceField.Value = APPNAME;
-
-            logzioAppender.AddCustomField(sourceField);
-
-            logger = hierarchy.GetLogger(APPNAME);
-        }
-
-        public void LogCritical(string message, Exception ex)
-        {
-            LoggingEventData loggingEventData = new LoggingEventData();
-            loggingEventData.Message = message;
-            loggingEventData.Level = Level.Critical;
-            loggingEventData.Domain = ""Domain"";
-            loggingEventData.ThreadName = ""ThreadName"";
-            loggingEventData.ExceptionString = ex.ToString() + ""\r\n"" + ex.StackTrace;
-            loggingEventData.Identity = ""Identity"";
-            loggingEventData.TimeStampUtc = DateTime.UtcNow;
-            loggingEventData.LoggerName = ""LoggerName"";
-            loggingEventData.UserName = ""UserName"";
-            loggingEventData.LocationInfo = new LocationInfo(GetType()); // not sure how to use this
-            //loggingEventData.Properties = new log4net.Util.PropertiesDictionary(); // not sure how to use this
-
-            // method 1
-            LoggingEvent loggingEvent = new LoggingEvent(GetType(), loggerRepository, loggingEventData);
-            // method 2
-            //LoggingEvent loggingEvent = new LoggingEvent(GetType(), loggerRepository, loggingEventData.LoggerName, loggingEventData.Level, loggingEventData.Message, ex);
-
-            hierarchy.Log(loggingEvent); // not working
-            logger.Log(loggingEvent); // not working
-        }
-    }
-
-","1. I looked into your code and managed to make it work.
-Basically I added logzioAppender.ActiveOptions(); and added https:// to the logzioAppender.AddListenerUrl.
-I highly suggest that you set logzioAppender.AddDebug(true); it will show you all the debug logs of the Logz.io appender. Also, it takes a couple of seconds for the logs to be shipped to Logz.io.
-Hope it will work for you :)
-    public class LogzioService : ILogzioService
-    {
-        private readonly string APPNAME = ""MyApp0"";
-
-        public ILogger logger;
-        private ILoggerRepository loggerRepository;
-        private Hierarchy hierarchy;
-
-        public LogzioService(IConfiguration config)
-        {
-            hierarchy = (Hierarchy)LogManager.GetRepository();
-
-            LogzioAppender logzioAppender = new LogzioAppender();
-            logzioAppender.AddToken(config[""logzio_key""]);
-            logzioAppender.AddType(""log4net"");
-            logzioAppender.AddListenerUrl(""https://listener-nl.logz.io:8071""); // Azure - West Europe
-            logzioAppender.AddBufferSize(100);
-            logzioAppender.AddBufferTimeout(TimeSpan.FromSeconds(5));
-            logzioAppender.AddRetriesMaxAttempts(3);
-            logzioAppender.AddRetriesInterval(TimeSpan.FromSeconds(2));
-            logzioAppender.AddDebug(false);
-            logzioAppender.AddGzip(true);
-            logzioAppender.JsonKeysCamelCase(false);
-            // <-- Uncomment and edit this line to enable proxy routing: --> 
-            // logzioAppender.AddProxyAddress(""http://your.proxy.com:port"");
-            // <-- Uncomment this to prase messages as Json -->  
-            logzioAppender.ParseJsonMessage(true);
-
-            LogzioAppenderCustomField sourceField = new LogzioAppenderCustomField();
-            sourceField.Key = ""source"";
-            sourceField.Value = APPNAME;
-            logzioAppender.AddCustomField(sourceField);
-            
-            logzioAppender.ActiveOptions();
-
-            hierarchy.Root.AddAppender(logzioAppender);
-            hierarchy.Root.Level = Level.All;
-            hierarchy.Configured = true;
-            logger = hierarchy.GetLogger(APPNAME);
-        }
-
-        public void LogCritical(string message, Exception ex)
-        {
-            LoggingEventData loggingEventData = new LoggingEventData();
-            loggingEventData.Message = message;
-            loggingEventData.Level = Level.Critical;
-            loggingEventData.Domain = ""Domain"";
-            loggingEventData.ThreadName = ""ThreadName"";
-            loggingEventData.ExceptionString = ex.ToString() + ""\r\n"" + ex.StackTrace;
-            loggingEventData.Identity = ""Identity"";
-            loggingEventData.TimeStampUtc = DateTime.UtcNow;
-            loggingEventData.LoggerName = ""LoggerName"";
-            loggingEventData.UserName = ""UserName"";
-            loggingEventData.LocationInfo = new LocationInfo(GetType());
-
-
-            LoggingEvent loggingEvent = new LoggingEvent(GetType(), loggerRepository, loggingEventData);
-
-            hierarchy.Log(loggingEvent);
-            logger.Log(loggingEvent);
-        }
-    }
-
-",Logz.io
-"I have enabled log tracing through micrometer and zipkin. But i am not able to get span id and trace id in my requests.
-Dependencies in pom.xml are as follows:
-`    <dependency>
-            <groupId>io.micrometer</groupId>
-            <artifactId>micrometer-tracing-bridge-brave</artifactId>
-        </dependency>
-        <dependency>
-            <groupId>io.zipkin.reporter2</groupId>
-            <artifactId>zipkin-reporter-brave</artifactId>
-        </dependency>`
-
-zipkin configuration in application.properties is as follows:
-
-# zipkin configurations
-management.tracing.enabled=true
-management.zipkin.tracing.endpoint=http://localhost:9411/zipkin/api/v2/spans
-management.tracing.sampling.probability=1.0
-
-I added the required dependency in pom.xml and properties in application.properties. Is there any other configuration or handling required to achieve the tracing of requests?
-","1. I guess you mean ""log correlation"", if not, I'm not sure what ""log tracing"" is.
-Try to use SLF4J and log something out in one of your Spring Boot controllers.
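-For example, a throwaway controller like this (all names are made up) is enough to check whether traceId/spanId show up in the log output:
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-import org.springframework.web.bind.annotation.GetMapping;
-import org.springframework.web.bind.annotation.RestController;
-
-@RestController
-class TraceCheckController {
-    private static final Logger log = LoggerFactory.getLogger(TraceCheckController.class);
-
-    @GetMapping(""/trace-check"")
-    String traceCheck() {
-        // with tracing configured, the log pattern should include traceId and spanId here
-        log.info(""trace check endpoint hit"");
-        return ""ok"";
-    }
-}
-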
-Alternatively, you can use this property to test:
-logging.level.org.springframework.web.servlet.DispatcherServlet=DEBUG
-
-If tracing information is not in these logs, you need to upgrade Boot to at least 3.2; 3.3 is already out, so I would use that instead.
-If you don't want to upgrade, you need to set the logging.pattern.level property (see docs):
-logging.pattern.level=""%5p [${spring.application.name:},%X{traceId:-},%X{spanId:-}]""
-
-",Micrometer
-"I am inexperienced with Sleuth, so maybe my question is actually pretty self-evident.
-We are updating Spring Boot from 2 to 3, and Sleuth does not work with Spring Boot 3. The problem is that we use the class WebFluxSleuthOperators several times, and there seems to be no evident candidate for an easy replacement in the Micrometer library.
-Here's my code:
-import org.springframework.beans.factory.annotation.Autowired
-import org.springframework.cloud.sleuth.CurrentTraceContext
-import org.springframework.cloud.sleuth.Tracer
-import org.springframework.cloud.sleuth.instrument.web.WebFluxSleuthOperators
-import org.springframework.core.Ordered
-import org.springframework.stereotype.Component
-import org.springframework.web.server.ServerWebExchange
-import org.springframework.web.server.WebFilter
-import org.springframework.web.server.WebFilterChain
-import reactor.core.publisher.Mono
-
-@Component
-open class SleuthBaggageToResponseHeadersFilter(
-    @Autowired private val tracer: Tracer,
-    @Autowired private val currentTraceContext: CurrentTraceContext,
-) : WebFilter, Ordered {
-
-    override fun filter(exchange: ServerWebExchange, chain: WebFilterChain): Mono<Void> {
-        exchange.response.beforeCommit {
-            Mono.deferContextual { Mono.just(it) }.doOnNext { addToResponseHeaders(exchange) }.then()
-        }
-
-        return chain.filter(exchange)
-    }
-
-    private fun addToResponseHeaders(exchange: ServerWebExchange) =
-        WebFluxSleuthOperators.withSpanInScope(tracer, currentTraceContext, exchange) {
-            val requestId = tracer.getBaggage(""REQUEST_ID"")?.get() ?: ""NOT_FOUND""
-            exchange.response.headers.remove(""REQUEST_ID"")
-            exchange.response.headers.add(""REQUEST_ID"", requestId)
-        }
-
-    override fun getOrder() = SleuthBaggageDefaultValuesFilter.ORDER + 1
-}
-
-What can we use instead for this purpose?
-There is a migration guide for this, but I found it to be rather unhelpful for my situation
-https://github.com/micrometer-metrics/tracing/wiki/Spring-Cloud-Sleuth-3.1-Migration-Guide
-","1. Finally found the answer. It is
-observationRegistry.currentObservation?.scoped {}
-Using this, I can access baggage with no problem.
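-For example, the addToResponseHeaders method from the question can be rewritten roughly like this (a sketch only, assuming an injected io.micrometer.tracing.Tracer and an ObservationRegistry):
-private fun addToResponseHeaders(exchange: ServerWebExchange) =
-    observationRegistry.currentObservation?.scoped {
-        val requestId = tracer.getBaggage(""REQUEST_ID"")?.get() ?: ""NOT_FOUND""
-        exchange.response.headers.remove(""REQUEST_ID"")
-        exchange.response.headers.add(""REQUEST_ID"", requestId)
-    }
-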
-To make it work, I also needed to add this in the main application:
-Hooks.enableAutomaticContextPropagation()
-I also needed to add the observationRegistry to the WebClient
-return WebClient.builder().exchangeStrategies(exchangeStrategies).baseUrl(""/"")
-    .filter(someFilterFunction).observationRegistry(observationRegistry)
-
-Finally, I replaced the authentication WebFilter with an Authentication Converter:
-open class SetSecurityDetailsFromTokenConverter(
-    @Autowired val securityConfiguration: SecurityConfiguration,
-    @Autowired val authenticationFactory: UserClaimAuthenticationFactory,
-    @Autowired val authenticationManager: ReactiveAuthenticationManager,
-) : ServerAuthenticationConverter {
-
-    override fun convert(exchange: ServerWebExchange): Mono<Authentication> {
-        if (securityConfiguration.isExcluded(exchange.request.path.value())) {
-            return Mono.empty()
-        }
-
-        ...
-
-        return authenticationManager.authenticate(userClaimAuthentication).flatMap { Mono.just(userClaimAuthentication) }
-    }
-}
-
-",Micrometer
-"I have a Nagios directory which contains some servers config files :
-define host {
-   host_name            server1.srv
-   hostgroups           linux-servers+holmes
-   check_interval           5
-}
-
-define host {
-   host_name            server2.srv
-   hostgroups           linux-servers+holmes
-   check_interval           5
-}   
-
-I would like to reformat that data into CSV so I would get :
-host_name,hostgroups,check_interval
-server1.srv,linux-servers+holmes,5
-server2.srv,linux-servers+holmes,5
-
-I am happy to do this with either bash or PowerShell, but I am not enough of a scripting guru to know how to do this... If anyone has a suggestion, that would be greatly appreciated :) !
-","1. This awk should work :
-awk 'BEGIN{print ""host_name,hostgroups,check_interval""} /host_name/{v1=$2} /hostgroups/{v2=$2} /check_interval/{v3=$2} /}/{print v1"",""v2"",""v3; v1=v2=v3=""""}' file
-
-
-2. This will work using any POSIX awk no matter how many lines you have inside each define host { ... } block, even if the blocks have different numbers of lines each, no matter what the names are of the fields in those blocks, no matter what order those names appear in, and regardless of whether the associated values contain white space, double quotes or commas:
-$ cat tst.awk
-BEGIN { OFS="",""; qt=""\"""" }
-/^define[[:space:]]+host[[:space:]]+\{/ {
-    rowNr = ++numRows
-    next
-}
-
-NF && !/^}/ {
-    tag = $1
-    val = $0
-    sub(/^[[:space:]]*[^[:space:]]+[[:space:]]+/,"""",val)
-
-    if ( !seen[tag]++ ) {
-        cols2tags[++numCols] = tag
-        tags2cols[tag] = numCols
-    }
-    colNr = tags2cols[tag]
-
-    vals[rowNr,colNr] = val
-}
-
-END {
-    for ( colNr=1; colNr<=numCols; colNr++ ) {
-        tag = csvprep(cols2tags[colNr])
-        printf ""%s%s"", tag, (colNr<numCols ? OFS : ORS)
-    }
-    for ( rowNr=1; rowNr<=numRows; rowNr++ ) {
-        for ( colNr=1; colNr<=numCols; colNr++ ) {
-            val = csvprep(vals[rowNr,colNr])
-            printf ""%s%s"", val, (colNr<numCols ? OFS : ORS)
-        }
-    }
-}
-
-function csvprep(fld) {
-    if ( (fld ~ OFS) || (fld ~ qt) ) {
-        gsub(""^""qt ""|"" qt""$"","""",fld)
-        gsub(qt,qt qt,fld)
-        fld = qt fld qt
-    }
-    return fld
-}
-
-
-$ awk -f tst.awk file
-host_name,hostgroups,check_interval
-server1.srv,linux-servers+holmes,5
-server2.srv,linux-servers+holmes,5
-
-If we provide more interesting sample input:
-$ cat file
-define host {
-   check_interval           5
-   host_name            server1.srv
-   foo              this would ""work"" easily
-   hostgroups           linux-servers+holmes
-}
-
-define host {
-   bar              this would, too
-   host_name            server2.srv
-   hostgroups           linux-servers+holmes
-   etc                and this
-   check_interval           5
-}
-
-we can see that the script still works to produce valid CSV including all values from the input:
-$ awk -f tst.awk file
-check_interval,host_name,foo,hostgroups,bar,etc
-5,server1.srv,""this would """"work"""" easily"",linux-servers+holmes,,
-5,server2.srv,,linux-servers+holmes,""this would, too"",and this
-
-",Nagios
-"Is it possible to enable notifications for services in NAGIOS but to disable hosts notifications? I have a lot of local printers which don't have a impact when they are down but I want to have a service notification e.g. for ""no paper"" or ""low toner cartridge"".
-Any experiences? Thank you
-","1. There are a couple of options, you can create a new host template to use for printers that inherits from your generic-host template, but turn off the setting to enable host notifications with:
-notifications_enabled           0 
-
-E.g.
-define host{
-    name                    generic-printer
-    use                     generic-host
-    notifications_enabled   0
-    register                0
-}
-
-Then each printer's host definition could include the line
-use                     generic-printer
-
-in its definition. 
-Alternately, you could create a brand new printer template similar to the one for generic-host with notifications_enabled disabled and also not including any entry for check_command (which is where the command used to determine if a host is OK is chosen).
-
-2. Default checks like ping or ssh can be disabled by deleting the service definition in the corresponding machine's .cfg file on the monitoring host. NRPE or NRDP services can be used for checks like low toner cartridge or no paper.
-Steps to remove the ping check for printer1:
-1- Open the printer1.cfg file in the nagios server to edit. (Usually under .../nagios/configurations/objects/)
-2- Find the service definition in printer1.cfg where service_description value is PING and delete this definition.
-3- Restart NagiOS. After this, the ping check should not be visible from the nagios web interface too.
-",Nagios
-"I am trying to to get a connection from an Java application to a server. I am able to get data from the server by typing in the shell:
-cat query.txt | nc server port
-
-Now, I am trying to do the same in Java. I have already tried some third-party APIs like jetcat. The server should also be available through a Unix socket, but currently I am not able to get a connection.
-The standard Java socket also hasn't worked. When I sent my query, the server never responded.
-public class Server {
-    private Socket socket;
-    
-    public Server(String url, int port) throws UnknownHostException, IOException {
-        socket = new Socket(url, port);
-    }
-    
-    void sendMessage(String nachricht) throws IOException {     
-        PrintWriter printWriter = new PrintWriter( new OutputStreamWriter(socket.getOutputStream()));
-        printWriter.print(nachricht);
-        printWriter.flush();
-    }
-    
-    public void read() throws IOException {
-        Thread t = new Thread(new Runnable() {
-             public void run() {
-                    try {         
-                        BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(socket.getInputStream()));
-                        char[] buffer = new char[200];
-                        int anzahlZeichen = bufferedReader.read(buffer, 0, 200); // blocks until a message is received
-                        System.out.println(new String(buffer, 0, anzahlZeichen));
-                    } catch(IOException e) {
-                        e.printStackTrace();                 
-                  }
-             }
-        });  
-        t.start();
-    }
-}
-
-The system is Check_MK based on Nagios and using LQL (Livestatus Query Language)
-Any ideas how to use netcat in Java - or any alternatives to netcat?
-","1. You can run programs, including netcat, from Java.  See Runtime.exec().
-netcat is simply a program that opens a socket, writes to it and reads from it.  You can do that (open a socket, write to it and read from it) directly in Java without dealing with pipes and subprocesses.  Add to your question the Java program you wrote using java.io.Socket.  My first guess is you didn't set the TCP_NODELAY flag and/or didn't flush the buffered stream and so the server didn't receive your request.
-
-After creating your socket, try setting TCP_NODELAY:
-    socket.setTcpNoDelay(true);
-Use Wireshark, or similar, to capture the network traffic and see if your message is actually being sent to the server.
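-As a self-contained sketch of the plain-socket approach (the host, port and query are placeholders, not taken from the question): after sending the query it can also help to close the write side with shutdownOutput(), since some line-based services wait for end-of-input before answering.
-import java.io.BufferedReader;
-import java.io.IOException;
-import java.io.InputStreamReader;
-import java.io.OutputStream;
-import java.net.Socket;
-import java.nio.charset.StandardCharsets;
-
-public class LqlClient {
-    public static void main(String[] args) throws IOException {
-        try (Socket socket = new Socket(""check-mk-host"", 6557)) {   // placeholder host/port
-            socket.setTcpNoDelay(true);
-            OutputStream out = socket.getOutputStream();
-            out.write(""GET hosts\n\n"".getBytes(StandardCharsets.US_ASCII)); // placeholder query
-            out.flush();
-            socket.shutdownOutput(); // tell the server we are done sending
-
-            BufferedReader in = new BufferedReader(
-                    new InputStreamReader(socket.getInputStream(), StandardCharsets.US_ASCII));
-            String line;
-            while ((line = in.readLine()) != null) {
-                System.out.println(line);
-            }
-        }
-    }
-}
-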
-
-2. Solved the problem by using http://junixsocket.googlecode.com/ and by directly accessing the UNIX socket on the Server.
-",Nagios
-"I am facing a peculiar issue in PowerShell 2.0. I am not an expert in PS, but occasionally write/edit few scripts to work with Nagios monitoring tool. Requesting help form Nagios experts. Your help will be appreciated.
-The script ExServiceAlert10.ps1 is embedded below:
-#First, find out if Exchange Management Shell is loaded:
-$snapins = Get-PSSnapin | select name
-$snapincount = 0
-$found = $false
-do
-{
-    $founDName = $snapins[$snapincount].name
-    if ($founDName -eq ""Microsoft.Exchange.Management.PowerShell.E2010"")
-    #Exchange Shell already loaded
-    {
-        $found = $True
-        break
-    }
-    $snapincount++
-}
-while ($snapincount -lt $snapins.Count)
-if ($found -ne $True)
-{
-    Add-PSSnapin Microsoft.Exchange.Management.PowerShell.E2010
-}
-# Create variables
-$status = 0
-$desc = """"
-# Get Service Health
-Test-ServiceHealth | ForEach-Object {
-    $main = ""`n"" + ""Role: "" + $_.Role + ""`n"" + ""Status: "" + $_.RequiredServicesRunning + ""`n""
-    if ($_.RequiredServicesRunning -eq ""True""){
-        $array = $_.ServicesRunning
-        $runningsvcs = ""Services running:""
-        foreach ($svc in $array){
-            $runningsvcs = $runningsvcs + "" "" + $svc
-        }
-        $desc += $main + $runningsvcs + ""`n""
-    }else{
-        $status = 1
-        $array = $_.ServicesNotRunning
-        $notrunning = ""Services Not running""
-        foreach ($svc in $array){
-            $notrunning = $notrunning + "" "" + $svc
-        }
-        $desc += $main + $notrunning
-    }
-}
-if ($status -eq 1){
-    echo ""Critical - Exchange Services Alert $desc""
-    exit 2
-}else{
-    echo ""OK - Exchange Services Alert $desc""
-    exit 0
-}
-The script works fine and shows no errors if I execute it directly in PowerShell like this:
-PS C:\Windows\system32> D:\ExServiceAlert10.ps1
-
-But it shows an error when I execute it via cmd or from the Nagios NSClient++ like this:
-Normal command Prompt execution: (Script placed at D:)
-    D:>echo .\ExServiceAlert10.ps1 ; exit $LastExitCode | powershell.exe -command -
-From nagios NSclient++ execution via check_nrpe: (Script placed at NSclient Script directory)
-cmd /c echo scripts\ExServiceAlert10.ps1; exit $LastExitCode | powershell.exe -command –
-
-The error I am getting is this:
-Missing expression after unary opearator ‘-’.
-
-At line:1 char:2
-+ – <<<<
-+ CategoryInfo : ParseError: (-:String) [], ParentContainsErrorRecordException
-+FullyQualifiedErrorId : MissingExpressionAfterOperator
-
-I am executing this script in PowerShell 2.0 and tried various debugging methods to solve this for the past week with no success.
-Here are some entries from my NSClient++ conf. file NSC.ini for other PS scripts which are working fine without any issue.
-exch_mail_flow10=cmd /c echo scripts\ExchMailFlow10.ps1; exit $LastExitCode | powershell.exe -command -
-exch_mailboxhealth10=cmd /c echo scripts\ExMailboxhealth10.ps1; exit $LastExitCode | powershell.exe -command –
-
-Even the reported errant script (ExServiceAlert10.ps1) works fine on my test system with that extra dash at the command prompt, but it does not work at all on any of the Prod. systems. All the PS versions are 2.0.
-I think either I have to enable/disable some PS environment settings on those errant Prod. systems, or I have to escape something inside the script (I suspect the newline characters - `n, backtick n). I don't understand why it is reporting Line:1 and char:2, which is nothing but a comment line in the script.
-I have seen this particular error reported by a few others, and their issue was resolved after upgrading their PS to Ver 2.0. Mine is already 2.0 and I don't know what to do next.
-NSC.ini file:
-;Nagios agent for BR 1.8 dated Aug 30, 2012. The MSI is updated to 0.3.9 version of nsclient++ and is re-packaged for silent installation. Install.vbs is removed.
-[modules]
-NRPEListener.dll
-NSClientListener.dll
-NSCAAgent.dll
-CheckWMI.dll
-FileLogger.dll
-CheckSystem.dll
-CheckDisk.dll
-CheckEventLog.dll
-CheckHelpers.dll
-CheckExternalScripts.dll
-;# NSCLIENT++ MODULES
-;# A list with DLLs to load at startup.
-;  You will need to enable some of these for NSClient++ to work.
-; ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
-; *                                                               *
-; * N O T I C E ! ! ! - Y O U   H A V E   T O   E D I T   T H I S *
-; *                                                               *
-; ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! !
-;FileLogger.dll
-;CheckSystem.dll
-;CheckDisk.dll
-;NSClientListener.dll
-;NRPEListener.dll
-;SysTray.dll
-;CheckEventLog.dll
-;CheckHelpers.dll
-;CheckWMI.dll
-;CheckNSCP.dll
-;
-; Script to check external scripts and/or internal aliases.
-;CheckExternalScripts.dll
-;
-; NSCA Agent if you enable this NSClient++ will talk to NSCA hosts repeatedly (so dont enable unless you want to use NSCA)
-;NSCAAgent.dll
-;
-; LUA script module used to write your own ""check deamon"".
-;LUAScript.dll
-;
-; RemoteConfiguration IS AN EXTREM EARLY IDEA SO DONT USE FOR PRODUCTION ENVIROMNEMTS!
-;RemoteConfiguration.dll
-; Check other hosts through NRPE extreme beta and probably a bit dangerous! :)
-;NRPEClient.dll
-; Extreamly early beta of a task-schedule checker
-;CheckTaskSched.dll
-[crash]
-; Archive crash dump files if a crash is detected
-;archive=1
-; Submit crash reports to a crash report server (this overrrides archive)
-;submit=0
-; Restart service if a crash is detected
-;restart=1
-[Settings]
-;# OBFUSCATED PASSWORD
-;  This is the same as the password option but here you can store the password in an obfuscated manner.
-;  *NOTICE* obfuscation is *NOT* the same as encryption, someone with access to this file can still figure out the
-;  password. Its just a bit harder to do it at first glance.
-;obfuscated_password=Jw0KAUUdXlAAUwASDAAB
-;
-;# PASSWORD
-;  This is the password (-s) that is required to access NSClient remotely. If you leave this blank everyone will be able to access the daemon remotly.
-;password=secret-password
-;
-;# ALLOWED HOST ADDRESSES
-;  This is a comma-delimited list of IP address of hosts that are allowed to talk to the all daemons.
-;  If leave this blank anyone can access the deamon remotly (NSClient still requires a valid password).
-;  The syntax is host or ip/mask so 192.168.0.0/24 will allow anyone on that subnet access
-;allowed_hosts=127.0.0.1/32
-;
-;# USE THIS FILE
-;  Use the INI file as opposed to the registry if this is 0 and the use_reg in the registry is set to 1
-;  the registry will be used instead.
-use_file=1
-allowed_hosts=163.228.10.52
-;
-; # USE SHARED MEMORY CHANNELS
-;  This is the ""new"" way for using the system tray based on an IPC framework on top shared memmory channels and events.
-;  It is brand new and (probably has bugs) so dont enable this unless for testing!
-;  If set to 1 shared channels will be created and system tray icons created and such and such...
-;shared_session=0
-[log]
-;# LOG DEBUG
-;  Set to 1 if you want debug message printed in the log file (debug messages are always printed to stdout when run with -test)
-;debug=1
-;
-;# LOG FILE
-;  The file to print log statements to
-;file=nsclient.log
-;
-;# LOG DATE MASK
-;  The format to for the date/time part of the log entry written to file.
-;date_mask=%Y-%m-%d %H:%M:%S
-;
-;# LOG ROOT FOLDER
-;  The root folder to use for logging.
-;  exe = the folder where the executable is located
-;  local-app-data = local application data (probably a better choice then the old default)
-;root_folder=exe
-[NSClient]
-;# ALLOWED HOST ADDRESSES
-;  This is a comma-delimited list of IP address of hosts that are allowed to talk to NSClient deamon.
-;  If you leave this blank the global version will be used instead.
-;allowed_hosts=
-;
-;# NSCLIENT PORT NUMBER
-;  This is the port the NSClientListener.dll will listen to.
-port=12489
-;
-;# BIND TO ADDRESS
-;  Allows you to bind server to a specific local address. This has to be a dotted ip adress not a hostname.
-;  Leaving this blank will bind to all avalible IP adresses.
-;bind_to_address=
-;
-;# SOCKET TIMEOUT
-;  Timeout when reading packets on incoming sockets. If the data has not arrived withint this time we will bail out.
-socket_timeout=60
-[NRPE]
-;# NRPE PORT NUMBER
-;  This is the port the NRPEListener.dll will listen to.
-port=5666
-;
-;# COMMAND TIMEOUT
-;  This specifies the maximum number of seconds that the NRPE daemon will allow plug-ins to finish executing before killing them off.
-command_timeout=60
-;
-;# COMMAND ARGUMENT PROCESSING
-;  This option determines whether or not the NRPE daemon will allow clients to specify arguments to commands that are executed.
-allow_arguments=1
-;
-;# COMMAND ALLOW NASTY META CHARS
-;  This option determines whether or not the NRPE daemon will allow clients to specify nasty (as in |`&><'""\[]{}) characters in arguments.
-allow_nasty_meta_chars=1
-;
-;# USE SSL SOCKET
-;  This option controls if SSL should be used on the socket.
-;use_ssl=1
-;
-;# BIND TO ADDRESS
-;  Allows you to bind server to a specific local address. This has to be a dotted ip adress not a hostname.
-;  Leaving this blank will bind to all avalible IP adresses.
-; bind_to_address=
-;
-;# ALLOWED HOST ADDRESSES
-;  This is a comma-delimited list of IP address of hosts that are allowed to talk to NRPE deamon.
-;  If you leave this blank the global version will be used instead.
-;allowed_hosts=
-;
-;# SCRIPT DIRECTORY
-;  All files in this directory will become check commands.
-;  *WARNING* This is undoubtedly dangerous so use with care!
-;script_dir=scripts\
-;
-;# SOCKET TIMEOUT
-;  Timeout when reading packets on incoming sockets. If the data has not arrived withint this time we will bail out.
-;socket_timeout=30
-[Check System]
-;# CPU BUFFER SIZE
-;  Can be anything ranging from 1s (for 1 second) to 10w for 10 weeks. Notice that a larger buffer will waste memory
-;  so don't use a larger buffer then you need (ie. the longest check you do +1).
-;CPUBufferSize=1h
-;
-;# CHECK RESOLUTION
-;  The resolution to check values (currently only CPU).
-;  The value is entered in 1/10:th of a second and the default is 10 (which means ones every second)
-;CheckResolution=10
-;
-;# CHECK ALL SERVICES
-;  Configure how to check services when a CheckAll is performed.
-;  ...=started means services in that class *has* to be running.
-;  ...=stopped means services in that class has to be stopped.
-;  ...=ignored means services in this class will be ignored.
-;check_all_services[SERVICE_BOOT_START]=ignored
-;check_all_services[SERVICE_SYSTEM_START]=ignored
-;check_all_services[SERVICE_AUTO_START]=started
-;check_all_services[SERVICE_DEMAND_START]=ignored
-;check_all_services[SERVICE_DISABLED]=stopped
-[External Script]
-;# COMMAND TIMEOUT
-;  This specifies the maximum number of seconds that the NRPE daemon will allow plug-ins to finish executing before killing them off.
-;command_timeout=60
-;
-;# COMMAND ARGUMENT PROCESSING
-;  This option determines whether or not the NRPE daemon will allow clients to specify arguments to commands that are executed.
-allow_arguments=1
-;
-;# COMMAND ALLOW NASTY META CHARS
-;  This option determines whether or not the NRPE daemon will allow clients to specify nasty (as in |`&><'""\[]{}) characters in arguments.
-allow_nasty_meta_chars=1
-;
-;# SCRIPT DIRECTORY
-;  All files in this directory will become check commands.
-;  *WARNING* This is undoubtedly dangerous so use with care!
-;script_dir=c:\my\script\dir
-[Script Wrappings]
-vbs=cscript.exe //T:30 //NoLogo scripts\lib\wrapper.vbs %SCRIPT% %ARGS%
-ps1=cmd /c echo scripts\%SCRIPT% %ARGS%; exit($lastexitcode) | powershell.exe -command -
-bat=scripts\%SCRIPT% %ARGS%
-[External Scripts]
-;check_es_long=scripts\long.bat
-;check_es_ok=scripts\ok.bat
-;check_es_nok=scripts\nok.bat
-;check_vbs_sample=cscript.exe //T:30 //NoLogo scripts\check_vb.vbs
-;check_powershell_warn=cmd /c echo scripts\powershell.ps1 | powershell.exe -command -
-dfsdiag_cmd=dfsdiag  $ARG1$  $ARG2$
-dfsrdiag_cmd=dfsrdiag  $ARG1$  $ARG2$
-check_log=perl scripts\check_log2.pl -l ""$ARG1$"" -p1 ""$ARG2$"" -p2 ""$ARG3$"" -p3 ""$ARG4$"" -p4 ""$ARG5$""
-check_log2=perl scripts\check_log2.pl -l ""$ARG1$"" -t ""$ARG2$"" -p1 ""$ARG3$"" -p2 ""$ARG4$"" -p3 ""$ARG5$"" -p4 ""$ARG6$""
-check_log3=perl scripts\check_log2.pl -l ""$ARG1$"" -t ""$ARG2$"" -n ""$ARG3$"" -p1 ""$ARG4$"" -p2 ""$ARG5$"" -p3 ""$ARG6$"" -p4 ""$ARG7$""
-check_vlog=perl scripts\check_vlog.pl -l ""$ARG1$"" -t ""$ARG2$"" -p1 ""$ARG3$"" -p2 ""$ARG4$"" -p3 ""$ARG5$"" -p4 ""$ARG6$""
-check_vlog2=perl scripts\check_vlog.pl -l ""$ARG1$"" -t ""$ARG2$"" -p1 ""$ARG3$"" -p2 ""$ARG4$"" -p3 ""$ARG5$"" -p4 ""$ARG6$"" -w ""$ARG7$"" -c ""$ARG8$""
-check_log_mp1=perl scripts\check_log_mp.pl -l ""$ARG1$"" -p ""$ARG2$"" -t ""$ARG3$"" -w $ARG4$ -c $ARG5$
-check_log_mp2=perl scripts\check_log_mp.pl -l ""$ARG1$"" -p ""$ARG2$"" -t ""$ARG3$"" -n ""$ARG4$"" -w $ARG5$ -c $ARG6$
-check_log_mp3=perl scripts\check_log_mp.pl -l ""$ARG1$"" -p ""$ARG2$"" -t ""$ARG3$"" -n ""$ARG4$"" -s ""$ARG5$"" -w $ARG6$ -c $ARG7$
-check_foldersize=c:\windows\system32\cscript.exe //NoLogo //T:30 ""C:\Program Files\NSClient++\scripts\check_folder_size.vbs"" ""$ARG1$"" $ARG2$ $ARG3$
-check_foldersize2=c:\windows\system32\cscript.exe //NoLogo //T:30 ""C:\Program Files\NSClient++\scripts\check_folder_size.vbs"" ""$ARG1$"" $ARG2$ $ARG3$ $ARG4$ $ARG5$
-check_dfsutil=perl scripts\check_win_dfsutil.pl -H $ARG1$ $ARG2$
-dfsutil_cmd=dfsutil  $ARG1$  $ARG2$
-check_dfsdiag=perl scripts\check_win_dfsdiag.pl -H $ARG1$ -A ""$ARG2$""
-check_dfsrdiag=perl scripts\check_win_dfsrdiag_backlog.pl -H $ARG1$ -A ""$ARG2$"" -w $ARG3$ -c $ARG4$
-check_file_mtime=perl scripts\check_file_mtime.pl -f ""$ARG1$"" -t ""$ARG2$"" ""-$ARG3$""
-;Citrix WMI monitoring plugins
-check_licenses_pl=perl scripts\check_licenseserver.pl -w $ARG1$ -c $ARG2$
-check_licenses_vbs=cscript.exe //nologo ""C:\Program Files\NSClient++\scripts\check_ctx_lic.vbs"" $ARG1$ $ARG2$ $ARG3$ $ARG4$
-check_num_servers_in_zone=cscript.exe //nologo ""C:\Program Files\NSClient++\scripts\check_num_servers_in_zone.vbs"" $ARG1$ $ARG2$
-check_active_session=cscript.exe //nologo ""C:\Program Files\NSClient++\scripts\check_active_session.vbs"" $ARG1$ $ARG2$
-check_disconnected_session=cscript.exe //nologo ""C:\Program Files\NSClient++\scripts\check_disconnected_session.vbs"" $ARG1$ $ARG2$
-check_metaframe_application_loadlevel=cscript.exe //nologo ""C:\Program Files\NSClient++\scripts\check_metaframe_application_loadlevel.vbs"" $ARG1$ $ARG2$
-check_metaframe_server_loadlevel=cscript.exe //nologo ""C:\Program Files\NSClient++\scripts\check_metaframe_server_loadlevel.vbs"" $ARG1$ $ARG2$
-check_backupexec=cscript.exe scripts\check_backupexec.vbs //Nologo
-check_be=scripts\check_be.exe ""C:\Program Files\Symantec\Backup Exec\Data"" ""$ARG1$"" -w""$ARG2$"" -c""$ARG3$""
-[External Alias]
-alias_cpu=checkCPU warn=80 crit=90 time=5m time=1m time=30s
-alias_cpu_ex=checkCPU warn=$ARG1$ crit=$ARG2$ time=5m time=1m time=30s
-alias_mem=checkMem MaxWarn=80% MaxCrit=90% ShowAll=long type=physical type=virtual type=paged type=page
-alias_up=checkUpTime MinWarn=1d MinWarn=1h
-alias_disk=CheckDriveSize MinWarn=10% MinCrit=5% CheckAll FilterType=FIXED
-alias_disk_loose=CheckDriveSize MinWarn=10% MinCrit=5% CheckAll FilterType=FIXED ignore-unreadable
-alias_volumes=CheckDriveSize MinWarn=10% MinCrit=5% CheckAll=volumes FilterType=FIXED
-alias_volumes_loose=CheckDriveSize MinWarn=10% MinCrit=5% CheckAll=volumes FilterType=FIXED ignore-unreadable
-alias_service=checkServiceState CheckAll
-alias_service_ex=checkServiceState CheckAll ""exclude=Net Driver HPZ12"" ""exclude=Pml Driver HPZ12"" exclude=stisvc
-alias_process=checkProcState ""$ARG1$=started""
-alias_process_stopped=checkProcState ""$ARG1$=stopped""
-alias_process_count=checkProcState MaxWarnCount=$ARG2$ MaxCritCount=$ARG3$ ""$ARG1$=started""
-alias_process_hung=checkProcState MaxWarnCount=1 MaxCritCount=1 ""$ARG1$=hung""
-alias_event_log=CheckEventLog file=application file=system MaxWarn=1 MaxCrit=1 ""filter=generated gt -2d AND severity NOT IN ('success', 'informational') AND source != 'SideBySide'"" truncate=800 unique descriptions ""syntax=%severity%: %source%: %message% (%count%)""
-alias_file_size=CheckFiles ""filter=size > $ARG2$"" ""path=$ARG1$"" MaxWarn=1 MaxCrit=1 ""syntax=%filename% %size%"" max-dir-depth=10
-alias_file_age=checkFile2 filter=out ""file=$ARG1$"" filter-written=>1d MaxWarn=1 MaxCrit=1 ""syntax=%filename% %write%""
-alias_sched_all=CheckTaskSched ""filter=exit_code ne 0"" ""syntax=%title%: %exit_code%"" warn=>0
-alias_sched_long=CheckTaskSched ""filter=status = 'running' AND most_recent_run_time < -$ARG1$"" ""syntax=%title% (%most_recent_run_time%)"" warn=>0
-alias_sched_task=CheckTaskSched ""filter=title eq '$ARG1$' AND exit_code ne 0"" ""syntax=%title% (%most_recent_run_time%)"" warn=>0
-alias_updates=check_updates -warning 0 -critical 0
-check_ok=CheckOK Everything is fine!
-[Wrapped Scripts]
-;check_test_vbs=check_test.vbs /arg1:1 /arg2:1 /variable:1
-;check_test_ps1=check_test.ps1 arg1 arg2
-;check_test_bat=check_test.bat arg1 arg2
-;check_battery=check_battery.vbs
-;check_printer=check_printer.vbs
-;check_updates=check_updates.vbs
-; [includes]
-;# The order when used is ""reversed"" thus the last included file will be ""first""
-;# Included files can include other files (be carefull only do basic recursive checking)
-;
-; myotherfile.ini
-; real.ini
-[NSCA Agent]
-;# CHECK INTERVALL (in seconds)
-;   How often we should run the checks and submit the results.
-;interval=5
-;
-;# ENCRYPTION METHOD
-;   This option determines the method by which the send_nsca client will encrypt the packets it sends
-;   to the nsca daemon. The encryption method you choose will be a balance between security and
-;   performance, as strong encryption methods consume more processor resources.
-;   You should evaluate your security needs when choosing an encryption method.
-;
-; Note: The encryption method you specify here must match the decryption method the nsca daemon uses
-;       (as specified in the nsca.cfg file)!!
-; Values:
-;   0 = None    (Do NOT use this option)
-;   1 = Simple XOR  (No security, just obfuscation, but very fast)
-;   2 = DES
-;   3 = 3DES (Triple DES)
-;   4 = CAST-128
-;   6 = xTEA
-;   8 = BLOWFISH
-;   9 = TWOFISH
-;   11 = RC2
-;   14 = RIJNDAEL-128 (AES)
-;   20 = SERPENT
-;encryption_method=14
-;
-;# ENCRYPTION PASSWORD
-;  This is the password/passphrase that should be used to encrypt the sent packets.
-;password=
-;
-;# BIND TO ADDRESS
-;  Allows you to bind server to a specific local address. This has to be a dotted ip adress not a hostname.
-;  Leaving this blank will bind to ""one"" local interface.
-; -- not supported as of now --
-;bind_to_address=
-;
-;# LOCAL HOST NAME
-;  The name of this host (if empty ""computername"" will be used.
-;hostname=
-;
-;# NAGIOS SERVER ADDRESS
-;  The address to the nagios server to submit results to.
-;nsca_host=192.168.0.1
-;
-;# NAGIOS SERVER PORT
-;  The port to the nagios server to submit results to.
-;nsca_port=5667
-;
-;# CHECK COMMAND LIST
-;  The checks to run everytime we submit results back to nagios
-;  Any command(alias/key) starting with a host_ is sent as HOST_COMMAND others are sent as SERVICE_COMMANDS
-;  where the alias/key is used as service name.
-;
-[NSCA Commands]
-;my_cpu_check=checkCPU warn=80 crit=90 time=20m time=10s time=4
-;my_mem_check=checkMem MaxWarn=80% MaxCrit=90% ShowAll type=page
-;my_svc_check=checkServiceState CheckAll exclude=wampmysqld exclude=MpfService
-;host_check=check_ok
-;# REMOTE NRPE PROXY COMMANDS
-;  A list of commands that check other hosts.
-;  Used by the NRPECLient module
-[NRPE Client Handlers]
-check_other=-H 192.168.0.1 -p 5666 -c remote_command -a arguments
-;# LUA SCRIPT SECTION
-;  A list of all Lua scripts to load.
-;[LUA Scripts]
-;scripts\test.lua
-[EventLog]
-debug=0
-buffer_size=512000
-[NRPE Handlers]
-exch_dag=cmd /c echo scripts\ExDAG.ps1; exit $LastExitCode; | powershell.exe -command –
-exch_mail_flow10=cmd /c echo scripts\ExchMailFlow10.ps1; exit $LastExitCode | powershell.exe -command -
-exch_mailboxhealth10=cmd /c echo scripts\ExMailboxhealth10.ps1; exit $LastExitCode | powershell.exe -command –
-exch_mapi10=cmd /c echo scripts\ExchMapi10.ps1; exit $LastExitCode | powershell.exe -command -
-exch_queue_health10=cmd /c echo scripts\ExQueueHealth10.ps1; exit $LastExitCode | powershell.exe -command -
-exch_search10=cmd /c echo scripts\ExchSearch10.ps1; exit $LastExitCode | powershell.exe -command -
-exch_service_alert10=cmd /c echo scripts\ExServiceAlert10.ps1; exit $LastExitCode | powershell.exe -command –
-;#Windows Update Checker
-check_updates=c:\windows\system32\cscript.exe //NoLogo //T:40 ""C:\Program Files\NSClient++\scripts\check_updates.wsf"" $ARG1$
-check_foldersize=c:\windows\system32\cscript.exe //NoLogo //T:30 ""C:\Program Files\NSClient++\scripts\check_folder_size.vbs"" ""$ARG1$"" $ARG2$ $ARG3$
-check_foldersize2=c:\windows\system32\cscript.exe //NoLogo //T:30 ""C:\Program Files\NSClient++\scripts\check_folder_size.vbs"" ""$ARG1$"" $ARG2$ $ARG3$ $ARG4$ $ARG5$
-check_dfsutil=perl scripts\check_win_dfsutil.pl -H $ARG1$ $ARG2$
-dfsutil_cmd=dfsutil  $ARG1$  $ARG2$
-dfsdiag_cmd=dfsdiag  $ARG1$  $ARG2$
-dfsrdiag_cmd=dfsrdiag  $ARG1$  $ARG2$
-check_dfsdiag=perl scripts\check_win_dfsdiag.pl -H $ARG1$ -A ""$ARG2$""
-check_dfsrdiag=perl scripts\check_win_dfsrdiag_backlog.pl -H $ARG1$ -A ""$ARG2$"" -w $ARG3$ -c $ARG4$
-check_file_mtime=perl scripts\check_file_mtime.pl -f ""$ARG1$"" -t ""$ARG2$"" ""-$ARG3$""
-;Citrix WMI monitoring plugins
-check_licenses_pl=perl scripts\check_licenseserver.pl -w $ARG1$ -c $ARG2$
-check_licenses_vbs=cscript.exe //nologo ""C:\Program Files\NSClient++\scripts\check_ctx_lic.vbs"" $ARG1$ $ARG2$ $ARG3$ $ARG4$
-check_num_servers_in_zone=cscript.exe //nologo ""C:\Program Files\NSClient++\scripts\check_num_servers_in_zone.vbs"" $ARG1$ $ARG2$
-check_active_session=cscript.exe //nologo ""C:\Program Files\NSClient++\scripts\check_active_session.vbs"" $ARG1$ $ARG2$
-check_disconnected_session=cscript.exe //nologo ""C:\Program Files\NSClient++\scripts\check_disconnected_session.vbs"" $ARG1$ $ARG2$
-check_metaframe_application_loadlevel=cscript.exe //nologo ""C:\Program Files\NSClient++\scripts\check_metaframe_application_loadlevel.vbs"" $ARG1$ $ARG2$
-check_metaframe_server_loadlevel=cscript.exe //nologo ""C:\Program Files\NSClient++\scripts\check_metaframe_server_loadlevel.vbs"" $ARG1$ $ARG2$
-check_nbu_backup=cmd /c echo scripts\check_nbu_backstat.ps1 -nbuClient $ARG1$; exit $LastExitCode | powershell.exe -command -
-check_nbu_backup2=cscript.exe //nologo ""C:\Program Files\NSClient++\scripts\check_nbu_backstat.vbs""  /nbuClient:$ARG1$ /filePath:""$ARG2$""
-check_dcdiag=scripts\check_ad.exe --dc
-check_dcdiag2=perl scripts\check_ad.pl --dc
-check_dcdiag3=cscript.exe //nologo scripts\check_AD.vbs
-
-Please let me know if any further information is required here. Thank you in advance.
-","1. 
-In the end I was able to track down the issue, and it was very difficult to find. Our administrator had copied/pasted the command line for the NSC.ini file from an MS Word document (a how-to doc) I had shared with them. In the original Word document, the hyphen at the end of the line (after the word command, in the command line given below) had been automatically changed to a slightly longer dash (an en dash) during documentation, and that dash also ended up in the NSC.ini file, which is why the error occurred for this particular command execution. After manually changing the dash back to a plain hyphen in NSC.ini, it worked fine without any issues.
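-For illustration, this is what one of the affected [NRPE Handlers] entries from the config above looks like after the fix; the only change is the trailing character after the word command, which must be a plain ASCII hyphen rather than the en dash Word inserted:
-exch_dag=cmd /c echo scripts\ExDAG.ps1; exit $LastExitCode; | powershell.exe -command -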
-
-",Nagios
-"I am trying to push metrics collected by a Netdata container acting as a parent in a GKE cluster into Google Managed Prometheus. I am following the netdata docs here: https://learn.netdata.cloud/docs/exporting/prometheus-remote-write
-and I realized that I don't know how to write metrics to GMP. Does anyone know what the endpoint of GMP actually is, and if it even supports the Remote Write protocol?
-Thanks!
-","1. It's not implemented yet, thanks @fariya!
-",Netdata
-"I am struggling configuring traefik:v2.6 so I can access a self-hosted netdata instance running on docker. This is my compose file:
-version: ""3""
-services:
-  netdata:
-    image: netdata/netdata
-    volumes:
-      - ""netdataconfig:/etc/netdata""
-      - ""netdatalib:/var/lib/netdata""
-      - ""netdatacache:/var/cache/netdata""
-      - ""/etc/passwd:/host/etc/passwd:ro""
-      - ""/etc/group:/host/etc/group:ro""
-      - ""/proc:/host/proc:ro""
-      - ""/sys:/host/sys:ro""
-      - ""/etc/os-release:/host/etc/os-release:ro""
-    labels:
-      - ""traefik.enable=true""
-      - ""traefik.http.routers.netdata.rule=Host(`netdata.$myserver`)""
-      - ""traefik.http.services.netdata.loadbalancer.server.port=19999""
-      - ""traefik.http.routers.netdata.entrypoints=http""
-      - ""traefik.docker.network=proxy""
-    restart: unless-stopped
-    cap_add:
-      - SYS_PTRACE
-    security_opt:
-      - apparmor:unconfined
-    networks:
-      - proxy
-volumes:
-  netdataconfig:
-  netdatalib:
-  netdatacache:
-networks:
-  proxy:
-    external: true
-
-When I try to access netdata via netdata.$myserver I see the backdrop only but no dashboard:
-
-Checking developer settings in Firefox reveal the following message:
-Loading failed for the <script> with source “http://netdata.$myserver/dashboard-react.js”. [netdata.$myserver:16:1](http://netdata.$myserver/)
-Uncaught ReferenceError: NETDATA is not defined
-    225 main.js:435
-    p (index):16
-    566 main.7d1bdca1.chunk.js:2
-    p (index):16
-    324 main.7d1bdca1.chunk.js:2
-    p (index):16
-    f (index):16
-    e (index):16
-    <anonymous> main.7d1bdca1.chunk.js:2
-
-The documentation has templates for several reverse proxy configurations, but I cannot translate them into settings I might have missed for traefik.
-I tried adding a && Path('/netdata') to the rule traefik.http.routers.netdata.rule=Host('netdata.$myserver')"" but there was no difference. The traefik dashboard does not show any errors related to netdata. What am I missing here?
-","1. If you use a Subpath like /netdata you have to call the directory with a slash at the end, then it will work.
- labels:
-   - ""traefik.enable=true""
-   - ""traefik.http.routers.netdata.entrypoints=web""
-   - ""traefik.http.routers.netdata.rule=Host(`yourdomain`) && PathPrefix(`/netdata`)""
-   - ""traefik.http.routers.netdata.service=netdata""
-   - ""traefik.http.services.netdata.loadbalancer.server.port=19999""
-   - ""traefik.http.routers.netdata.middlewares=netdatapathstrip""
-   - ""traefik.http.middlewares.netdatapathstrip.stripprefix.prefixes=/netdata""
-
-I can reach the dashboard under http://yourdomain/netdata/
-If I call http://yourdomain/netdata I get the same behaviour as you mentioned.
-
-2. It looks like you're using a subdomain.
-That Host value needs to actually resolve to something.
-(as in, the DNS needs to be set up, or an entry needs to be added to the docker host machine's /etc/hosts file.)
-Here's my docker-compose file which is almost verbatim from the netdata documentation.
-version: '3.4'
-
-networks:
-  traefikweb:
-    external: true
-
-services:
-  netdata:
-    image: netdata/netdata
-    container_name: netdata
-    pid: host
-    labels:
-      - ""traefik.enable=true""
-      - ""traefik.ports=19999""
-      - ""traefik.http.routers.netdata.rule=Host(`netdata.traefik.example.com`)""
-      - ""traefik.http.routers.netdata.entrypoints=websecure""
-      - ""traefik.http.routers.netdata.tls=true""   
-    networks:
-      - traefikweb
-    restart: unless-stopped
-    cap_add:
-      - SYS_PTRACE
-      - SYS_ADMIN
-    security_opt:
-      - apparmor:unconfined
-    volumes:
-      - netdataconfig:/etc/netdata
-      - netdatalib:/var/lib/netdata
-      - netdatacache:/var/cache/netdata
-      - /etc/passwd:/host/etc/passwd:ro
-      - /etc/group:/host/etc/group:ro
-      - /proc:/host/proc:ro
-      - /sys:/host/sys:ro
-      - /etc/os-release:/host/etc/os-release:ro
-      - /var/run/docker.sock:/var/run/docker.sock:ro
-
-volumes:
-  netdataconfig:
-  netdatalib:
-  netdatacache:
-
-",Netdata
-"I recently deployed Netdata on Kubernetes. And I wanted to use Netdata to record the status of Managed MySQL hosted in Linode.
-I read the documentation and it said that I could set up a plugin called MySQL Collector. However, the documentation only mentioned how to edit the configuration file directly, not how to configure it for Kubernetes.
-It says to edit /etc/netdata/go.d/mysql.conf, but it is in the container. Perhaps you can do it temporarily by editing it from the shell inside the container, but I don't think that's official.
-How can I set up a MySQL collector in Netdata on Kubernetes?
-I tried creating a folder /etc/netdata/go.d on the Kubernetes node computer anyway just to be sure and created the file there, but still could not do it. I have that folder group in the container, so I think I should be able to set it up there. But if I edit it directly, I think it will be reset after reboot.
-So I came up with a way to set up one volume of /etc/netdata in Kubernetes, but I am not sure if this is a good decision. So I am not doing this because it seems dangerous.
-That said, I would like to know how to officially set up a MySQL collector plugin to Netdata on Kubernetes.
-","1. I think you need to pass the config into the helm chart via whatever values.yaml you are using.
-Here are some of the defaults for example.
-So I think in your case:
-    go.d:
-      enabled: true
-      path: /etc/netdata/go.d.conf
-      data: |
-        modules:
-          pulsar: no
-          prometheus: yes
-          mysql: yes
-
-    mysql:
-      enabled: true
-      path: /etc/netdata/go.d/mysql.conf
-      data: |
-        update_every: 1
-        autodetection_retry: 0
-        jobs:
-          - name: some_name1
-          - name: some_name1
-
-something like above to pass in the relevant config files (as per docs you read) into k8s via the Netdata helmchart.
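-Once the overrides are in place, you would apply them when installing or upgrading the chart. A minimal sketch, assuming the chart repo was added as netdata and your overrides live in override.yaml:
-helm upgrade --install netdata netdata/netdata \
-  --namespace monitoring \
-  -f override.yaml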
-",Netdata
-"I have a Quarkus application (consuming/producing messages from multiple Kafka topics) deployed on an AWS Kubernetes cluster. I want to monitor/observe my application. Right now, I am using the Quarkus OpenTelemtry library to send metrics to New Relic which is very straightforward (provide distributed tracing as well with Kafka events out of the box), but it doesn't contain JVM metrics and I can't create SLO for New Relic OpenTelemetry service, there are other features which are not available for OpenTelemetry but available with the Java agent.
-Is there a way to get JVM metrics with OpenTelemetry? If not then will it be a good idea to move to java agent?
-Please suggest which one should we go with, New Relic OpenTelemtry or New Relic Java agent.
-","1. Yes, there is.
-Metrics in Quarkus are currently implemented with Micrometer and you have an OTLP registry (HTTP protocol only) that you can use to send those metrics over.
-The dependency from Quarkiverse: https://docs.quarkiverse.io/quarkus-micrometer-registry/dev/micrometer-registry-otlp.html
-<dependency>
-   <groupId>io.quarkiverse.micrometer.registry</groupId>
-   <artifactId>quarkus-micrometer-registry-otlp</artifactId>
-</dependency>
-
-Also there are plans in the Quarkus roadmap around this area, on the Quarkus mailing list: https://groups.google.com/g/quarkus-dev/c/y5-ojIVsa_M/m/4wquJi4bBQAJ
-",New Relic
-"I am using OpenSearch version 7.10.2 and encountering an issue with the match_phrase_prefix query on certain asset numbers. My asset_number field is analyzed using a lowercase analyzer. Here is the detailed setup:
-Index Settings:
-{
-  ""settings"": {
-    ""analysis"": {
-      ""normalizer"": {
-        ""lowercase_normalizer"": {
-          ""type"": ""custom"",
-          ""filter"": [""lowercase""]
-        }
-      }
-    }
-  },
-  ""mappings"": {
-    ""_routing"": {
-      ""required"": true
-    },
-    ""properties"": {
-      ""asset_number"": {
-        ""type"": ""text"",
-        ""fields"": {
-          ""keyword"": {
-            ""type"": ""keyword"",
-            ""normalizer"": ""lowercase_normalizer""
-          }
-        }
-      }
-    }
-  }
-}
-
-Problem Description
-I have an index with asset_number values and am trying to run a match_phrase_prefix query to fetch assets starting with specific prefixes. The query works for some prefixes but not for others. Here are the results
-
-Working: PB1234, PE1234, PG1234, PI1234, PJ1234, PK1234, PL1234, PM1234, PN1234, AH1234, BH1234
-Not working: PC1234, PD1234, PF1234, PH1234, AC1234, BC1234, AD1234, BD1234, CA1234, CB1234, CC1234, CD1234, CE1234, DA1234, DB1234, DC1234, FA1234
-
-The query is supposed to match asset numbers that start with specific prefixes like ""PC"", ""PD"", etc., but it fails for many of them. But when I try to search for pc1 or pd1, I am getting the result
-Query Used
-{
-  ""query"": {
-    ""bool"": {
-      ""must"": [
-        {
-          ""match_phrase_prefix"": {
-            ""asset_number"": ""pc""
-          }
-        }
-      ]
-    }
-  }
-}
-
-Any suggestions on what might be causing the query to fail for specific prefixes? I want to know the actual reason behind it, because the same data and the same query work perfectly fine in the local and staging environments.
-Additional Information:
-
-OpenSearch version: 7.10.2
-Lucene version: 9.4.2
-Query works for some prefixes but not others even though they are in the same index.
-
-","1. I am not sure if this will help, but I tried to do this and it worked for me:
-First adding an index with mapping and analyzer
-PUT /stacktest
-{
-  ""settings"": {
-    ""analysis"": {
-      ""normalizer"": {
-        ""lowercase_normalizer"": {
-          ""type"": ""custom"",
-          ""filter"": [""lowercase""]
-        }
-      }
-    }
-  },
-  ""mappings"": {
-    ""_routing"": {
-      ""required"": true
-    },
-    ""properties"": {
-      ""asset_number"": {
-        ""type"": ""text"",
-        ""fields"": {
-          ""keyword"": {
-            ""type"": ""keyword"",
-            ""normalizer"": ""lowercase_normalizer""
-          }
-        }
-      }
-    }
-  }
-}
-
-Then adding the data (with routing)
-POST /stacktest/_doc/PB1234?routing=PB1234
-{
-  ""asset_number"": ""PB1234""
-}
-
-POST /stacktest/_doc/PE1234?routing=PE1234
-{
-  ""asset_number"": ""PE1234""
-}
-
-POST /stacktest/_doc/PG1234?routing=PG1234
-{
-  ""asset_number"": ""PG1234""
-}
-
-POST /stacktest/_doc/PI1234?routing=PI1234
-{
-  ""asset_number"": ""PI1234""
-}
-.
-.
-.
-
-Then doing the search
-GET /stacktest/_search
-{
-  ""query"": {
-    ""bool"": {
-      ""must"": [
-        {
-          ""match_phrase_prefix"": {
-            ""asset_number"": ""pc""
-          }
-        }
-      ]
-    }
-  }
-}
-
-Here is the result:
-{
-  ""took"": 3,
-  ""timed_out"": false,
-  ""_shards"": {
-    ""total"": 1,
-    ""successful"": 1,
-    ""skipped"": 0,
-    ""failed"": 0
-  },
-  ""hits"": {
-    ""total"": {
-      ""value"": 1,
-      ""relation"": ""eq""
-    },
-    ""max_score"": 2.9618306,
-    ""hits"": [
-      {
-        ""_index"": ""stacktest"",
-        ""_id"": ""PC1234"",
-        ""_score"": 2.9618306,
-        ""_routing"": ""PC1234"",
-        ""_source"": {
-          ""asset_number"": ""PC1234""
-        }
-      }
-    ]
-  }
-}
-
-The search worked for all the cases. I suspect your data is not actually being inserted into the index (maybe wrong routing or something similar), because your query looks totally correct; I tested it and it works.
-My answer may not be too helpful on its own, but it should tell you that the problem is not the query; it is somewhere else.
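-One way to double-check that the data is really in the index (and routed as expected) is to query the keyword sub-field directly. A minimal sketch against the stacktest index used above:
-GET /stacktest/_search
-{
-  ""query"": {
-    ""term"": {
-      ""asset_number.keyword"": ""pc1234""
-    }
-  }
-}
-If this returns no hits even though the document exists, the problem is on the indexing/routing side rather than in the match_phrase_prefix query itself.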
-",OpenSearch
-"I'm new with TSDB and I have a lot of temperature sensors to store in my database with one point per second. Is it better to use one unique metric per sensor, or only one metric (temperature for example) with distinct tags depending sensor??
-I searched on Internet what is the best practice, but I didn't found a good answer...
-Thank you! :-)
-Edit:
-I will have 8 types of measurements (temperature, setpoint, energy, power,...) from 2500 sources
-","1. If you are storing your data in InfluxDB, I would recommend storing all the metrics in a single measurement and using tags to differentiate the sources, rather than creating a measurement per source. The reason being that you can trivially merge or decompose the metrics using tags within a measurement, but it is not possible in the newest InfluxDB to merge or join across measurements.
-Ultimately the decision rests with both your choice of TSDB and the queries you care most about running.
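-To make that concrete, a minimal line-protocol sketch of the single-measurement approach (the measurement and tag names here are illustrative, not prescribed): one measurement, one tag identifying the source, and one field per measurement type.
-environment,sensor_id=sensor-001 temperature=42.2,setpoint=45.0,power=2.1 1628000000000000000
-environment,sensor_id=sensor-002 temperature=39.8,setpoint=45.0,power=1.7 1628000000000000000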
-
-2. For comparison purposes, in Axibase Time-Series Database you can store temperature as a metric and sensor id as entity name. ATSD schema has a notion of entity which is the name of system for which the data is being collected. The advantage is more compact storage and the ability to define tags for entities themselves, for example sensor location, sensor type etc. This way you can filter and group results not just by sensor id but also by sensor tags.
-To give you an example, in this blog article 0601911 stands for entity id - which is EPA station id. This station collects several environmental metrics and at the same time is described with multiple tags in the database: http://axibase.com/environmental-monitoring-using-big-data/.
-The bottom line is that you don't have to stage a second database, typically a relational one, just to store extended information about sensors, servers etc. for advanced reporting.
-UPDATE 1: Sample network command: 
-series e:sensor-001 d:2015-08-03T00:00:00Z m:temperature=42.2 m:humidity=72 m:precipitation=44.3
-
-Tags that describe sensor-001 such as location, type, etc are stored separately, minimizing storage footprint and speeding up queries. If you're collecting energy/power metrics you often have to specify attributes to series such as Status because data may not come clean/verified. You can use series tags for this purpose.
-series e:sensor-001 d:2015-08-03T00:00:00Z m:temperature=42.2 ... t:status=Provisional
-
-
-3. You should use one metric per sensor. You probably won't be needing to aggregate values from different temperature sensors, but you will be needing to aggregate values of a given sensor (average over a minute for instance).
-Metrics correspond to data coming from the same source, or at least data you will be likely to aggregate. You can create almost as many metrics as you want (up to 16 million metrics in OpenTSDB for instance).
-Tags make distinctions between these pieces of data. For instance, you could tag data differently if they suddenly change a lot, in order to retrieve only relevant data if needed, without losing the rest of the data. Although for a temperature sensor getting data every second, the best would probably be to filter and only store data when the value changed...
-Best practices are summed up here
-",OpenTSDB
-"I have a table that will sort of resemble a [metadata table][1].
-        CREATE TABLE IF NOT EXISTS sensor1 (
-        datetime TIMESTAMPTZ NOT NULL,
-        device_id TEXT NOT NULL,
-        field_name TEXT NOT NULL,
-        device_value FLOAT NOT NULL
-        );
-
-In this table the datetime and device_id will not be unique.
-I would like to make this a hypertable and be able to make indexes out of the device_id and datetime columns (the two indexes combined will also not be unique).
-My question is when and how should I make the indexes? Before or after making the hypertable?
-My read query will be
-        SELECT * 
-        FROM sensor1 
-        WHERE device_id = '5555' 
-        AND datetime >= 'Feb 10 2024'
-        AND datetime < 'Feb 20 2024';
-
-or
-SELECT device_id, 
-datetime,
-MAX(CASE WHEN field_name = '[field_name]' THEN device_value END) AS ""[field_name]"",
-MAX(CASE WHEN field_name = '[field_name]' THEN device_value END) AS ""[field_name]"",
-MAX(CASE WHEN field_name = '[field_name]' THEN device_value END) AS ""[field_name]""
-FROM sensor1
-GROUP BY device_id, datetime
-ORDER BY datetime DESC;
-
-  [1]: https://www.timescale.com/learn/best-practices-for-time-series-metadata-tables
-
-","1. you don't need to worry about the indices as if you make your primary key combining time and the device_id, it will be already creating the index for you.
-https://docs.timescale.com/use-timescale/latest/hypertables/hypertables-and-unique-indexes/#create-a-unique-index-on-a-hypertable
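-Since datetime and device_id are not unique in the schema above, a plain (non-unique) composite index gives the same read benefit. A minimal sketch, assuming the sensor1 table from the question (TimescaleDB propagates indexes created on the hypertable to its chunks, and they can be created before or after calling create_hypertable):
-SELECT create_hypertable('sensor1', 'datetime');       -- partition the table on the time column
-CREATE INDEX IF NOT EXISTS idx_sensor1_device_time     -- matches WHERE device_id = ... AND datetime range queries
-    ON sensor1 (device_id, datetime DESC);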
-",OpenTSDB
-"Given a particular tag value, is there a way to obtain the list of all metrics associated to it?
-Example
-
-tags (key=value)
-
-host=box1.onenet.tv
-host=box2.onenet.tv
-
-metrics
-
-net.bytes_received
-net.bytes_sent
-net.error_count
-
-metrics associated to tag value ""box1.onenet.tv"" 
-
-net.bytes_received
-net.bytes_sent
-net.error_count""
-
-
-How to obtain ""net.bytes_received,net.bytes_sent,net.error_count"" using tag value ""box1.onenet.tv""?
-","1. No, i don't think that you can find data without a given metric but only by giving a tag name. The metric name is the biggest aggregation level. Below one metric you can only use  tags to find special areas of data.
-Perhaps you have to shift your metric name down into the tagnames, so that you define a new common metric name which fits on all your possible aggregation queries. Then you can search for all tags with your old metric name in the metric with your new common metric name. Hope that was understandable. 
-By the way and more detailed for your information:
-In your query you can use wildcards for your tags (e.g.  tag1=*), but not for the metric name. 
-Here is an overview about what your query consist of (see: http://opentsdb.net/query-execution.html):
-All queries have:
-  - A metric name for which to retrieve data;
-  - A start time;
-  - A stop time (optional, if not set, assumed to be ""now"");
-  - A possibly empty set of tags to filter the data
-      (e.g. host=foo, or wildcards such as host=*);
-  - An aggregation function (e.g. sum or avg);
-    Whether or not to get the ""rate of change"" the data (in mathematical terms: 
-    the first derivative). Optionally: a downsampling interval (e.g. 10 minutes) 
-    and downsampling function (e.g. avg)
-
-And I think it is very useful to read the OpenTSDB documentation, especially about metrics and tags. See here: http://opentsdb.net/metrics.html
-Right now, you cannot combine two metrics into one plot line.
-This means you want a metric to be the biggest possible aggregation point. 
-If you want to drill down to specifics within a metric, use tags. 
-
-
-2. FYI, it is possible with OpenTSDB 2.1.0, which introduces metadata handling:  Google Groups: Pulling (meta)data from OpenTSDB
-Remember to enable metadata parsing and then you can pull all the data you ever dreamed of :-)
-
-3. You could use the label endpoint to retrieve this:
-https://<prometheus-url>/api/v1/label/__name__/values?match[]={host=""box1.onenet.tv""}
-
-This will show you all label __name__ (i.e. the metric names) associated with that filter (host=box1.onenet.tv).
-This will provide a json with a list under the data key with all metrics that match the filter.
-In your case the response to that URL will be something like
-{""status"": ""success"", ""data"": [""net.bytes_received"" ,""net.bytes_sent"", ""net.error_count""]}
-
-",OpenTSDB
-"I have to metric one metric is total count, another is error count, they have same labels, but different label-val combination, total count metric's label-vals combinations mush constains error count metric's label-val combination,such as:
-total count metric:
-method_totalCall{host=""h1"", dst=""a1"", src=""a2""}@t1 = 24
-method_totalCall{host=""h1"", dst=""b1"", src=""b2""}@t1 = 30
-method_totalCall{host=""h1"", dst=""c1"", src=""c2""}@t1 = 3
-method_totalCall{host=""h1"", dst=""d1"", src=""d2""}@t1 = 6
-method_totalCall{host=""h1"", dst=""e1"", src=""e2""}@t1 = 21
-
-this metric means method call count in $host from $src to $dest
-error count metric:
-method_totalCall_ERROR{host=""h1"", dst=""c1"", src=""c2""}@t1 = 1
-method_totalCall_ERROR{host=""h1"", dst=""d1"", src=""d2""}@t1 = 2
-method_totalCall_ERROR{host=""h1"", dst=""e1"", src=""e2""}@t1 = 3
-
-this metric means method call error count in $host from $src to $dest
-I want to get a metric that gives the method call success count on $host from $src to $dest,
-so I wrote this PromQL query:
-method_totalCall{host='h1'} - method_totalCall_ERROR{host='h1'}
-
-But if I use the PromQL query:
-method_totalCall{host='h1'} - method_totalCall_ERROR{host='h1'}
-
-I will get a new temporary metric :
-methed_succ_count{host=""h1"", dst=""c1"", src=""c2""}@t1 = 2 
-methed_succ_count{host=""h1"", dst=""d1"", src=""d2""}@t1 = 4
-methed_succ_count{host=""h1"", dst=""e1"", src=""e2""}@t1 = 18
-
-
-(come from:method_totalCall{host=""h1"", dst=""c1"", src=""c2""}@t1 - method_totalCall_ERROR{host=""h1"", dst=""c1"", src=""c2""}@t1)
-(come from:method_totalCall{host=""h1"", dst=""d1"", src=""d2""}@t1 - method_totalCall_ERROR{host=""h1"", dst=""d1"", src=""d2""}@t1)
-(come from:method_totalCall{host=""h1"", dst=""e1"", src=""e2""}@t1 - method_totalCall_ERROR{host=""h1"", dst=""e1"", src=""e2""}@t1)
-
-but for label
-{host=""h1"", dst=""a1"", src=""a2""}
-{host=""h1"", dst=""b1"", src=""b2""}
-
-the values are missing in the new metric, which means I don't get the success count on host h1 from the specific $src (a1, b1) to the specific $dest (a2, b2). How should I modify the PromQL query? I want to get values like this:
-methed_succ_count{host=""h1"", dst=""a1"", src=""a2""}@t1 = 24
-methed_succ_count{host=""h1"", dst=""b1"", src=""b2""}@t1 = 30
-methed_succ_count{host=""h1"", dst=""c1"", src=""c2""}@t1 = 2 
-methed_succ_count{host=""h1"", dst=""d1"", src=""d2""}@t1 = 4
-methed_succ_count{host=""h1"", dst=""e1"", src=""e2""}@t1 = 18
-
-This means that when no errors occur on host h1 from a specific $src (a1, b1) to a specific $dst (a2, b2), the error count from (a1, b1) to (a2, b2) is zero, and the success count from (a1, b1) to (a2, b2) equals the total count.
-The TSDB I use is VictoriaMetrics.
-","1. Essentially what you are trying to do, is to create a left join.
-PromQL doesn't have built-in operator for this, but it can be emulated with or.
-In your particular case, use
-method_totalCall{host='h1'} - method_totalCall_ERROR{host='h1'}
- or method_totalCall{host='h1'}
-
-For more details on the matter, check this post by Brian Brazil.
-
-A side note regarding metric naming: your metric names don't conform to the best practices.
-Additionally, consider (if possible) changing your instrumentation so that you expose a metric with a single name plus an additional label result containing the values Success or Failure. This way you have the values split by result, and if you need the aggregated total call count, you can use sum without (result) (method_calls_total).
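-An equivalent formulation that some find easier to read substitutes an explicit zero for the missing error series before subtracting (plain PromQL, also supported by VictoriaMetrics):
-method_totalCall{host='h1'}
-  - (method_totalCall_ERROR{host='h1'} or method_totalCall{host='h1'} * 0)
-For label combinations with no recorded errors, the right-hand side falls back to 0, so the result equals the total count, which matches the desired output above.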
-",OpenTSDB
-"I'm using the following code to create 100 datapoints in tsdb from 0 till 99:
-package main
-
-import (
-    ""context""
-    ""fmt""
-    ""github.com/prometheus/prometheus/model/labels""
-    ""github.com/prometheus/prometheus/storage""
-    ""github.com/prometheus/prometheus/tsdb""
-    ""os""
-    ""time""
-)
-
-func main() {
-    // Create a new TSDB instance
-    db, err := tsdb.Open(
-        ""./data"", // directory where the data will be stored
-        nil,      // a logger (can be nil for no logging)
-        nil,      // an optional prometheus.Registerer
-        tsdb.DefaultOptions(),
-        nil,
-    )
-    if err != nil {
-        fmt.Println(""Error opening TSDB:"", err)
-        os.Exit(1)
-    }
-    defer db.Close()
-
-    // Create a new appender
-    app := db.Appender(context.Background())
-
-    // Create labels for the gauge time series
-    lbls := labels.FromStrings(""__name__"", ""example_gauge"", ""type"", ""gauge"")
-
-    // Initialize a SeriesRef
-    var ref storage.SeriesRef
-
-    startTimestamp := time.Now().Add(-1 * time.Hour).Unix()
-    // Add some data points
-    for i := 0; i < 100; i++ {
-        var err error
-        ref, err = app.Append(ref, lbls, (startTimestamp+int64(i))*1000, float64(i))
-        if err != nil {
-            fmt.Println(""Error appending:"", err)
-            os.Exit(1)
-        }
-    }
-
-    // Commit the data
-    err = app.Commit()
-    if err != nil {
-        fmt.Println(""Error committing:"", err)
-        os.Exit(1)
-    }
-}
-
-
-It works fine except for one thing: when I issue the following PromQL query: example_gauge{type=""gauge""} I get 300 points in the response; the first 100 are from 0 till 99 as expected, but the last 200 points all have the same value 99, and my chart looks like this:
-
-Why does this happen?
-","1. Big thanks to @markalex and his suggestions. He pointed me to the explanation of staleness here: https://prometheus.io/docs/prometheus/latest/querying/basics/#staleness and I was able to update my example, so now it generates data that I need:
-Here I am adding a hole without data, i.e. marking the time series as stale and then unmarking it:
-
-And here is the code that does it:
-func main() {
-    // Create a new TSDB instance
-    db, err := tsdb.Open(
-        ""./data"", // directory where the data will be stored
-        nil,      // a logger (can be nil for no logging)
-        nil,      // an optional prometheus.Registerer
-        tsdb.DefaultOptions(),
-        nil,
-    )
-    if err != nil {
-        fmt.Println(""Error opening TSDB:"", err)
-        os.Exit(1)
-    }
-    defer db.Close()
-
-    // Create a new appender
-    app := db.Appender(context.Background())
-
-    // Create labels for the gauge time series
-    lbls := labels.FromStrings(""__name__"", ""example_gauge"", ""type"", ""gauge"")
-
-    // Initialize a SeriesRef
-    var ref storage.SeriesRef
-
-    startTimestamp := time.Now().Add(-1 * time.Hour).Unix()
-    // Add some data points
-    for i := 0; i < 200; i++ {
-        var err error
-        if i < 100 || i > 120 {
-            ref, err = app.Append(ref, lbls, (startTimestamp+int64(i))*1000, float64(i))
-        } else {
-            // Mark time series as stale by appending a NaN
-            ref, err = app.Append(ref, lbls, (startTimestamp+100)*1000, math.NaN())
-        }
-        if err != nil {
-            fmt.Println(""Error appending:"", err)
-            os.Exit(1)
-        }
-    }
-
-    // Commit the data
-    err = app.Commit()
-    if err != nil {
-        fmt.Println(""Error committing:"", err)
-        os.Exit(1)
-    }
-}
-
-I know that AI-generated answers are not allowed, but nevertheless I would like to attach ChatGPT's explanation of staleness, since the explanation in the Prometheus documentation sounds a bit vague to me:
-
-Sure, I can break this down for you. Let's use a weather monitoring
-example to help clarify the concept of ""staleness"" in Prometheus.
-Timestamps and Data Sampling: Imagine you have sensors recording
-temperature, humidity, and wind speed. These sensors push data to
-Prometheus at different time intervals. Let's say temperature data
-comes every 1 minute, humidity every 2 minutes, and wind speed every 5
-minutes. When you want to query an average temperature for the last 10
-minutes, Prometheus picks a set of timestamps to sample the data,
-regardless of when the actual data came in. This helps Prometheus to
-compare or aggregate multiple time series that may not align exactly
-in time.
-Marking as Stale: If one of your sensors (let's say the wind speed
-sensor) suddenly stops sending data, Prometheus marks that time series
-as ""stale"". This is because it hasn't received any new sample for that
-specific series.
-Effect on Query: Now, if you run a query to get the average wind speed
-after this time series has been marked as stale, Prometheus won't
-return a value for it. It's like saying, ""Hey, I can't trust this data
-because it hasn't been updated recently.""
-Returning to Normal: If the wind speed sensor starts working again and
-sends a new sample, the time series is no longer stale, and new
-queries will return values as usual.
-5-Minute Threshold: If Prometheus doesn't find any sample within 5
-minutes before a query's sampling timestamp, it treats it as if it's
-stale. This means even if the time series isn't marked as stale,
-you'll still get no value if the latest sample is older than 5
-minutes.
-Timestamps in Scrapes: If your sensor sends data with timestamps, then
-only the 5-minute rule applies, ignoring the staleness flag.
-So, in essence, staleness in Prometheus helps to keep your metrics
-accurate by ignoring time series that haven't been updated recently.
-Hope that clears things up!
-
-",OpenTSDB
-"As part of our spring application, we are using Spring Sleuth to inject traceid & spanid into the requests. This neatly works with SL4J via MDC integration to propagate to the logs as well.
-But we are running into issues because our organization does not use the B3 headers that Sleuth appears to be tightly coupled with. So we are looking at alternatives, such as using a custom request header like ""x-trace-id"" that could be injected into the traces.
-Our traceability is still via centralized logging like Splunk. We do not yet have a centralized collector like Zipkin, and hence sampling is not relevant yet. So the immediate use case is to ensure log traceability; once we have a central collector for tracing, we hope sampling will be available to use out of the box.
-","1. Sleuth is not tightly coupled with B3, it supports AWS, B3, W3C, and custom (B3 is the default): see the docs about Context Propagation
-You can change the context propagation mechanism, see docs: How to Change The Context Propagation Mechanism?
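-As a hedged sketch (the exact property name should be verified against the Sleuth version in use), switching the propagation type is a configuration change, for example in application.yml:
-spring:
-  sleuth:
-    propagation:
-      type: w3c   # or aws, b3; 'custom' requires registering your own propagation factory bean, as described in the docs above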
-",OpenTracing
-"JaegerExporter is no longer supported so I am attempting to convert to use OTLPTraceExporter ,from what I can tell I should be able to just configure it with the url and it should work, but clearly, I am missing something, any help even pointing in the right direction would be greatly appreciated.
-Working Depricated code
-const jaegerExporter = new JaegerExporter({
-  endpoint: process.env.JAEGER_ENDPOINT, 
-});
-
-But when I convert it to the following, no tracing data comes through:
-const jaegerExporter = new OTLPTraceExporter({
-  url: process.env.JAEGER_ENDPOINT,
-}); 
-
-my Tracing code
-export const Tracer = new NodeSDK({
-  resource: new Resource({
-    [SemanticResourceAttributes.SERVICE_NAME]: process.env.JAEGER_SERVICE_NAME || 'node-be-unset',
-  }),
-  traceExporter: jaegerExporter,
-  metricReader: prometheusExporter,
-  contextManager: new AsyncLocalStorageContextManager(),
-  
-  textMapPropagator: new CompositePropagator({
-    propagators: [
-      new JaegerPropagator(),
-      new W3CTraceContextPropagator(),
-      new W3CBaggagePropagator(),
-      new B3Propagator(),
-      new B3Propagator({
-        injectEncoding: B3InjectEncoding.MULTI_HEADER,
-      }),
-    ],
-  }),
-  instrumentations: [
-    getNodeAutoInstrumentations(),
-  ],
-});
-
-A bit more info: this is a Node app using the NestJS framework.
-","1. After some digging, I found I just need to remove the traceExporter option completly from nodeSdk, and then set the variables for jaeger in my enviroment
-
-OTEL_TRACES_EXPORTER=jaeger
-OTEL_EXPORTER_JAEGER_ENDPOINT=http://localhost:14268/api/traces
-
-
-based on https://github.com/open-telemetry/opentelemetry-specification/blob/6ce62202e5407518e19c56c445c13682ef51a51d/specification/sdk-environment-variables.md#jaeger-exporter
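-If you prefer to keep the OTLPTraceExporter path instead, recent Jaeger versions accept OTLP directly on ports 4317 (gRPC) and 4318 (HTTP), and the equivalent standard OpenTelemetry environment variables would look roughly like this (a sketch; adjust the host and port to your setup):
-OTEL_TRACES_EXPORTER=otlp
-OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
-OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318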
-",OpenTracing
-"I am trying to setup jaeger-all-in-one on one windows server [without Docker] with Badger DB for persistent storage to test it.
-Used the following config file to run the Jaeger by command
-jaeger-all-in-one.exe"" --config-file=""C:\Users\Administrator\Downloads\jaeger-config.yaml""
-# Global options
-sampler:
-  type: const
-  param: 1  # 1 means to sample all traces
-
-log-level: debug
-storage:
-  type: badger  # Use Badger as storage backend
-  options:
-    directory-key: C:\Program Files\Jaeger\Badger\Data  # Path to store Badger data
-    directory-value: C:\Program Files\Jaeger\Badger\Data # Path to store Badger value data
-    max-cache-size: 10MB  # Size of Badger cache
-    gc-interval: 10m  # Badger garbage collection interval
-    key-timestamp: true  # Enable key timestamps for better querying
-    ephemeral: false  # Set ephemeral to false for persistent storage
-    span-retention: ""7d""  # Retain trace data for 7 days
-    dependency-retention: ""7d""  # Retain dependency data for 7 days
-
-# HTTP server settings
-http:
-  port: 16686  # Port for the Jaeger query service, adjust as necessary
-
-# gRPC server settings
-grpc:
-  port: 4317  # Port for the Jaeger collector service, adjust as necessary
-
-But Jaeger is not using Badger storage; instead it uses memory storage (""msg"":""Memory storage initialized""). The entire debug log is attached below for reference:
-2024/04/15 04:57:46 maxprocs: Leaving GOMAXPROCS=4: CPU quota undefined
-2024/04/15 04:57:46 application version: git-commit=ecbae67ea32f189df1ddb4ec2da46d5fcd328b03, git-version=v1.56.0, build-date=2024-04-03T19:57:40Z
-{""level"":""info"",""ts"":1713157066.1173787,""caller"":""flags/service.go:110"",""msg"":""Mounting metrics handler on admin server"",""route"":""/metrics""}
-{""level"":""info"",""ts"":1713157066.1173787,""caller"":""flags/service.go:116"",""msg"":""Mounting expvar handler on admin server"",""route"":""/debug/vars""}
-{""level"":""info"",""ts"":1713157066.1173787,""caller"":""flags/admin.go:130"",""msg"":""Mounting health check on admin server"",""route"":""/""}
-{""level"":""info"",""ts"":1713157066.1173787,""caller"":""flags/admin.go:144"",""msg"":""Starting admin HTTP server"",""http-addr"":"":14269""}
-{""level"":""info"",""ts"":1713157066.1173787,""caller"":""flags/admin.go:122"",""msg"":""Admin server started"",""http.host-port"":""[::]:14269"",""health-status"":""unavailable""}
-{""level"":""info"",""ts"":1713157066.1173787,""caller"":""grpc@v1.62.1/clientconn.go:429"",""msg"":""[core][Channel #1] Channel created"",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.1173787,""caller"":""grpc@v1.62.1/clientconn.go:1724"",""msg"":""[core][Channel #1] original dial target is: \""localhost:4317\"""",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.1173787,""caller"":""grpc@v1.62.1/clientconn.go:1731"",""msg"":""[core][Channel #1] parsed dial target is: resolver.Target{URL:url.URL{Scheme:\""localhost\"", Opaque:\""4317\"", User:(*url.Userinfo)(nil), Host:\""\"", Path:\""\"", RawPath:\""\"", OmitHost:false, ForceQuery:false, RawQuery:\""\"", Fragment:\""\"", RawFragment:\""\""}}"",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.123022,""caller"":""grpc@v1.62.1/clientconn.go:1745"",""msg"":""[core][Channel #1] fallback to scheme \""passthrough\"""",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.1396246,""caller"":""grpc@v1.62.1/clientconn.go:1753"",""msg"":""[core][Channel #1] parsed dial target is: passthrough:///localhost:4317"",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.1403294,""caller"":""grpc@v1.62.1/clientconn.go:1876"",""msg"":""[core][Channel #1] Channel authority set to \""localhost:4317\"""",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.1415153,""caller"":""grpc@v1.62.1/resolver_wrapper.go:197"",""msg"":""[core][Channel #1] Resolver state updated: {\n  \""Addresses\"": [\n    {\n      \""Addr\"": \""localhost:4317\"",\n      \""ServerName\"": \""\"",\n      \""Attributes\"": null,\n      \""BalancerAttributes\"": null,\n      \""Metadata\"": null\n    }\n  ],\n  \""Endpoints\"": [\n    {\n      \""Addresses\"": [\n        {\n          \""Addr\"": \""localhost:4317\"",\n          \""ServerName\"": \""\"",\n          \""Attributes\"": null,\n          \""BalancerAttributes\"": null,\n          \""Metadata\"": null\n        }\n      ],\n      \""Attributes\"": null\n    }\n  ],\n  \""ServiceConfig\"": null,\n  \""Attributes\"": null\n} (resolver returned new addresses)"",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.1428385,""caller"":""grpc@v1.62.1/balancer_wrapper.go:161"",""msg"":""[core][Channel #1] Channel switches to new LB policy \""pick_first\"""",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.1434665,""caller"":""grpc@v1.62.1/balancer_wrapper.go:213"",""msg"":""[core][Channel #1 SubChannel #2] Subchannel created"",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.1447263,""caller"":""grpc@v1.62.1/clientconn.go:532"",""msg"":""[core][Channel #1] Channel Connectivity change to CONNECTING"",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.145355,""caller"":""grpc@v1.62.1/clientconn.go:335"",""msg"":""[core][Channel #1] Channel exiting idle mode"",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.145355,""caller"":""grpc@v1.62.1/clientconn.go:1223"",""msg"":""[core][Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.1460228,""caller"":""memory/factory.go:85"",""msg"":""Memory storage initialized"",""configuration"":{""MaxTraces"":0}}
-{""level"":""info"",""ts"":1713157066.146683,""caller"":""grpc@v1.62.1/clientconn.go:1338"",""msg"":""[core][Channel #1 SubChannel #2] Subchannel picks a new address \""localhost:4317\"" to connect"",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.1511598,""caller"":""static/strategy_store.go:211"",""msg"":""No sampling strategies provided or URL is unavailable, using defaults""}
-{""level"":""warn"",""ts"":1713157066.1633234,""caller"":""grpc@v1.62.1/clientconn.go:1400"",""msg"":""[core][Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: \""localhost:4317\"", ServerName: \""localhost:4317\"", }. Err: connection error: desc = \""transport: Error while dialing: dial tcp [::1]:4317: connectex: No connection could be made because the target machine actively refused it.\"""",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.1665883,""caller"":""grpc@v1.62.1/clientconn.go:1225"",""msg"":""[core][Channel #1 SubChannel #2] Subchannel Connectivity change to TRANSIENT_FAILURE, last error: connection error: desc = \""transport: Error while dialing: dial tcp [::1]:4317: connectex: No connection could be made because the target machine actively refused it.\"""",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.1691306,""caller"":""grpc@v1.62.1/clientconn.go:532"",""msg"":""[core][Channel #1] Channel Connectivity change to TRANSIENT_FAILURE"",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.1691306,""caller"":""grpc@v1.62.1/server.go:679"",""msg"":""[core][Server #3] Server created"",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.1755738,""caller"":""server/grpc.go:104"",""msg"":""Starting jaeger-collector gRPC server"",""grpc.host-port"":""[::]:14250""}
-{""level"":""info"",""ts"":1713157066.176147,""caller"":""server/http.go:56"",""msg"":""Starting jaeger-collector HTTP server"",""http host-port"":"":14268""}
-{""level"":""info"",""ts"":1713157066.1765969,""caller"":""grpc@v1.62.1/server.go:879"",""msg"":""[core][Server #3 ListenSocket #4] ListenSocket created"",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.177802,""caller"":""app/collector.go:146"",""msg"":""Not listening for Zipkin HTTP traffic, port not configured""}
-{""level"":""info"",""ts"":1713157066.1793082,""caller"":""handler/otlp_receiver.go:77"",""msg"":""OTLP receiver status change"",""status"":""StatusStarting""}
-{""level"":""warn"",""ts"":1713157066.1793082,""caller"":""internal@v0.97.0/warning.go:42"",""msg"":""Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. Enable the feature gate to change the default and remove this warning."",""documentation"":""https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"",""feature gate ID"":""component.UseLocalHostAsDefaultHost""}
-{""level"":""info"",""ts"":1713157066.1805592,""caller"":""grpc@v1.62.1/server.go:679"",""msg"":""[core][Server #5] Server created"",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.1812196,""caller"":""otlpreceiver@v0.97.0/otlp.go:102"",""msg"":""Starting GRPC server"",""endpoint"":""0.0.0.0:4317""}
-{""level"":""warn"",""ts"":1713157066.1818514,""caller"":""internal@v0.97.0/warning.go:42"",""msg"":""Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks. Enable the feature gate to change the default and remove this warning."",""documentation"":""https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks"",""feature gate ID"":""component.UseLocalHostAsDefaultHost""}
-{""level"":""info"",""ts"":1713157066.1831172,""caller"":""otlpreceiver@v0.97.0/otlp.go:152"",""msg"":""Starting HTTP server"",""endpoint"":""0.0.0.0:4318""}
-{""level"":""info"",""ts"":1713157066.1818514,""caller"":""grpc@v1.62.1/server.go:879"",""msg"":""[core][Server #5 ListenSocket #6] ListenSocket created"",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.1842952,""caller"":""grpc/builder.go:74"",""msg"":""Agent requested insecure grpc connection to collector(s)""}
-{""level"":""info"",""ts"":1713157066.1858487,""caller"":""grpc@v1.62.1/clientconn.go:429"",""msg"":""[core][Channel #7] Channel created"",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.1885502,""caller"":""grpc@v1.62.1/clientconn.go:1724"",""msg"":""[core][Channel #7] original dial target is: \""localhost:14250\"""",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.1943777,""caller"":""grpc@v1.62.1/clientconn.go:1731"",""msg"":""[core][Channel #7] parsed dial target is: resolver.Target{URL:url.URL{Scheme:\""localhost\"", Opaque:\""14250\"", User:(*url.Userinfo)(nil), Host:\""\"", Path:\""\"", RawPath:\""\"", OmitHost:false, ForceQuery:false, RawQuery:\""\"", Fragment:\""\"", RawFragment:\""\""}}"",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.1949522,""caller"":""grpc@v1.62.1/clientconn.go:1745"",""msg"":""[core][Channel #7] fallback to scheme \""passthrough\"""",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.19587,""caller"":""grpc@v1.62.1/clientconn.go:1753"",""msg"":""[core][Channel #7] parsed dial target is: passthrough:///localhost:14250"",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.1967628,""caller"":""grpc@v1.62.1/clientconn.go:1876"",""msg"":""[core][Channel #7] Channel authority set to \""localhost:14250\"""",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.1979344,""caller"":""grpc@v1.62.1/resolver_wrapper.go:197"",""msg"":""[core][Channel #7] Resolver state updated: {\n  \""Addresses\"": [\n    {\n      \""Addr\"": \""localhost:14250\"",\n      \""ServerName\"": \""\"",\n      \""Attributes\"": null,\n      \""BalancerAttributes\"": null,\n      \""Metadata\"": null\n    }\n  ],\n  \""Endpoints\"": [\n    {\n      \""Addresses\"": [\n        {\n          \""Addr\"": \""localhost:14250\"",\n          \""ServerName\"": \""\"",\n          \""Attributes\"": null,\n          \""BalancerAttributes\"": null,\n          \""Metadata\"": null\n        }\n      ],\n      \""Attributes\"": null\n    }\n  ],\n  \""ServiceConfig\"": null,\n  \""Attributes\"": null\n} (resolver returned new addresses)"",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.1995714,""caller"":""grpc@v1.62.1/balancer_wrapper.go:161"",""msg"":""[core][Channel #7] Channel switches to new LB policy \""round_robin\"""",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.2002034,""caller"":""grpc@v1.62.1/balancer_wrapper.go:213"",""msg"":""[core][Channel #7 SubChannel #8] Subchannel created"",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.2014635,""caller"":""base/balancer.go:182"",""msg"":""[roundrobin]roundrobinPicker: Build called with info: {map[]}"",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.202357,""caller"":""grpc@v1.62.1/clientconn.go:532"",""msg"":""[core][Channel #7] Channel Connectivity change to CONNECTING"",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.2014635,""caller"":""grpc@v1.62.1/clientconn.go:1223"",""msg"":""[core][Channel #7 SubChannel #8] Subchannel Connectivity change to CONNECTING"",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.2067246,""caller"":""grpc@v1.62.1/clientconn.go:1338"",""msg"":""[core][Channel #7 SubChannel #8] Subchannel picks a new address \""localhost:14250\"" to connect"",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.2060769,""caller"":""grpc@v1.62.1/clientconn.go:335"",""msg"":""[core][Channel #7] Channel exiting idle mode"",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.2100976,""caller"":""all-in-one/main.go:265"",""msg"":""Starting agent""}
-{""level"":""info"",""ts"":1713157066.2114675,""caller"":""app/agent.go:69"",""msg"":""Starting jaeger-agent HTTP server"",""http-port"":5778}
-{""level"":""info"",""ts"":1713157066.2114675,""caller"":""grpc@v1.62.1/server.go:679"",""msg"":""[core][Server #10] Server created"",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.2155974,""caller"":""app/static_handler.go:109"",""msg"":""Using UI configuration"",""path"":""""}
-{""level"":""info"",""ts"":1713157066.2100976,""caller"":""grpc/builder.go:115"",""msg"":""Checking connection to collector""}
-{""level"":""info"",""ts"":1713157066.2100976,""caller"":""sync/once.go:74"",""msg"":""[core]CPU time info is unavailable on non-linux environments."",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.2221143,""caller"":""app/server.go:236"",""msg"":""Query server started"",""http_addr"":""[::]:16686"",""grpc_addr"":""[::]:16685""}
-{""level"":""info"",""ts"":1713157066.234199,""caller"":""grpc/builder.go:131"",""msg"":""Agent collector connection state change"",""dialTarget"":""localhost:14250"",""status"":""CONNECTING""}
-{""level"":""info"",""ts"":1713157066.240293,""caller"":""healthcheck/handler.go:129"",""msg"":""Health Check state change"",""status"":""ready""}
-{""level"":""info"",""ts"":1713157066.240293,""caller"":""app/server.go:319"",""msg"":""Starting GRPC server"",""port"":16685,""addr"":"":16685""}
-{""level"":""info"",""ts"":1713157066.2465863,""caller"":""grpc@v1.62.1/server.go:879"",""msg"":""[core][Server #10 ListenSocket #12] ListenSocket created"",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.240293,""caller"":""app/server.go:301"",""msg"":""Starting HTTP server"",""port"":16686,""addr"":"":16686""}
-{""level"":""info"",""ts"":1713157066.240293,""caller"":""grpc@v1.62.1/clientconn.go:1223"",""msg"":""[core][Channel #7 SubChannel #8] Subchannel Connectivity change to READY"",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.267438,""caller"":""base/balancer.go:182"",""msg"":""[roundrobin]roundrobinPicker: Build called with info: {map[SubConn(id:8):{{Addr: \""localhost:14250\"", ServerName: \""\"", }}]}"",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.2687113,""caller"":""grpc@v1.62.1/clientconn.go:532"",""msg"":""[core][Channel #7] Channel Connectivity change to READY"",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157066.2695167,""caller"":""grpc/builder.go:131"",""msg"":""Agent collector connection state change"",""dialTarget"":""localhost:14250"",""status"":""READY""}
-{""level"":""info"",""ts"":1713157067.174541,""caller"":""grpc@v1.62.1/clientconn.go:1225"",""msg"":""[core][Channel #1 SubChannel #2] Subchannel Connectivity change to IDLE, last error: connection error: desc = \""transport: Error while dialing: dial tcp [::1]:4317: connectex: No connection could be made because the target machine actively refused it.\"""",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157067.1752045,""caller"":""grpc@v1.62.1/clientconn.go:1223"",""msg"":""[core][Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING"",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157067.177659,""caller"":""grpc@v1.62.1/clientconn.go:1338"",""msg"":""[core][Channel #1 SubChannel #2] Subchannel picks a new address \""localhost:4317\"" to connect"",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157067.180894,""caller"":""grpc@v1.62.1/clientconn.go:1223"",""msg"":""[core][Channel #1 SubChannel #2] Subchannel Connectivity change to READY"",""system"":""grpc"",""grpc_log"":true}
-{""level"":""info"",""ts"":1713157067.1815722,""caller"":""grpc@v1.62.1/clientconn.go:532"",""msg"":""[core][Channel #1] Channel Connectivity change to READY"",""system"":""grpc"",""grpc_log"":true}
-{""level"":""debug"",""ts"":1713157260.8726559,""caller"":""app/span_processor.go:165"",""msg"":""Span written to the storage by the collector"",""trace-id"":""a5409530ea95d096f1ba6732f81db8ce"",""span-id"":""859b85e92ed972b5""}
-
-Couldn't find how to run jaeger-all-in-one.exe with Badger database.
-","1. File configuration is not an actively supported mode of configuring Jaeger v1. Theoretically you could make it work by translating the documented CLI flags into their corresponding positions in the YAML file (since the library we use for CLI flags does support reading from a config file), but what you have is nowhere close to that translation.
-You can print all supported flags with this command:
-SPAN_STORAGE_TYPE=badger all-in-one print-config
-
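-If the goal is simply to run the all-in-one binary with Badger instead of in-memory storage, here is a minimal sketch using environment variables (the directory paths are only illustrative, and the variable names assume the usual Jaeger v1 mapping of CLI flags such as --badger.ephemeral to upper-cased environment variables):
-set SPAN_STORAGE_TYPE=badger
-set BADGER_EPHEMERAL=false
-set BADGER_DIRECTORY_KEY=C:\jaeger\badger\key
-set BADGER_DIRECTORY_VALUE=C:\jaeger\badger\data
-jaeger-all-in-one.exe
-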
-",OpenTracing
-"Prometheus Endpoint Not Working with springboot application. Getting 404 error page.
-I've added the dependencies according to the documentation
-<dependency>
-  <groupId>org.springframework.boot</groupId>
-  <artifactId>spring-boot-starter-actuator</artifactId>
-</dependency>
-
-<dependency>
-  <groupId>io.micrometer</groupId>
-  <artifactId>micrometer-registry-prometheus</artifactId>
-</dependency>
-
-And changed the yml file to:
-management:
-  endpoints:
-    web:
-     base-path: /   
-     exposure:
-        include: prometheus
-      
-
-If I go to http://localhost:9083/actuator
-I'm getting this :
-
-As you can see, there is no prometheus
-In http://localhost:9083/actuator/prometheus
-I'm getting this error:
-
-I've tried everything that is written in those answers:
-Getting ""Whitelabel Error Page"" when opening ""/actuator/mappings""
-Unable to access Spring Boot Actuator ""/actuator"" endpoint
-Spring Boot /actuator returns 404 not found
-Nothing seems to be working, any idea?
-","1. After much research and many attempts to solve the problem, I realized that because I already have this definition spring.config.location set in VM arguments, prometheus can not run.  It expects the spring settings to be read and it blocks it.
-I've changed it from spring.config.location to spring.config.additional-location.
-now it is working!
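-
-For example, a VM argument along these lines (the file path is purely illustrative) keeps the default Spring settings visible to the actuator while still layering your external config on top:
--Dspring.config.additional-location=file:/opt/myapp/config/override.yml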
-
-2. Only the health endpoint is exposed through HTTP by default.
-You must expose prometheus as well:
-management.endpoints.web.exposure.include=health,prometheus
-
-or with yaml
-management:
-  endpoints:
-    web:
-      exposure:
-        include: health,prometheus
-
-Please read more in the official documentation:
-https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#actuator.endpoints.exposing
-",Prometheus
-"I am trying to integrate jvm metrics to my akka application. I used prometheus jmx exporter. Instead of using the whole app and running it as java agent, I used only the exporter and integrated to my existing prometheus registry
-import io.prometheus.jmx.JmxCollector
-
-val jmxCollector: JmxCollector = new JmxCollector(getClass.getResourceAsStream(""jmx-config.yaml""))
-jmxCollector.register(prometheusRegistry)
-
-I am able to see the metrics, but a few metrics starting with the jvm prefix are missing compared to other applications that run the exporter as a java agent, for example the thread metrics below:
-# HELP jvm_threads_state Current count of threads by state
-# TYPE jvm_threads_state gauge
-jvm_threads_state{state=""TERMINATED"",} 0.0
-jvm_threads_state{state=""RUNNABLE"",} 10.0
-jvm_threads_state{state=""TIMED_WAITING"",} 11.0
-jvm_threads_state{state=""WAITING"",} 37.0
-jvm_threads_state{state=""NEW"",} 0.0
-jvm_threads_state{state=""BLOCKED"",} 0.0
-
-My metrics config is bare minimum and looks like this in both the applications
----
-startDelaySeconds: 10
-ssl: false
-lowercaseOutputName: false
-lowercaseOutputLabelNames: false
-
-Could you please help me understand what could be the difference that is causing this problem.
-","1. This is the expected behaviour
-
-This exporter is intended to be run as a Java Agent, exposing a HTTP server and serving metrics of the local JVM. It can be also run as a standalone HTTP server and scrape remote JMX targets, but this has various disadvantages, such as being harder to configure and being unable to expose process metrics (e.g., memory and CPU usage).
-
-The ""being unable to expose process metrics"" is a subtle reference  to the jvm_* metrics like
-
-jvm_classes_loaded
-jvm_classes_loaded_total
-jvm_threads_current
-jvm_threads_daemon
-jvm_memory_used_bytes (There is no such thing as jvm_memory_bytes_used in client_java)
-jvm_memory_pool_used_bytes (There is no such thing as jvm_memory_pool_bytes_used in client_java)
-jvm_memory_pool_allocated_bytes_total (There is no such thing as jvm_memory_pool_allocated_bytes_created in client_java)
-jvm_memory_pool_committed_bytes (There is no such thing as jvm_memory_pool_bytes_committed in client_java)
-jvm_memory_pool_init_bytes (There is no such thing as jvm_memory_pool_bytes_init in client_java)
-jvm_memory_pool_max_bytes (There is no such thing as jvm_memory_pool_bytes_max in client_java)
-jvm_memory_pool_collection_committed_bytes
-jvm_memory_pool_collection_init_bytes
-jvm_memory_pool_collection_max_bytes
-jvm_memory_pool_collection_used_bytes
-jvm_threads_deadlocked
-jvm_threads_deadlocked_monitor
-jvm_threads_peak
-jvm_threads_started_total
-jvm_threads_state
-
-All those metrics are still available when using the httpserver but under a different name that matches the JMX MBean where the actual value is.
-For example, the equivalent to jvm_classes_loaded / jvm_classes_loaded_total in javagent is java_lang_ClassLoading_LoadedClassCount.
-And the equivalent to jvm_threads_current in httpserver is java_lang_Threading_ThreadCount.
-Here is a table with some equivalences
-
-| jmxexporter javaagent | jmxexporter httpserver | JMX MBean | notes |
-| --- | --- | --- | --- |
-| jvm_classes_loaded | java_lang_ClassLoading_LoadedClassCount | java.lang:name=null,type=ClassLoading,attribute=LoadedClassCount | link |
-| jvm_classes_loaded_total | java_lang_ClassLoading_TotalLoadedClassCount | java.lang:name=null,type=ClassLoading,attribute=TotalLoadedClassCount | link |
-| jvm_threads_current | java_lang_Threading_ThreadCount | java.lang:name=null,type=Threading,attribute=ThreadCount | link |
-| jvm_threads_daemon | java_lang_Threading_DaemonThreadCount | java.lang:name=null,type=Threading,attribute=DaemonThreadCount | link |
-| jvm_memory_used_bytes | java_lang_Memory_HeapMemoryUsage_used | java.lang:name=null,type=Memory,attribute=used | link |
-| jvm_memory_pool_committed_bytes | java_lang_G1_Survivor_Space_Usage_committed | java.lang:name=G1 Survivor Space,type=MemoryPool,attribute=committed | link |
-| jvm_memory_pool_init_bytes | java_lang_G1_Survivor_Space_Usage_init | java.lang:name=G1 Survivor Space,type=MemoryPool,attribute=init | link |
-| jvm_memory_pool_max_bytes | java_lang_G1_Survivor_Space_Usage_max | java.lang:name=G1 Survivor Space,type=MemoryPool,attribute=max | link |
-| jvm_memory_pool_used_bytes | java_lang_G1_Survivor_Space_Usage_used | java.lang:name=G1 Survivor Space,type=MemoryPool,attribute=used | link |
-| jvm_threads_deadlocked | no equivalent | java.lang:type=Threading,method=findDeadlockedThreads | link - jmxexporter uses client_java JvmMetrics / JvmThreadsMetrics which uses the findDeadlockedThreads() method in the MBean (not an attribute) |
-| jvm_threads_deadlocked_monitor | no equivalent | java.lang:type=Threading,method=findMonitorDeadlockedThreads | link - jmxexporter uses client_java JvmMetrics / JvmThreadsMetrics which uses the findMonitorDeadlockedThreads() method in the MBean (not an attribute) |
-| jvm_threads_peak | java_lang_Threading_PeakThreadCount | java.lang:name=null,type=Threading,attribute=PeakThreadCount | link |
-| jvm_threads_started_total | java_lang_Threading_TotalStartedThreadCount | java.lang:name=null,type=Threading,attribute=TotalStartedThreadCount | link |
-| jvm_threads_state | no equivalent | java.lang:type=Threading | link - there is no MBean equivalent; this is computed from java.lang:type=Threading getAllThreadIds() and getThreadInfo() |
-
-See 14, 15 for more information
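-
-Since you are embedding the exporter in-process (rather than running the standalone httpserver), another hedged option is to also register the client_java hotspot collectors on the same registry, so the jvm_* families show up next to the JMX-derived ones. This sketch assumes the simpleclient_hotspot artifact is on the classpath and that prometheusRegistry is a client_java CollectorRegistry:
-import io.prometheus.client.hotspot.DefaultExports
-
-// registers the jvm_* collectors (memory, GC, threads, class loading) on the given registry
-DefaultExports.register(prometheusRegistry)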
-",Prometheus
-"I'm still new to Grafana and I'm trying to extract hourly traffic data using the nginxplus_location_zone_responses metric. Just want to know if I'm using the correct promQL query.
-Any inputs would be greatly appreciated.
-Thanks!
-I'm currently using this query:
-
-sum by(location_zone) (increase(nginxplus_location_zone_responses{code=~""2xx|4xx|5xx""}[1H]))
-
-","1. Your PromQL query is generally correct for extracting hourly traffic data for the nginxplus_location_zone_responses metric. The increase function is used to calculate the increase in the metric over the specified time window (1 hour in this case), and the sum by(location_zone) is used to aggregate the data by the location_zone label.
-Here is your query for reference:
-
-sum by(location_zone) (increase(nginxplus_location_zone_responses{code=~""2xx|4xx|5xx""}[1h]))
-
-This query will sum the increase in the nginxplus_location_zone_responses metric for the specified HTTP response codes (2xx, 4xx, 5xx) over the past hour, grouped by the location_zone.
-To ensure accuracy, double-check that:
-The nginxplus_location_zone_responses metric is available and correctly scraped by your Prometheus instance.
-The labels and their values (like code, location_zone) are correct as per your NGINX Plus Prometheus exporter configuration.
-The time window ([1h]) is appropriate for your needs.
-If you need more detailed insights or to fine-tune the query, consider the following suggestions:
-Filter Specific Codes: If you need to filter specific status codes separately, you can use multiple queries or a more specific regex. For example:
-
-sum by(location_zone) (increase(nginxplus_location_zone_responses{code=~""2xx""}[1h]))
-sum by(location_zone) (increase(nginxplus_location_zone_responses{code=~""4xx""}[1h]))
-sum by(location_zone) (increase(nginxplus_location_zone_responses{code=~""5xx""}[1h]))
-
-Visualization: Ensure that your Grafana panel is configured correctly to visualize the time series data effectively (e.g., using a time series graph or bar chart for hourly data).
-Rate Function: If you need a more continuous rate rather than just the increase over the past hour, you could use the rate function:
-
-sum by(location_zone) (rate(nginxplus_location_zone_responses{code=~""2xx|4xx|5xx""}[1h]))
-
-These tips should help you effectively monitor and visualize your hourly traffic data using Grafana and Prometheus.
-",Prometheus
-"I am creating SQL queries from Grafana into Promscale. There are the metric and the labels. I can not get the correct way to group by some of the labels. I tried:
-SELECT time_bucket('$__interval', ""time"") AS ""time"",
-       AVG(""value"") AS ""used""
-  FROM ""disk_used_percent""
- WHERE $__timeFilter(""time"") AND
-       ""labels"" ? ('host' == '$host_pg')
- GROUP BY 1, ""labels"" --> 'path'
- ORDER BY 1;
-
-as well as:
-SELECT time_bucket('$__interval', ""time"") AS ""time"",
-       AVG(""value"") AS ""used""
-  FROM ""disk_used_percent""
- WHERE $__timeFilter(""time"") AND
-       ""labels"" ? ('host' == '$host_pg')
- GROUP BY 1, ""path_id""
- ORDER BY 1;
-
-but it does not seem the grouping works as expected. What is wrong? Corresponding PromQL query would be:
-avg(disk_used_percent{host=~""$host_prom""}) by(path))
-
-","1. You can use VAL(""<label>_id"") to group on:
-SELECT time_bucket('$__interval', ""time"") AS ""time"",
-       VAL(""path_id"") AS ""path"",
-       AVG(""value"") AS ""used""
-  FROM ""disk_used_percent""
- WHERE $__timeFilter(""time"") AND
-       ""labels"" ? ('host' == '$host_pg')
- GROUP BY 1, 2
- ORDER BY 1;
-
-Side note: also avoid using the $__timeFilter(""time"") templating macro in Grafana because it generates the following predicate:
-""time"" BETWEEN 'time range begin' AND 'time range end'
-
-which may be
-problematic under certain circumstances.
-",Promscale
-"I have installed sensu with chef community cookbook. However, sensu client fails to connect to server. Results in rabbitmq connection error with message timed out while attempting to connect
-Here are detailed client logs
-logs from sensu-client.log
-""timestamp"":""2014-07-08T12:39:33.982647+0000"",""level"":""warn"",""message"":""config file applied changes"",""config_file"":""/etc/sensu/conf.d/config.json"",""changes"":{""rabbitmq"":{""heartbeat"":[null,20]},""client"":[null,{""name"":""girija-sensu-client"",""address"":""test sensu client"",""subscriptions"":[""test-node""]}],""version"":[null,""0.12.6-4""]}}
-{""timestamp"":""2014-07-08T12:39:33.996680+0000"",""level"":""info"",""message"":""loaded extension"",""type"":""mutator"",""name"":""only_check_output"",""description"":""returns check output""}
-{""timestamp"":""2014-07-08T12:39:34.000721+0000"",""level"":""info"",""message"":""loaded extension"",""type"":""handler"",""name"":""debug"",""description"":""outputs json event data""}
-{""timestamp"":""2014-07-08T12:39:34.104300+0000"",""level"":""warn"",""message"":""reconnecting to rabbitmq""}
-{""timestamp"":""2014-07-08T12:39:39.108623+0000"",""level"":""warn"",""message"":""reconnecting to rabbitmq""}
-{""timestamp"":""2014-07-08T12:39:44.111818+0000"",""level"":""warn"",""message"":""reconnecting to rabbitmq""}
-{""timestamp"":""2014-07-08T12:39:49.115250+0000"",""level"":""warn"",""message"":""reconnecting to rabbitmq""}
-{""timestamp"":""2014-07-08T12:39:54.045648+0000"",""level"":""fatal"",""message"":""rabbitmq connection error"",""error"":""timed out while attempting to connect""}
-
-Rabbitmq logs from server show following error
-=INFO REPORT==== 8-Jul-2014::12:39:54 ===
-accepting AMQP connection <0.395.0> (10.254.153.131:42813 -> 10.254.130.25:5672)
-
-=ERROR REPORT==== 8-Jul-2014::12:39:54 ===
-closing AMQP connection <0.395.0> (10.254.153.131:42813 -> 10.254.130.25:5672):
-{bad_header,<<129,15,1,3,3,0,246,0>>}
-
-I am running this on CentOS 6.4 on AWS
-Rabbitmq version 3.0.4
-Erlang_version,
-     ""Erlang R14B04 (erts-5.8.5) [source] [64-bit] [rq:1] [async-threads:30] [kernel-poll:true]\n""},
-bad_header suggests mismatch for client and broker AMQP version. Any help for finding out AMQP version and  fixing this problem
-","1. This issue was caused, in my case, when my client was configured to use ssl authentication, but the rabbitmq server was not properly configured to use ssl and instead was expecting ""plain"" user/pass login with no ssl.
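-
-The bad_header on the broker side typically means the client is speaking TLS (or some other protocol) against a plain AMQP listener, so make sure both sides agree: either remove the ssl section from the Sensu client config, or enable an SSL listener (usually port 5671) in RabbitMQ. A rough sketch of the client side, where the vhost, credentials and certificate paths are only placeholders:
-{
-  ""rabbitmq"": {
-    ""host"": ""10.254.130.25"",
-    ""port"": 5671,
-    ""vhost"": ""/sensu"",
-    ""user"": ""sensu"",
-    ""password"": ""secret"",
-    ""ssl"": {
-      ""cert_chain_file"": ""/etc/sensu/ssl/cert.pem"",
-      ""private_key_file"": ""/etc/sensu/ssl/key.pem""
-    }
-  }
-}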
-
-2. I had the same issue when posting messages in rabbitmq from an .http file in Visual Studio Code with REST Client plugin.
-The issue was that the port I was connecting to was 5672, which is the AMQP API one.
-Changing the port to 15672 makes the call to the HTTP API and it works
-",Sensu
-"
-FROM ubuntu:latest
-RUN apt-get update && apt-get install -y \
-    curl jq bash  
-# Install Nginx
-RUN apt-get install -y nginx
-
-# Install Sensu agent from the official repository
-RUN curl -s https://packagecloud.io/install/repositories/sensu/stable/script.deb.sh | bash \
-    && apt-get update && apt-get install -y sensu-go-agent
-
-COPY agent.yaml /etc/sensu/agent.yaml
-
-# Define environment variables for Sensu backend connection
-ENV SENSU_BACKEND_URL=ws://34.207.219.74:8081
-ENV SENSU_SUBSCRIPTIONS=linux,system
-
-# Expose Nginx ports
-EXPOSE 80
-
-CMD [""/bin/bash"", ""-c"", ""nginx -g 'daemon off;' & sensu-agent start""]
-
-
-In this Dockerfile, when I build and run the image, nginx is up and running but sensu-agent is not running. I need to start it manually.
-","1. For complex entrypoint scenarios, it's useful for the entrypoint to be bash entrypoint.sh and do all your complex logic there.
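-For example, a minimal sketch (file names are up to you) of an entrypoint.sh plus the Dockerfile lines to use it:
-#!/bin/bash
-set -e
-# start nginx in the background, then hand PID 1 over to the Sensu agent
-nginx -g 'daemon off;' &
-exec sensu-agent start
-
-COPY entrypoint.sh /entrypoint.sh
-RUN chmod +x /entrypoint.sh
-ENTRYPOINT [""/entrypoint.sh""]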
-",Sensu
-"I need a suggestion on how to provide the host IP and port to sensu-go at runtime. Currently, we are using a static inventory for our VMs, which are created in Google Cloud; this causes issues when those instances get deleted and new ones get created.
-So, I'm looking for a solution to provide the dynamic inventory to sensu-go. However, I could not find a way to query the google cloud in sensu-go and get the host IP(s)  by providing tag and project name.
-I'm looking for a suggestion to get the host IPs for a given tag from google cloud without using sensu-go client in each host.
-Thanks very much in advance.
-","1. To get a list of IP's used by instances tagged with, say, 'sensu' you can use gcloud commands.
-To get internal IP:
-gcloud compute instances list --project=PROJECT_NAME --filter=""tags.items=(SENSU)"" --format=""get(networkInterfaces[0].networkIP)""
-
-For external IP:
-gcloud compute instances list --project=PROJECT_NAME --filter=""tags.items=(SENSU)"" --format=""get(networkInterfaces[0].accessConfigs[0].natIP)""
-
-",Sensu
-"Is there any google chat module to send alerts via sensu's google chat handler to more than one room (chat room)?
-Ex: Like in Slack, where the multi-slack-handler.rb handler can trigger alerts on multiple channels via the subscriptions we defined.
-Is the same kind of thing possible with Google Chat?
-In my case, I have created two channels in Sensu (sensu-core open source edition), but I am getting alerts in only one channel; after a few minutes I get alerts on the other channel as well, but it sends alerts to only one channel at a time. How can we get alerts in all the channels in Sensu with the Google Hangouts handler?
-","1. I think you should check this module: https://github.com/anandtripathi5/google-chat-handler. It is a simple handler which you can add to your logger and configure the level of chat as required. Adding it multiple times with different Gchat room webhooks will push logs to multiple Gchat rooms simultaneously.
-",Sensu
-"I have a Sentry integration with SvelteKit hooks.server.js.
-Errors are correctly logged in Sentry, but HTTP request metadata like user agent, request URL, IP addresses and such are missing, as seen in the issue screenshot below. These are quite important to troubleshoot many issues.
-
-I am using the following hooks.server.js integration code (based on Sentry's example):
-import { sequence } from '@sveltejs/kit/hooks';
-import * as Sentry from '@sentry/sveltekit';
-import { dev } from '$app/environment';
-import { env } from '$env/dynamic/private';
-
-if(!dev) {
-  Sentry.init({
-    dsn: env.SENTRY_DSN,
-    environment: dev ? ""dev"" : ""production"",
-    // https://github.com/getsentry/sentry-javascript/issues/8925#issue-1876274074
-    ignoreErrors: [
-      ""TypeError: Failed to fetch dynamically imported module"",
-      ""Load failed"",
-      ""TypeError: Load failed"",
-      ""TypeError: Failed to fetch"",
-      ""Failed to fetch"",
-      ""TypeError: NetworkError when attempting to fetch resource"",
-      ""TypeError: Importing a module script failed"",
-      ""TypeError: error loading dynamically imported module"",
-      ""RollupError: Expected unicode escape""
-    ],  
-  });
-}
-
-export const handleError = Sentry.handleErrorWithSentry((async ({ error }) => {  
-    const eventId = Sentry.lastEventId();
-    if (eventId) {
-        return { message: 'Internal Server Error', eventId };
-    }
-}));
-
-
-// prettier-ignore
-let handle;
-
-if(!dev) {
-  handle = sequence(
-    Sentry.sentryHandle(),
-    // ... rest
-  );  
-} else {
-  handle = sequence(
-    // ... rest
-  );  
-}
-
-
-export default handle;
-
-
-
-What could be the cause that SvelteKit integration for Sentry does not correctly log these?
-
-","1. Hi, can you try updating your init to include sendDefaultPii?
-import * as Sentry from ""@sentry/sveltekit"";
-
-Sentry.init({
-  dsn: YOUR_DSN,
-  integrations: [Sentry.httpClientIntegration()]
-  ....
-  sendDefaultPii: true,
-});
-
-See the docs here
-
-2. Sentry SvelteKit SDK maintainer here!
-Your setup looks correct, but HTTP request data extraction was missing in the SvelteKit SDK. We added the feature recently to the sentryHandle request handler. Once version 8.6.0 is released, you should be getting HTTP request data like url, method and headers.
-",Sentry
-"My test coverage was broken after I added the latest Sentry dependencies (7.6.0). I started receiving these errors for some of my classes:
-[ant:jacocoReport] Classes in bundle 'app' do not match with execution data. For report generation the same class files must be used as at runtime.
-[ant:jacocoReport] Execution data for class *** does not match.
-
-After removing Sentry, coverage works as it should.
-Has anyone faced this issue?
-Here is my jacoco configuration:
-apply plugin: 'jacoco'
-
-jacoco {
-    toolVersion '0.8.11'
-}
-
-tasks.withType(Test).configureEach {
-    jacoco.includeNoLocationClasses = true
-    jacoco.excludes = ['jdk.internal.*']
-}
-
-project.afterEvaluate {
-    tasks.register(""defaultDebugCoverage"", JacocoReport) {
-        dependsOn(""testDefaultDebugUnitTest"")
-        mustRunAfter('testDefaultDebugUnitTest')
-        group = ""Reporting""
-        description = ""Generate Jacoco coverage reports for the defaultDebug build.""
-
-        reports {
-            html.required.set(true)
-            xml.required.set(true)
-        }
-
-        def excludes = [
-                '**/R.class',
-                '**/R$*.class',
-                '**/BuildConfig.*',
-                '**/Manifest*.*',
-                '**/*_Provide*Factory*.*',
-                '**/*_ViewBinding*.*',
-                '**/AutoValue_*.*',
-                '**/R2.class',
-                '**/R2$*.class',
-                '**/*Directions$*',
-                '**/*Directions.*',
-                '**/*Binding.*'
-        ]
-
-        def jClasses = ""${project.buildDir}/intermediates/javac/defaultDebug/classes""
-        def kClasses = ""${project.buildDir}/tmp/kotlin-classes/defaultDebug""
-        def javaClasses = fileTree(dir: jClasses, excludes: excludes)
-
-        def kotlinClasses = fileTree(dir: kClasses, excludes: excludes)
-
-        classDirectories.from = files([javaClasses, kotlinClasses])
-        def sourceDirs = [""${project.projectDir}/src/main/java"", ""${project.projectDir}/src/main/kotlin"",
-                          ""${project.projectDir}/src/defaultDebug/java"", ""${project.projectDir}/src/defaultDebug/kotlin""]
-
-        sourceDirectories.from = files(sourceDirs)
-
-        executionData.from = files([""${project.buildDir}/jacoco/testDefaultDebugUnitTest.exec""])
-    }
-}
-
-After some research, I found that Sentry can modify my classes. But I don't know how I can fix it.
-","1. Add this configuration in your app's build.gradle:
-sentry {
-  tracingInstrumentation {
-    enabled = false
-  }
-}
-
-A link to further troubleshoot Sentry is in the docs
-",Sentry
-"I have hosted Sentry, and now Sentry reports two types of errors:
-1) exceptions that are not handled by any exception view (4XX)
-2) exceptions whose exception view returns a status code of 500
-I only want to receive those exceptions whose exception view returns a status code of 500.
-I am unable to find any option to do it.
-Just for reference, I am using Sentry to track issues in a Pyramid (Python) project.
-","1. guess you have found a solution. If not, and for those who are looking for an answer,
-
-I suppose you have installed sentry-sdk properly.
-And you have configured main() properly. By this, I meant initialising on __init__.py
-Here comes the main solution:
-
-def custom_exception_view(exc, request):
-    # Create a response with a 500 status code
-    response = Response(str(exc), status=500)
-
-    # Check if the exception should be reported to Sentry
-    if response.status_int >= 500 and response.status_int < 600:
-        sentry_sdk.capture_exception(exc)
-
-    # Return the response
-    return response
-
-And here is the main():
- sentry_sdk.init(
-        dsn=""your_sentry_dsn"",
-        integrations=[PyramidIntegration()],
-    )
-    config = Configurator(settings=settings)
-    config.add_view(custom_exception_view)
-
-",Sentry
-"I love Sidekick but I changed my PC so I need to sync sidekick bookmarks in current device to another sidekick or chrome in new device.
-Anyone knows how to do that??
-
-sidekick in current device => sidekick in new device
-Or sidekick in current device => chrome in new device
-
-","1. You can enter bookmark manager by entering this URL sidekick://bookmarks/. At the top right of the screen there is 3-dot option, which contains export-bookmarks feature.
-",Sidekick
-"I want to know the difference between Ajax and Sidekick (Active job).
-They both look like the same kind of background processing system.
-","1. They have almost nothing in common beyond being examples of asynchrony.
-AJAX is an ancient term from the dark days of the browser wars (early 2000s) that stands for Asynchronous JavaScript and XML (which, it was thought, would become the de facto interchange format for the web back then) and is currently used to refer to the XMLHttpRequest api provided by browsers.
-Asynchronous meaning that you can send requests from the client to the server without reloading the page.
-Sidekick is a Ruby gem for queuing and running background tasks on the server, which lets you perform jobs without making the web thread (and the user) wait for them to complete before sending a response.
-The client-side equivalent is actually more like the Web Workers api, which allows you to run scripts in the background in a browser.
-",Sidekick
-"I am using micrometer to publish signalfx metrics.
-I have a count for http_status with http status code as a tag. I am able to see this in the splunk observability cloud dashboard
-E = data('http_status.count', filter=filter('Status', '200')).publish(label='E')
-G = data('http_status.count', filter=filter('Status', '500')).publish(label='G')
-
-
-Now my question is, how do I calculate error rate using signal flow query?
-I want to do this-> plot 4xx/total calls and plot 5xx/total calls
-","1. You can start with getting the total count first without any filter
-A = data('http_status.count').publish(label='A')
-B = data('http_status.count', filter=filter('Status', '4*')).publish(label='B')
-You will see Enter Formula, click on it.
-Add the formula (B/A)*100.
-Remember to switch off the visibility for A & B
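-
-If you prefer to keep everything in SignalFlow program text rather than the chart's formula field, a rough sketch along the same lines (the '4*'/'5*' glob filters and the enable=False flag, which just hides the intermediate streams, are assumptions to adjust to your data):
-A = data('http_status.count').sum().publish(label='total', enable=False)
-B = data('http_status.count', filter=filter('Status', '4*')).sum().publish(label='4xx', enable=False)
-C = data('http_status.count', filter=filter('Status', '5*')).sum().publish(label='5xx', enable=False)
-(B / A * 100).publish(label='4xx_rate')
-(C / A * 100).publish(label='5xx_rate')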
-",SignalFX
-"In the current project, I send application logs to Splunk, while the splunk-otel-collector is responsible for sending instrumentation logs to SignalFx. The problem is we use the CloudFrontID as a correlationID to filter logs in Splunk, whereas SignalFx generates and uses the TraceId for logging. I am currently facing challenges in correlating the application logs' correlationID with SignalFx's TraceId.
-I tried to log the TraceId value in application logs using the ""Serilog.Enrichers.Span"" NuGet package. However, no values were logged in Splunk.
-var loggerConfig =
-    new LoggerConfiguration().MinimumLevel.ControlledBy(LogLevel)
-        .Destructure.UsingAttributes()
-        .Enrich.WithSpan(new SpanOptions
-        {
-            IncludeTraceFlags = true,
-            LogEventPropertiesNames = new SpanLogEventPropertiesNames()
-            {
-                ParentId = ""ParentId1"",
-                SpanId = ""SpanId1"",
-                TraceId = ""TraceId1"",
-                OperationName = ""OperationName1""
-            },
-            IncludeBaggage = true,
-            IncludeOperationName = true,
-            IncludeTags = true,
-        })
-        .Enrich.FromLogContext();
-
-How can I access the TraceId generated by the splunk-otel-collector within the ASP.NET web application (Framework version: 4.7.2)?
-","1. 
-To inject trace context fields in logs, enable log correlation by setting the SIGNALFX_LOGS_INJECTION environment variable to true before running your instrumented application.
-
-Reference: https://github.com/signalfx/signalfx-dotnet-tracing/blob/main/docs/correlating-traces-with-logs.md
-After enabling this environment variable: SIGNALFX_LOGS_INJECTION, I was able to see the traceId values in Splunk.
-",SignalFX
-"I'm trying to build a detector with signalfx and I want to make a filter query on a data stream that will fetch me metrics with dimension name ""foo"" and value ""baz"" but also ones that do not have this dimension at all. I've been trying something like this:
-    filter('foo', 'baz', None)
-    filter('foo', 'baz', '')
-
-but it just produces errors.
-","1. Since my dimension value was a flag, I just used a workaround: instead of filtering for the value true or None, I filter for not True like this:
-not filter('foo', '1')
-and this works since I wanted all items with foo set to '0' or ones that do not have foo at all.
-",SignalFX
-"I am working on a POC where I want to run OpenTelemetry with Quarkus native and use Skywalking for traces and metrics. I have checked Quarkus native with Jaeger; it works, but I am not sure how to do it with Skywalking. With the java agent I am able to see the trace, but for that I have to build a non-native Quarkus application. The requirement is to use a Quarkus native image with Skywalking on Kubernetes.
-As the Skywalking java agent does not support GraalVM, I am not able to use it with Quarkus native. So here is what I did as a trial.
-Added open telemetry dependency in quarkus application
-   <dependency>
-      <groupId>io.quarkus</groupId>
-      <artifactId>quarkus-opentelemetry</artifactId>
-    </dependency>
-
-application.properties
-quarkus.otel.exporter.otlp.traces.endpoint=http://skywalking-otel-collector-service.default.svc.cluster.local:4317
-quarkus.otel.traces.enabled=true
-quarkus.otel.exporter.otlp.enabled=true
-quarkus.http.access-log.pattern=""...traceId=%{X,traceId} spanId=%{X,spanId}"" 
-
-Generated quarkus native image using command
-./mvnw package -Dnative -Dquarkus.native.container-build=true
-
-then deployed a otel-collector.
-Deployment
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: skywalking-otel-collector
-  labels:
-    name: skywalking-otel-collector
-
-spec:
-  revisionHistoryLimit: 2
-  replicas: 1
-  strategy:
-    type: RollingUpdate
-    rollingUpdate:
-      maxSurge: 100%
-      maxUnavailable: 0
-  selector:
-    matchLabels:
-      name: skywalking-otel-collector
-  template:
-    metadata:
-      labels:
-        name: skywalking-otel-collector
-    spec:
-      containers:
-        - command:
-            - ""./otelcol-contrib""
-            - ""--config=/config/otel/otel-collector-config.yaml""
-          image: otel/opentelemetry-collector-contrib
-          name: otel-collector
-          resources:
-            limits:
-              cpu: 300m
-              memory: 1Gi
-            requests:
-              cpu: 300m
-              memory: 1Gi
-          ports:
-            - containerPort: 55679 # Default endpoint for ZPages.
-            - containerPort: 4317  # Default endpoint for OpenTelemetry receiver.
-            - containerPort: 4318  # Http receiver
-            - containerPort: 8888  # Default endpoint for querying metrics.
-          env:
-            - name: MY_POD_IP
-              valueFrom:
-                fieldRef:
-                  apiVersion: v1
-                  fieldPath: status.podIP
-          volumeMounts:
-            - name: otel-collector-configs
-              mountPath: /config/otel
-      volumes:
-        - name: otel-collector-configs
-          configMap:
-            name: otel-collector-configmap
-            items:
-              - key: otel-collector-config
-                path: otel-collector-config.yaml
-
-
-configmap
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: otel-collector-configmap
-data:
-  otel-collector-config: |
-    receivers:
-      otlp:
-        protocols:
-          grpc:
-            endpoint: 0.0.0.0:4317
-          http:
-            endpoint: 0.0.0.0:4318
-    processors:
-      batch:
-    exporters:
-      logging:
-        loglevel: info
-      otlp:
-        endpoint: skywalking-skywalking-helm-oap.skywalking.svc.cluster.local:11800
-        tls:
-          insecure: true
-    extensions:
-      health_check:
-    service:
-      extensions: [health_check]
-      pipelines:
-        metrics:
-          receivers: [otlp]
-          processors: [batch]
-          exporters: [logging, otlp]
-        traces:
-          receivers: [otlp]
-          processors: [batch]
-          exporters: [logging,otlp]
-        logs:
-          receivers: [otlp]
-          processors: [batch]
-          exporters: [otlp]
-
-
-service
-apiVersion: v1
-kind: Service
-metadata:
-  name: skywalking-otel-collector-service
-spec:
-  ports:
-    - name: grpc
-      port: 4317
-      protocol: TCP
-      targetPort: 4317
-    - name: http
-      port: 4318
-      protocol: TCP
-      targetPort: 4318
-  selector:
-    name: skywalking-otel-collector
-  type: NodePort
-
-I have already installed skywalking using helm on my machine
-helm install ""${SKYWALKING_RELEASE_NAME}"" \
-  skywalking-helm \
-  --version ""${SKYWALKING_RELEASE_VERSION}"" \
-  -n ""${SKYWALKING_RELEASE_NAMESPACE}"" \
-  --set oap.image.tag=9.2.0 \
-  --set oap.storageType=elasticsearch \
-  --set ui.image.tag=9.2.0
-
-It is not working. No service is listed in my skywalking ui, and I am not able to see any trace. My requirement is to work with native quarkus and use skywalking.
-Any help will be appreciated.
-","1. OpenTelemetry traces can be supported on SkyWalking, and are only queryable on the Lens UI or SkyWalking's hosted Lens widget. https://skywalking.apache.org/docs/main/latest/en/setup/backend/otlp-trace/
-About lens UI, it is bundled in the SkyWalking UI, you could add it like a widget, https://skywalking.apache.org/docs/main/latest/en/setup/backend/zipkin-trace/#lens-ui or access upstream Zipkin UI site.
-
-This is a screenshot from demo.skywalking.apache.org, which shows how a bundled Zipkin lens UI looks like.
-Meanwhile no service would be listed from the trace.
-If you want to add new metrics work, you need to learn MAL(Meter Analysis Language) and activate scripts.
-
-https://skywalking.apache.org/docs/main/latest/en/setup/backend/opentelemetry-receiver/
-https://skywalking.apache.org/docs/main/latest/en/concepts-and-designs/mal/
-
-",SkyWalking
-"Hello everyone. There is an application on node.js; how do I build and display a dashboard based on endpoint usage statistics with separate API keys? My app runs on node.js 16 + postgres + skywalking.
-For example, requests:
-
-GET /some-endpoint (headers: {x-api-key: ""apiKey_1})
-GET /some-endpoint (headers: {x-api-key: ""apiKey_1})
-GET /some-endpoint (headers: {x-api-key: ""apiKey_1})
-GET /some-endpoint (headers: {x-api-key: ""apiKey_2}) `
-
-Desired statistics:
-
-/some-endpoint (apiKey_1) 3
-/some-endpoint (apiKey_2) 1
-
-I just tried to research this and did not find anything.
-","1. There is no out-of-the-box way to do so. But you could customize the plugin code to read the header and build your own style of operation name for the span; the OAP backend would then run the statistics according to the new names.
-",SkyWalking
-"Using queries for a metric I use, I am trying to sum all of the Max values grouped by a field task_id.
-I have tried a few variations, e.g.
-metric=flink_taskmanager_job_task_operator_KafkaProducer_record_send_total namespace=""flink"" deployment=""flink-job"" | sum by task_id, and none gives the sum of the Max values for a specific task_id as a result.
-This screenshot is to illustrate what I'm trying to achieve. For the marked task_id (starts with 6cdc), I would like to have one row with the sum of all max values: 8800+8669+0+6217+7277=30963.
-How can it be done?
-
-","1. Try:
-... | max by task_id, task_name | sum by task_id
-
-(note, I am just guessing the task_name field in the first aggregation based on your screenshot)
-The somewhat informal explanation is that the Min | Max | Latest | Avg | ... columns you can see in the Time Series tab are not coming from your query. They are extra aggregate information displayed on top of your query results.
-Thus, if you | sum by task_id you never take the max of anything.
-
-Disclaimer: I am currently employed by Sumo Logic.
-",Sumo Logic
-"I am trying to perform aggregate queries using SumoLogic APIs as mentioned here.
-Something like:
-_view = <some_view> | where sourceCategory matches \""something\"" | sum(field) by sourceCategory
-
-This works just fine in the Sumo GUI. I get a field in result called ""_sum"" which gives me the desired result.
-However the same doesn't work when I do it using the SUMO APIs. If I create a job with this body:
-{
-    ""query"": ""_view = <some_view> | where sourceCategory matches ""something"" | sum(field) by sourceCategory"",
-    ""from"": ""start_timestamp"",
-    ""to"": ""end_timestamp"",
-    ""timeZone"": ""some_timezone""
-}
-
-I call the ""v1/search/jobs"" POST method with the above body and I do GET ""v1/search/jobs/{job_id}"" till the state is ""DONE GATHERING RESULTS"". Then I do ""v1/search/jobs/{job_id}/messages"". I was expecting to see aggregated values in the result, but instead I see something similar to:
-{
-   ""fields"":[
-      {
-         ""name"":""_messageid"",
-         ""fieldType"":""long"",
-         ""keyField"":false
-      }, ...
-   ],
-   ""messages"":[
-      {
-         ""map"":{
-            ""_receipttime"":""1359407350899"",            
-            ""_size"":""549"",
-            ""_sourcecategory"":""service"",
-            ""_sourceid"":""1640"",
-            ""the_field_i_mentioned"":""not-aggregated-value""
-            ""_messagecount"":""2044""
-         }
-      }, ...
-   ]
-]
-
-Thanks for going through my question. Any advice / workarounds are appreciated. I don't really want to iterate manually through all the items and calculate the sum; I'd prefer to do it on the Sumo Logic side itself. Thanks again!
-","1. Explanation
-Similar as in the User Interface, in the API for log searches you get both raw results (also referred to as messages) and the aggregate results (also referred to as records).
-
-(Obviously, the latter are only returned if there's any aggregation in the query. In your case there is.)
-Actual suggestion
-
-Then I do ""v1/search/jobs/{job_id}/messages""
-
-Try /records instead.
-See the docs for ""Paging through the records found by a Search Job""
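-
-For instance, a hedged sketch of the call (the host depends on your deployment region, and offset/limit are the paging parameters):
-curl -s -u ""<accessId>:<accessKey>"" \
-  ""https://api.sumologic.com/api/v1/search/jobs/<job_id>/records?offset=0&limit=100""
-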
-Disclaimer: I am currently employed by Sumo Logic.
-",Sumo Logic
-"In sumologic, I have two logging statements of following nature
-2024-04-01 - level:INFO - event_type:APIRequestStart - endpoint:someproject/v1?param=x
-...
-...
-...
-2024-04-01 - level:INFO - event_type:APIRequestEnd - time_taken:2 - endpoint:someproject/v1?param=x
-
-So basically, when the api request starts it adds the field event_type:APIRequestStart and when the api request finishes processing, it adds the fields event_type:APIRequestEnd and time_taken:2 (time spent in processing).
-I want to calculate, number of statements with APIRequestStart, lets call it no_of_api_calls, number of statements with APIRequestEnd, lets call it successful_api_calls and I want to calculate the average of time_taken.
-Essentially I want to create a table like this:
-
-
-
-| endpoint | no_of_api_calls | successful_api_calls | success_ratio | avg_response_time |
-| --- | --- | --- | --- | --- |
-| endpoint/v1 | 20 | 20 | 1 | 0.2 |
-
-
-
-I am able to calculate all these three fields separately, but unable to combine them in a single query. For example
-_source=logs
-| parse ""* - event_type:* - *"" as rest1, event_type, rest2
-| if(event_type=""APIRequestStart"", 1, 0) as success 
-| if(event_type=""APIRequestEnd"", 1, 0) as handled
-| sum(success) as no_of_api_calls, sum(handled) as successful_api_calls
-
-Is it possible to calculate all three fields in single query, so that I can create a table view? Appreciate your time and help.
-","1. The query you have pasted looks like a near solution.
-Parse
-Just this line:
-| parse ""* - event_type:* - *""
-
-seems incomplete.
-Try:
-| parse ""* - event_type:* - *"" as irrelevant1, event_type, irrelevant2
-
-Without the addition, the | parse operator will fail to run.
-Also you wouldn't have the ""handle"" on what is parsed out from the log lines.
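-
-Building on that, one way to get all three aggregates plus the ratio in a single query is sketched below. The extra parse lines for endpoint and time_taken, the nodrop keyword and the trailing ratio calculation are assumptions based on the log format shown in the question (you may also need num(time_taken) to force a numeric cast before avg()):
-_source=logs
-| parse ""* - event_type:* - *"" as irrelevant1, event_type, irrelevant2
-| parse ""endpoint:*"" as endpoint
-| parse ""time_taken:* - "" as time_taken nodrop
-| if(event_type=""APIRequestStart"", 1, 0) as started
-| if(event_type=""APIRequestEnd"", 1, 0) as handled
-| sum(started) as no_of_api_calls, sum(handled) as successful_api_calls, avg(time_taken) as avg_response_time by endpoint
-| successful_api_calls / no_of_api_calls as success_ratio
-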
-Disclaimer: I happen to be employed by Sumo Logic at the moment.
-",Sumo Logic
-"I have two Prometheus metrics,
-First PromQL
-sum by (cluster) (
-    cnp_pg_replication_slots_active{
-       role=""primary"",
-       cluster=""p-vpt7bgc20z""
-    } == 1
-)  
-
-which gives me result like
-{cluster=""p-vpt7bgc20z""}    2
-
-Second PromQL
-sum by (cluster) (
-    cnp_collector_up { 
-        role=""replica"",
-        cluster=""p-vpt7bgc20z""
-    }
-)
-
-which also gives me result like
-{cluster=""p-vpt7bgc20z""}    2
-
-Now I want to return 1 if both results are the same or return 0 if there is any mismatch. How can I achieve that?
-If I write
-sum by (cluster) (
-    cnp_pg_replication_slots_active{
-       role=""primary"",
-       cluster=""p-vpt7bgc20z""
-    } == 1
-)  == sum by (cluster) (
-    cnp_collector_up { 
-        role=""replica"",
-        cluster=""p-vpt7bgc20z""
-    }
-)
-
-it gives me the result below, but I want the result as a boolean value, 1 or 0.
-{cluster=""p-vpt7bgc20z""}    2
-
-","1. In promQL you can use bool modifier after comparison operator to return 0 or 1 instead of filtering.
-For example, metric > bool 100.
-Demo for use in query can be seen here.
-Documentation for this matter here.
-Your query will be
-sum by (cluster) (
-    cnp_pg_replication_slots_active{
-       role=""primary"",
-       cluster=""p-vpt7bgc20z""
-    } == 1
-)  == bool sum by (cluster) (
-    cnp_collector_up { 
-        role=""replica"",
-        cluster=""p-vpt7bgc20z""
-    }
-)
-
-",Thanos
-"I created a vector as below
-Expenditure
- [1] 13.9 15.4 15.8 17.9 18.3 19.9 20.6 21.4 21.7 23.1
-[11] 20.0 20.6 24.0 25.1 26.2 30.0 30.6 30.9 33.8 44.1
-
-Now I picked 10 random samples from Expenditure
-ransomsample <- sample(Expenditure,10)
-ransomsample
- [1] 19.9 21.4 20.0 30.0 17.9 25.1
- [7] 26.2 21.7 33.8 13.9
-
-Now I want to find the remaining items in Expenditure after I created the sample called ransomsample. Any existing function that I can use?
-","1. This should do:
-#generate 20 random numbers
-x <- rnorm(20)
-#sample 10 of them
-randomSample <- sample(x, 10, replace = FALSE)
-
-#we can get the ones we sampled with:
-x[x %in% randomSample]
-
-#Let's confirm this. NOTE - added sort() to easily see they do match
-cbind(sort(randomSample), sort(x[x %in% randomSample]))
-
-#So we want to negate the above
-x[!(x %in% randomSample)]
-
-
-2. The way to approach this depends on how you need to deal with replicates in the vector from which you sample.  If you can be certain there are no duplicates, then the simple approach given by @Chase using x[!(x %in% randomSample)] is perfect.  But, if there are potentially duplicates, then more care is needed.  We can see this clearly in the following:
-# Start with a vector (length=9) replete with replicates
-x <- rep(letters[1:3],3)
-
-# Now sample 8 of its 9 values (leaving one unsampled)
-set.seed(123)
-randomSample <- sample(x, 8, replace = FALSE)
-
-# try using simple method to find which value remains after sampling
-x[!(x %in% randomSample)]
-## character(0)
-
-This simple approach fails because %in% matches all occurrences of the sampled values within x.  If this is what you want then this is the method for you.  But, if you want to know how many of each value remains after sampling then we need to take another line.  
-There are several ways, but probably the most elegant is to subtract the frequency table of the sample from the frequency table of initial vector, to provide a table of the remaining unsampled values.  Then generate a vector of the unsampled values from this table.
-xtab <- as.data.frame(table(x))
-stab <- as.data.frame(table(randomSample))
-xtab[which(xtab$x %in% stab$randomSample),]$Freq <- 
-  xtab[which(xtab$x %in% stab$randomSample),]$Freq - stab$Freq
-rep(xtab$x, xtab$Freq)
-## [1] a
-
-
-3. # This should work with replicates 
-
-# Some arbitrary vector with a replicate
-x <- c('A', 'B', 'C', 'D', 'E', 'A') 
-
-# Make data frame
-x <- data.frame(x)
-
-# Sample the df choosing random rows
-sample.x <- sample(rownames(x), 3)
-
-x.selected <- x[rownames(x) %in% sample.x,] 
-x.notselected<- x[!rownames(x) %in% sample.x,] 
-
-print(x.selected)
-print(x.notselected)
-
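-A further small sketch, reusing the Expenditure vector from the question: sample positions rather than values, then use negative indexing to get the unsampled remainder (this also handles duplicate values naturally).
-idx <- sample(seq_along(Expenditure), 10)
-randomsample <- Expenditure[idx]   # the 10 sampled values
-remaining <- Expenditure[-idx]     # everything that was not sampled
-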
-",Vector
-"/*
- The program must accept N integers as the input.  Each integer is
- given a weight.  The program must sort the integers in ascending
- order based on their weight and print the integers along with their
- weights as the output as given in the Example Input/Output
- sections.  The weight of each integer is calculated based on the
- conditions given below.
-
-
-Conditions:
-Weight = 5 if it is a perfect cube.
-Weight = 4 if it is a multiple of 4 and divisible by 6.
-Weight = 3 if it is a prime number.
-
-Hint: Use stable sort (insertion sort, bubble sort or merge sort).
-
-Boundary Conditions:
-1 <= N <= 1000
-
-Input Format:
-The first line contains N.
-The second line contains N integers separated by a space.
-
-Output Format:
-The first line contains integers with their weight as given in the Example Input/Output sections.
-
-Example Input/Output 1:
-Input:
-7
-10 36 54 89 12 216 27
-
-Output:
-<10,0>,<54,0>,<89,3>,<36,4>,<12,4>,<27,5>,<216,9>
-
-Example Input/Output 2:
-Input:
-10
-12 18 16 64 14 30 37 27 343 216
-
-Output:
-<18,0>,<16,0>,<14,0>,<30,0>,<37,3>,<12,4>,<64,5>,<27,5>,<343,5>,<216,9>
-*/
-
-#include <stdio.h>
-#include <math.h>
-#include <stdlib.h>
-
-int perfcube(int n)
-{
-    int cubert = cbrt(n);
-    if (cubert * cubert * cubert == n)
-    {
-        return 1;
-    }
-    else
-        return 0;
-}
-
-int divis(int n)
-{
-    if (n % 4 == 0 && n % 6 == 0)
-    {
-        return 1;
-    }
-    return 0;
-}
-
-int prime(int n)
-{
-    int count = 0;
-    for (int i = 1; i <= n; i++)
-    {
-        if (n % i == 0)
-        {
-            count++;
-        }
-    }
-
-    if (count == 2)
-    {
-        return 1;
-    }
-    else
-    {
-        return 0;
-    }
-}
-
-int main()
-{
-
-    int n;
-    scanf(""%d"", &n);
-
-    int a[n];
-    int b[n][2];
-
-    // scanning n variables into array a
-    for (int i = 0; i < n; i++)
-    {
-        scanf(""%d"", &a[i]);
-    }
-
-    // copying rows of a(1d array) to b(2d array)
-    int l = 0; // variable to traverse 1d array without its own loop
-    // traverse 2d array
-    for (int j = 0; j < n; j++)
-    {
-        for (int k = 0; k < 2; k++)
-        {
-            if (k == 0)
-            {
-                // if k = 0 that is first col then store 1st col value of 1d array to 2d array
-                b[j][k] = a[l++];
-            }
-            else
-            {
-                // if other cols come then skip it
-                continue;
-            }
-        }
-    }
-
-    for (int i = 0; i < n; i++)
-    {
-        for (int j = 0; j < 2; j++)
-        {
-            if (j == 0)
-            {
-                if (perfcube(b[i][j]))
-                {
-                    b[i][j + 1] += 5;
-                }
-                if (divis(b[i][j]))
-                {
-                    b[i][j + 1] += 4;
-                }
-                if (prime(b[i][j]))
-                {
-                    b[i][j + 1] += 3;
-                }
-            }
-        }
-    }
-
-    for (int i = 0; i < n; i++)
-    {
-        for (int j = 0; j < 2; j++)
-        {
-            printf(""<%d,>"", b[i][j]);
-        }
-        printf(""\n"");
-    }
-
-    return (0);
-}
-
-I tried approaching the problem like this and ended up with an output like this.  Please help me proceed from here.
-    Output
-    <10,><0,>
-    <36,><4,>
-    <54,><0,>
-    <89,><3,>
-    <12,><4,>
-    <216,><9,>
-    <27,><5,>
-
-I am new to programming. I am not allowed to use pointers or library functions such as qsort.
-How do I sort these value/weight pairs by weight and print them in the required format, so that my output matches the expected output in the question?
-","1. Sorting, at its core, centers around the comparison of two items. For example, if A < B, then A should come before B.
-For example, we can reorder
-3 5 2 1 4
-
-to
-1 2 3 4 5
-
-We can see it is correct because each adjacent pair maintains a ≤ relationship.
-This relationship is a comparison function. The one used for your standard sorting looks something like this:
-int compare( int a, int b )
-{
-  if (a < b) return -1;
-  if (a > b) return  1;
-  return 0;
-}
-
- 
-What your homework is asking you to do is change the comparison function to not compare the values directly, but compare the results of a function applied to them:
-int compare( int a, int b )
-{
-  a = weight_function( a );
-  b = weight_function( b );
-  if (a < b) return -1;
-  if (a > b) return  1;
-  return 0;
-}
-
-You must write and implement the weight function, then use it in your sorting algorithm.
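-
-To make that concrete, here is a minimal sketch along those lines (one possible approach, not the official solution): compute each value's cumulative weight once, then run a stable insertion sort that compares weights, and print in the <value,weight> format used by the examples.
-#include <stdio.h>
-#include <math.h>
-
-/* cumulative weight as described in the problem statement */
-int weight(int n)
-{
-    int w = 0;
-    int c = (int)round(cbrt(n));
-    if (c * c * c == n) w += 5;           /* perfect cube */
-    if (n % 4 == 0 && n % 6 == 0) w += 4; /* multiple of 4 and divisible by 6 */
-    int d = 0;
-    for (int i = 1; i <= n; i++)
-        if (n % i == 0) d++;
-    if (d == 2) w += 3;                   /* prime */
-    return w;
-}
-
-int main(void)
-{
-    int n;
-    if (scanf(""%d"", &n) != 1) return 1;
-    int a[1000], w[1000];
-    for (int i = 0; i < n; i++) {
-        scanf(""%d"", &a[i]);
-        w[i] = weight(a[i]);
-    }
-    /* stable insertion sort on the weights; the values move together with them */
-    for (int i = 1; i < n; i++) {
-        int va = a[i], vw = w[i], j = i - 1;
-        while (j >= 0 && w[j] > vw) {     /* strict > keeps equal weights in input order */
-            a[j + 1] = a[j];
-            w[j + 1] = w[j];
-            j--;
-        }
-        a[j + 1] = va;
-        w[j + 1] = vw;
-    }
-    for (int i = 0; i < n; i++)
-        printf(""<%d,%d>%s"", a[i], w[i], i == n - 1 ? ""\n"" : "","");
-    return 0;
-}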
-",Vector
-"I met a research paper referred MergeSet from VictoriaMetrics.
-It says
-
-MergeSet is a simplified one-level LSM-tree implemented by
-VictoriaMetrics [6], which concatenates a tag and a posting ID
-together in a key, and the posting list is naturally formed because of
-the sorted order maintained by the LSM-tree.
-
-Although I tried to read the source code from VM, I am still not sure what MergeSet is.
-For a common LSM-tree, each element is a key-value pair sorted by the key. If MergeSet follows this paradigm, then my questions are as follows:
-
-Does MergeSet concatenate each tag key-value pair with related TSID and store it as the key of a tuple in the MergeSet.table?
-If the tag key-value pair and TSID are both embedded in keys, what does the MergeSet.table value represent?
-
-If not, should I consider MergeSet as a (partially) sorted string set inspired by the LSM-tree, utilizing on-disk capacity?
-","1. The github.com/VictoriaMetrics/VictoriaMetrics/lib/mergeset package implements LSM-like data structure with the following properties:
-
-It stores opaque byte slices in sorted order via Table.AddItems method.
-It processes stored byte slices in blocks of up to 64 kb in size. Every block is compressed before storing to disk. This reduces disk IO and disk space usage.
-It merges similarly-sized parts into bigger parts in background in order to keep the number of parts under control. This  improves compression ratio and query speed.
-It provides O(log(N)) search and O(1) prefix scan over sorted byte slices via TableSearch struct.
-It allows making instant snapshots as described in this article.
-
-The mergeset is used by VictoriaMetrics for storing various indexes, which are collectively named indexdb. These indexes include the following entries:
-
-metricName -> metricID, which allows locating the internal id of a time series (aka metricID) by the canonical name of the time series (aka metricName) when storing the ingested raw samples into VictoriaMetrics. The canonical name of a time series includes the metric name plus all the labels of the time series sorted in a particular order.
-
-metricID -> metricName, which allows locating metrics names plus all the labels for time series with the given internal id.
-
-label=value -> metricID, which allows locating time series with the given label=value label. These entries are known as inverted index, and they are used for fast search of time series by the given label filters.
-
-
-How does VictoriaMetrics store these entries in the mergeset, which works only with sorted byte slices? It marshals entries into byte slices in the way they can be searched via prefix scan. It also prepends byte slices for every entry type with an individual prefix, so they do not clash with each other. See currently supported prefixes.
-",VictoriaMetrics
-"I am trying to represent data of a series aggregated by the hour of the day. Specifically, I want the average and standard deviation over a period (preceding weeks) at the same hour of the day as the data. In MySQL, I would accomplish this with such a query :
-Table named ""hitcount"" (ts timestamp, hits float)
- with e1 as (
-  select
-    avg(hits) a,
-    std(hits) s,
-    hour(ts) h
-  from
-    hitcount
-  where
-     unix_timestamp(ts) >= unix_timestamp(now())-28*86400
-  group by
-    h
-)
-select
-  unix_timestamp(t1.ts) time_sec,
-  t1.hits hits,
-  e1.a average,
-  e1.s stddev
-from
-  hitcount as t1
-  join e1 on hour(t1.ts) = e1.h;
-
-So this would provide rows as ""time_sec, hits, average, stddev"", with ""average"" and ""stddev"" being the hourly average and standard deviation at the same hour of the day as the ""time_sec"" of the value ""hits"", computed over the preceding 28 days.
-Is there any way to do this with MetricsQL/PromQL?
-I have Googled, searched the docs of VictoriaMetrics and asked ChatGPT before, to no avail.
-","1. Try the following MetricsQL query:
-aggr_over_time((""avg_over_time"",""stddev_over_time"",""last_over_time""), hitcount[1h])
-
-This query uses aggr_over_time function for calculating multiple aggregate functions over the hitcount raw samples over the last hour.
-You can also perform multiple queries for different functions in a single query by using union and alias functions:
-union(
-  alias(avg_over_time(hitcount[1h]), ""avg""),
-  alias(stddev_over_time(hitcount[1h]), ""stddev""),
-  alias(last_over_time(hitcount[1h]), ""last""),
-)
-
-See also rollup_candlestick function, which is very convenient for financial calculations such as OHLC.
-If you want to calculate per-hour values for the queries above over the last 28 days, you need to pass the query to /api/v1/query_range together with step=1h and start=-28d query args.
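-For example, a hedged sketch of such a request against a single-node VictoriaMetrics (the host and port are illustrative, and the relative start=-28d value assumes VictoriaMetrics' support for duration-based timestamps; otherwise pass explicit unix timestamps):
-curl http://localhost:8428/api/v1/query_range \
-  --data-urlencode 'query=aggr_over_time((""avg_over_time"",""stddev_over_time"",""last_over_time""), hitcount[1h])' \
-  --data-urlencode 'start=-28d' \
-  --data-urlencode 'step=1h'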
-",VictoriaMetrics
-"I have this config in my vmagent:
-global:
-  scrape_interval: 60s
-  scrape_timeout: 60s
-  external_labels:
-    server_name: vmagent
-
-scrape_configs:
-  - job_name: ""kafka_exporter""
-    file_sd_configs:
-    - files:
-      - kafka_exporter.yml
-    metric_relabel_configs:
-      - if: '{__name__=""kafka_consumergroup_lag_sum""}'
-        target_label: foo
-        replacement: 3
-
-Im trying to add label to only one metric. There it is:
-kafka_consumergroup_lag_sum{consumergroup=""test"",topic=""elk""} 0
-
-But if I search metrics im my VictoriaMetrics (remote write to vmagent), its no-one metrics with this label. Here they are:
-sum by(__name__)({__name__=~"".+"",foo=""3""}):
-
-kafka_consumergroup_lag_sum{}
-kafka_consumergroup_members{}
-kafka_exporter_build_info{}
-kafka_topic_partition_current_offset{}
-kafka_topic_partition_in_sync_replica{}
-kafka_topic_partition_leader{}
-kafka_topic_partition_leader_is_preferred{}
-kafka_topic_partition_oldest_offset{}
-kafka_topic_partition_replicas{}
-kafka_topic_partition_under_replicated_partition{}
-kafka_topic_partitions{}
-process_cpu_seconds_total{}
-process_max_fds{}
-process_open_fds{}
-process_resident_memory_bytes{}
-process_start_time_seconds{}
-process_virtual_memory_bytes{}
-process_virtual_memory_max_bytes{}
-promhttp_metric_handler_requests_in_flight{}
-promhttp_metric_handler_requests_total{}
-
-What am I doing wrong? Why do other metrics have the same label?
-If I try to do the same for another metric (kafka_topic_partitions), there is no such problem (it's not true! See ""p.s.""). The config is exactly the same:
-      - if: '{__name__=""kafka_topic_partitions""}'
-        target_label: foo
-        replacement: 3
-
-p.s. I found the pattern. Labels are added to every metric after the selected one (in the example: kafka_consumergroup_lag_sum).
-If I select kafka_topic_partitions, then the list is:
-sum by(__name__)({__name__=~"".+"",foo=""3""}):
-
-kafka_topic_partitions{}
-process_cpu_seconds_total{}
-process_max_fds{}
-process_open_fds{}
-process_resident_memory_bytes{}
-process_start_time_seconds{}
-process_virtual_memory_bytes{}
-process_virtual_memory_max_bytes{}
-promhttp_metric_handler_requests_in_flight{}
-promhttp_metric_handler_requests_total{}
-
-It looks as if it works once to determine the cutting point.
-How can I add the label to only one metric?
-","1. Your config looks correct and should work unless there is a typo in your actual config. You may want to this out if you want (which is more elegant but same as what you did)
-    metric_relabel_configs:
-    - if: 'kafka_consumergroup_lag_sum'
-      target_label: foo
-      replacement: 3
-
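-For reference, the same condition can also be written in the standard source_labels/regex form, which should behave identically (a sketch, not tested against your exact setup):
-    metric_relabel_configs:
-    - source_labels: [__name__]
-      regex: kafka_consumergroup_lag_sum
-      target_label: foo
-      replacement: 3
-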
-Another thing I noticed: the if statement is misaligned in your example. That should give syntax errors anyway, but keep it aligned like this (as per the docs):
-scrape_configs:
-- job_name: ""kafka_exporter""
-  file_sd_configs:
-  - files:
-    - kafka_exporter.yml
-  metric_relabel_configs:
-  - if: '{__name__=""kafka_consumergroup_lag_sum""}'
-    target_label: foo
-    replacement: 3
-
-",VictoriaMetrics
-"after a restart of the zabbix agent cannot be started again. I checked the logfile:
-Code:
-2021/07/19 11:39:03.032565 Starting Zabbix Agent 2 (5.2.7)
-2021/07/19 11:39:03.033020 OpenSSL library (OpenSSL 1.1.1f  31 Mar 2020) initialized
-2021/07/19 11:39:03.033058 cannot initialize PID file: cannot open PID file [/run/zabbix/zabbix_agent2.pid]: open /run/zabbix/zabbix_agent2.pid: no such file or directory
-
-I have no idea why the PID file can't be created anymore.
-Anyone?
-Thanks,
-DexDy
-","1. The Zabbix installation package creates a systemv unit that looks for the pidfile in /var/run. The same package writes the pidfile in /tmp by default (see zabbix_agentd). SystemV will of course kill the agent because it doesn't find the pidfile.
-Checklist:
-
-did you change the default agent configuration to write the PidFile in /var/run, or is it still writing to the default folder /tmp?
-is SELinux in enforcing mode? SELinux will prevent the agent from working if you don't configure its policy
-
-
-2. During installation, zabbix-agent creates the folder /run/zabbix/. When the Zabbix agent starts, the PID file is automatically created in this folder.
-You may follow these steps to diagnose and resolve the problem:
-
-Reinstall zabbix-agent.
-In the config file /etc/zabbix/zabbix_agentd.conf, change the directory for the PID file via the ""PidFile"" parameter, then start zabbix-agent.
-
-
-3. I don't know why, but somehow the PID file wasn't created during installation or setup.
-So we need to create the folder and the file, and assign them to the zabbix user and group.
-cd /var/run
-
-mkdir zabbix
-
-touch zabbix/zabbix_agentd.pid
-
-chown -R zabbix:zabbix zabbix/
-
-systemctl restart zabbix-agent
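-
-Note that /var/run is usually a tmpfs (on modern systems it is a symlink to /run), so a directory created by hand will not survive a reboot. If that turns out to be the cause, a tmpfiles.d entry can recreate it at boot; a sketch, assuming the same path and the zabbix user/group used above:
-# /etc/tmpfiles.d/zabbix-agent.conf
-d /run/zabbix 0755 zabbix zabbix -
-
-You can apply it without rebooting with systemd-tmpfiles --create /etc/tmpfiles.d/zabbix-agent.conf.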
-
-",Zabbix
-"I have enabled log tracing through micrometer and zipkin. But i am not able to get span id and trace id in my requests.
-Dependencies in pom.xml are as follows:
-`    <dependency>
-            <groupId>io.micrometer</groupId>
-            <artifactId>micrometer-tracing-bridge-brave</artifactId>
-        </dependency>
-        <dependency>
-            <groupId>io.zipkin.reporter2</groupId>
-            <artifactId>zipkin-reporter-brave</artifactId>
-        </dependency>`
-
-The Zipkin configuration in application.properties is as follows:
-
-# zipkin configurations
-management.tracing.enabled=true
-management.zipkin.tracing.endpoint=http://localhost:9411/zipkin/api/v2/spans
-management.tracing.sampling.probability=1.0
-
-I added the required dependency in pom.xml and properties in application.properties. Is there any other configuration or handling required to achieve the tracing of requests?
-","1. I guess you mean ""log correlation"", if not, I'm not sure what ""log tracing"" is.
-Try to use SLF4J and log something out in one of your Spring Boot controllers.
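-A minimal sketch of such a controller (assuming Spring Web is on the classpath; the class name, path, and log message are just placeholders):
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-import org.springframework.web.bind.annotation.GetMapping;
-import org.springframework.web.bind.annotation.RestController;
-
-@RestController
-public class TraceCheckController {
-
-    private static final Logger LOGGER = LoggerFactory.getLogger(TraceCheckController.class);
-
-    // Hitting GET /trace-check should produce a log line carrying the trace id and span id.
-    @GetMapping(""/trace-check"")
-    public String traceCheck() {
-        LOGGER.info(""trace-check endpoint called"");
-        return ""ok"";
-    }
-}
-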
-Alternatively, you can use this property to test:
-logging.level.org.springframework.web.servlet.DispatcherServlet=DEBUG
-
-If tracing information is not in these logs, you need to upgrade Boot to at least 3.2; since 3.3 is already out, I would use that instead.
-If you don't want to upgrade, you need to set the logging.pattern.level property (see docs):
-logging.pattern.level=""%5p [${spring.application.name:},%X{traceId:-},%X{spanId:-}]""
-
-",Zipkin
-"We have a lot of services using Spring Boot 2.0.x and io.zipkin.brave.Tracer is used and it works properly. Tracer is used in a class annotated with @Component and it has a constructor with Tracer as its parameter.
-Here's an example snippet:
-@Component
-public class CrmMessagePublisher {
-
-    private static final Logger LOGGER = LoggerFactory.getLogger(CrmMessagePublisher.class);
-
-    private static final String EVENT_NAME_HEADER = ""service.eventName"";
-
-    private static final String EXCHANGE_EVENT = ""service.event"";
-
-    private static String applicationName;
-
-    private RabbitTemplate rabbitTemplate;
-
-    @Autowired
-    private Tracer tracer;
-
-    @Autowired
-    public CrmMessagePublisher(
-            RabbitTemplate rabbitTemplate,
-            @Value(""${spring.application.name}"") final String applicationName,
-            Tracer tracer
-    ) {
-        this.rabbitTemplate = rabbitTemplate;
-        CrmMessagePublisher.applicationName = applicationName;
-        this.tracer = tracer;
-    }
-...
-
-Now I want to write a JUnit test, but I always get:
-Test ignored.
-
-java.lang.IllegalStateException: Failed to load ApplicationContext
-
-    at org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDelegate.loadContext(DefaultCacheAwareContextLoaderDelegate.java:125)
-    at org.springframework.test.context.support.DefaultTestContext.getApplicationContext(DefaultTestContext.java:108)
-    at org.springframework.test.context.web.ServletTestExecutionListener.setUpRequestContextIfNecessary(ServletTestExecutionListener.java:190)
-    at org.springframework.test.context.web.ServletTestExecutionListener.prepareTestInstance(ServletTestExecutionListener.java:132)
-    at org.springframework.test.context.TestContextManager.prepareTestInstance(TestContextManager.java:246)
-    at org.springframework.test.context.junit.jupiter.SpringExtension.postProcessTestInstance(SpringExtension.java:97)
-    at org.junit.jupiter.engine.descriptor.ClassTestDescriptor.lambda$invokeTestInstancePostProcessors$5(ClassTestDescriptor.java:349)
-    at org.junit.jupiter.engine.descriptor.JupiterTestDescriptor.executeAndMaskThrowable(JupiterTestDescriptor.java:215)
-    at org.junit.jupiter.engine.descriptor.ClassTestDescriptor.lambda$invokeTestInstancePostProcessors$6(ClassTestDescriptor.java:349)
-    at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
-    at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177)
-    at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1621)
-    at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
-    at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
-    at java.base/java.util.stream.StreamSpliterators$WrappingSpliterator.forEachRemaining(StreamSpliterators.java:312)
-    at java.base/java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:735)
-    at java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658)
-    at org.junit.jupiter.engine.descriptor.ClassTestDescriptor.invokeTestInstancePostProcessors(ClassTestDescriptor.java:348)
-    at org.junit.jupiter.engine.descriptor.ClassTestDescriptor.instantiateAndPostProcessTestInstance(ClassTestDescriptor.java:270)
-    at org.junit.jupiter.engine.descriptor.ClassTestDescriptor.lambda$testInstanceProvider$2(ClassTestDescriptor.java:259)
-    at org.junit.jupiter.engine.descriptor.ClassTestDescriptor.lambda$testInstanceProvider$3(ClassTestDescriptor.java:263)
-    at java.base/java.util.Optional.orElseGet(Optional.java:362)
-    at org.junit.jupiter.engine.descriptor.ClassTestDescriptor.lambda$testInstanceProvider$4(ClassTestDescriptor.java:262)
-    at org.junit.jupiter.engine.descriptor.ClassTestDescriptor.lambda$before$0(ClassTestDescriptor.java:192)
-    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:72)
-    at org.junit.jupiter.engine.descriptor.ClassTestDescriptor.before(ClassTestDescriptor.java:191)
-    at org.junit.jupiter.engine.descriptor.ClassTestDescriptor.before(ClassTestDescriptor.java:74)
-    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$4(NodeTestTask.java:105)
-    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:72)
-    at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:98)
-    at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:74)
-    at java.base/java.util.ArrayList.forEach(ArrayList.java:1507)
-    at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38)
-    at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$4(NodeTestTask.java:112)
-    at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:72)
-    at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:98)
-    at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:74)
-    at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32)
-    at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
-    at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51)
-    at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:220)
-    at org.junit.platform.launcher.core.DefaultLauncher.lambda$execute$6(DefaultLauncher.java:188)
-    at org.junit.platform.launcher.core.DefaultLauncher.withInterceptedStreams(DefaultLauncher.java:202)
-    at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:181)
-    at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:128)
-    at com.intellij.junit5.JUnit5IdeaTestRunner.startRunnerWithArgs(JUnit5IdeaTestRunner.java:69)
-    at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
-    at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
-    at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
-Caused by: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'crmMessagePublisher': Unsatisfied dependency expressed through constructor parameter 2; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'brave.Tracer' available: expected at least 1 bean which qualifies as autowire candidate. Dependency annotations: {}
-    at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:769)
-    at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:218)
-    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1341)
-    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1187)
-    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:555)
-    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:515)
-    at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:320)
-    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
-    at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:318)
-    at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
-    at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:847)
-    at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:877)
-    at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:549)
-    at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:744)
-    at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:391)
-    at org.springframework.boot.SpringApplication.run(SpringApplication.java:312)
-    at org.springframework.boot.test.context.SpringBootContextLoader.loadContext(SpringBootContextLoader.java:120)
-    at org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDelegate.loadContextInternal(DefaultCacheAwareContextLoaderDelegate.java:99)
-    at org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDelegate.loadContext(DefaultCacheAwareContextLoaderDelegate.java:117)
-    ... 48 more
-Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'brave.Tracer' available: expected at least 1 bean which qualifies as autowire candidate. Dependency annotations: {}
-    at org.springframework.beans.factory.support.DefaultListableBeanFactory.raiseNoMatchingBeanFound(DefaultListableBeanFactory.java:1662)
-    at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1221)
-    at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1175)
-    at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:857)
-    at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:760)
-    ... 66 more
-
-Here's the test class:
-//@ExtendWith(SpringExtension.class)
-@WebAppConfiguration
-@ContextConfiguration(classes = {RabbitMqTest2.RabbitTestConfig.class, CrmMessagePublisher.class, Tracer.class})
-////@EnableRabbit
-@SpringBootTest
-@TestPropertySource(""classpath:application.properties"")
-@TestInstance(TestInstance.Lifecycle.PER_CLASS)
-public class RabbitMqTest2 {
-
-    private final String QUEUE_NAME = ""crm-test-service"";
-
-    @Autowired
-    CachingConnectionFactory connectionFactory;
-
-    @Autowired
-    private RabbitTemplate rabbitTemplate;
-
-    private RabbitAdmin rabbitAdmin;
-
-    private Binding binding;
-
-    private Queue queue;
-
-    @Autowired
-    private CrmMessagePublisher publisher;
-//    private Tracer tracer;
-
-    @BeforeAll
-    void beforeAll() {
-        rabbitAdmin = new RabbitAdmin(this.connectionFactory);
-    }
-
-    @BeforeEach
-    void beforeEachTestCase() {
-        // TODO: Get values from test configuration
-        connectionFactory.setUsername(""admin"");
-        connectionFactory.setPassword(""admin"");
-        connectionFactory.setHost(""localhost"");
-        connectionFactory.setPort(5672);
-
-        rabbitTemplate = new RabbitTemplate(connectionFactory);
-
-        rabbitTemplate.setDefaultReceiveQueue(QUEUE_NAME);  // for receiving messages
-        rabbitTemplate.setRoutingKey(QUEUE_NAME); // for sending messages
-
-        Properties queueProps = rabbitAdmin.getQueueProperties(QUEUE_NAME);
-        if( queueProps == null ) {
-            queue = new Queue(QUEUE_NAME, false, false, true);
-            rabbitAdmin.declareQueue(queue);
-
-            binding = BindingBuilder.bind(queue).to(new FanoutExchange(""service.event""));
-            rabbitAdmin.declareBinding(binding);
-        }
-
-        queueProps = rabbitAdmin.getQueueProperties(QUEUE_NAME);
-        Assert.assertEquals(""More messages than expected are in the queue."", 0,
-                Integer.parseInt(queueProps.getProperty(""QUEUE_MESSAGE_COUNT"") == null ? ""0"" : queueProps.getProperty(""QUEUE_MESSAGE_COUNT"")));
-    }
-
-    @AfterEach
-    void afterEachTestCase() {
-        rabbitAdmin.removeBinding(binding);
-        rabbitAdmin.deleteQueue(QUEUE_NAME);
-    }
-
-    @Test
-    void sendMessageToQueue() throws JsonProcessingException {
-        final CrmMessageModel message = new CrmMessageModel();
-        message.setCustomerId(1L);
-        final AmqpAdmin rabbitAdmin = new RabbitAdmin(this.rabbitTemplate.getConnectionFactory());
-        final ObjectMapper om = new ObjectMapper();
-
-        rabbitTemplate.convertAndSend(""service.event"", ""contract-service"", om.writeValueAsString(message));
-
-        final Properties queueProps = rabbitAdmin.getQueueProperties(QUEUE_NAME);
-        Assert.assertEquals(""Not exactly ONE message is in the queue."", 1,
-                Integer.parseInt(queueProps.get(""QUEUE_MESSAGE_COUNT"").toString()));
-    }
-
-    @Configuration
-    public static class RabbitTestConfig extends ResourceServerConfigurerAdapter {
-
-        @Bean
-        public CachingConnectionFactory connectionFactory() {
-            return new CachingConnectionFactory();
-        }
-
-        @Bean
-        public RabbitTemplate crmRabbitTemplate() {
-            return new RabbitTemplate(connectionFactory());
-        }
-
-        @Bean
-        public RestTemplate crmRestTemplate() {
-            return new RestTemplate();
-        }
-
-    }
-}
-
-So: what do I have to do to run the test successfully?
-","1. I hit the same problem; fixed it by ensuring that spring.sleuth.enabled=true when running tests.
-
-2. The problem is that Tracer.class in your @ContextConfiguration actually does nothing:
-
-it's not a @Configuration class
-it does not have a constructor with arguments, which spring could somehow autowire
-
-You should use org.springframework.cloud.sleuth.autoconfig.TraceAutoConfiguration.class instead, which provides a working instance of Tracer that can be autowired.
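-Alternatively, if you prefer not to pull the Sleuth auto-configuration into the test, you can expose a plain Brave Tracer yourself from a test configuration; a minimal sketch (the class name is illustrative, and it assumes the brave dependency is on the test classpath):
-import brave.Tracer;
-import brave.Tracing;
-import org.springframework.boot.test.context.TestConfiguration;
-import org.springframework.context.annotation.Bean;
-
-@TestConfiguration
-public class TestTracingConfig {
-
-    // A self-contained tracer; spans are created but never reported anywhere.
-    @Bean
-    Tracer tracer() {
-        return Tracing.newBuilder().build().tracer();
-    }
-}
-
-Then reference TestTracingConfig.class in @ContextConfiguration instead of Tracer.class.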
-
-3. Please try changing this line in your test class from:
-@ContextConfiguration(classes = {RabbitMqTest2.RabbitTestConfig.class, CrmMessagePublisher.class, Tracer.class})
-
-
-to
-@ContextConfiguration(classes = {RabbitMqTest2.RabbitTestConfig.class, CrmMessagePublisher.class, TraceAutoConfiguration.class})
-
-
-",Zipkin
-"I have pom.xml
-<dependency>
-    <groupId>org.springframework.boot</groupId>
-    <artifactId>spring-boot-starter-actuator</artifactId>
-    <version>3.2.1</version>
-</dependency>
-<dependency>
-    <groupId>io.micrometer</groupId>
-    <artifactId>micrometer-tracing</artifactId>
-    <version>1.2.1</version>
-</dependency>
-<dependency>
-    <groupId>io.micrometer</groupId>
-    <artifactId>micrometer-registry-prometheus</artifactId>
-</dependency>
-<dependency>
-    <groupId>io.micrometer</groupId>
-    <artifactId>micrometer-tracing-bridge-brave</artifactId>
-</dependency>
-<dependency>
-    <groupId>io.zipkin.reporter2</groupId>
-    <artifactId>zipkin-reporter-brave</artifactId>
-</dependency>
-<dependency>
-    <groupId>io.zipkin.reporter2</groupId>
-    <artifactId>zipkin-sender-kafka</artifactId>
-</dependency>
-
-So, Micrometer creates the traceId and sends it to a Kafka topic, and Zipkin gets it from that Kafka topic.
-But in Prometheus I get an error:
-metric name http_server_requests_seconds_count does not support exemplars
-It worked in Spring Boot 2.x, and it works if I exclude the zipkin and brave dependencies.
-","1. Fist of all, please do not define versions that are defined by Boot, also micrometer-tracing is pulled in by micrometer-tracing-bridge-brave so you can delete the micrometer-tracing dependency from your POM.
-It seems you are using an unsupported version of Prometheus server. Exemplars support for all time series was added in #11982. It was released in Prometheus 2.43.0, please upgrade.
-If you want to hack this (please don't and use a supported version of Prometheus instead), If you create a SpanContextSupplier @Bean that always says isSampled false and it returns null for the spanId and traceId, you will not see those exemplars.
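-A minimal sketch of such a bean, assuming the SpanContextSupplier interface from the Prometheus simpleclient (io.prometheus.client.exemplars.tracer.common) and that Boot's Prometheus auto-configuration backs off when a user-defined bean is present:
-import io.prometheus.client.exemplars.tracer.common.SpanContextSupplier;
-import org.springframework.context.annotation.Bean;
-import org.springframework.context.annotation.Configuration;
-
-@Configuration
-public class NoExemplarsConfiguration {
-
-    // Reports every span as unsampled, so no exemplars are attached to any time series.
-    @Bean
-    SpanContextSupplier noopSpanContextSupplier() {
-        return new SpanContextSupplier() {
-            @Override
-            public String getTraceId() {
-                return null;
-            }
-
-            @Override
-            public String getSpanId() {
-                return null;
-            }
-
-            @Override
-            public boolean isSampled() {
-                return false;
-            }
-        };
-    }
-}
-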
-In Spring Boot 3.3.0 and Micrometer 1.13.0, we added support for the Prometheus 1.x Java client, which supports conditionally enabling exemplars on all time series (e.g. _count); see PrometheusConfig in Micrometer for how to disable this.
-",Zipkin
-"I wish to monitor all the APIs that I created on one of my docker containers. That Docker container is using Django REST framework for its services.. and I am running it on Azure. I want to monitor my API by means of if it is working or if there are too many requests it will throw an alert.. what is its request per second something like that.
-We are using sysdig for monitoring our containers but I don't think it has the capability to monitor all our APIs of our Django Rest Framework
-","1. To monitor your API performance and downtime, you could create custom scripts to ping your API and alert you if there's downtime, or you could use a third-party service to monitor remotely. This is the simpler option, as it doesn't require writing and maintaining code.
-One third-party service you could use is mine, https://assertible.com. They provide frequent health checks (1/5/15 minute), deep data validation, integrations with other services like Slack and GitHub, and a nice way to view/manage test failures.
-If you want to integrate with your own code or scripts, you can use Trigger URLs and/or the Deployments API to programmatically run your tests whenever and wherever:
-$ curl 'https://assertible.com/apis/{API_ID}/run?api_token=ABC'
-[{
-  ""runId"": ""test_fjdmbd"",
-  ""result"": ""TestPass"",
-  ""assertions"": {
-      ""passed"": [{...}],
-      ""failed"": [{...}]
-  },
-  ...
-}]
-
-Hope it helps!
-
-2. You can use the monitoring functionality from Postman. For more information check out the following link [1].
-[1] https://learning.getpostman.com/docs/postman/monitors/intro_monitors/
-
-3. Since you're running on Azure, you should take a look at Application Insights:
-
-Application Insights is an extensible Application Performance
-  Management (APM) service for web developers on multiple platforms. Use
-  it to monitor your live web application. It will automatically detect
-  performance anomalies. It includes powerful analytics tools to help
-  you diagnose issues and to understand what users actually do with your
-  app. It's designed to help you continuously improve performance and
-  usability. It works for apps on a wide variety of platforms including
-  .NET, Node.js and J2EE, hosted on-premises or in the cloud. It
-  integrates with your devOps process, and has connection points to a
-  variety of development tools. Source
-
-API monitoring is described here.
-",sysdig
-"I'm using SYSDIG monitoring in IBM Cloud.
-I have these two metrics
-first:
-sum by(container_image_repo,container_image_tag) (sysdig_container_cpu_cores_used)
-
-Which returns, by repo and tag, the total CPU cores used (as Value_A)
-second:
-count by(container_image_repo,container_image_tag)(sysdig_container_info)
-
-Which returns, by repo and tag, the total number of containers (as Value_B)
-My problem is that I would like to have one single request which returns the two metrics at the same time by repo and tag, i.e.:
-Repo Tag Value_A Value_B
-Any hints?
-I tried joining the two requests,
-sum by(container_image_repo,container_image_tag) (sysdig_container_cpu_cores_used) *on (container_image_repo,container_image_tag) (count by(
-container_image_repo,container_image_tag)(sysdig_container_info))
-
-but I still get one value (which is the product of the two values, A*B, grouped by repo and tag; no surprise indeed...)
-Thank you
-","1. This is not related to Sysdig, but to the way how PromQL is designed. In PromQL, when you apply a function over a vector, the resulting vector or scalar does not contain the metric name (since this is not the same metric anymore, but its derivative).
-In your example, these two metrics that you are using denote two different things:
-
-sysdig_container_cpu_cores_used: the number of cores a particular container occupies
-sysdig_container_info: a set of additional labels for each container. Rather than adding all container information (such as agent id, container id, the id of the image in the container, image digest value, etc.) to every container metric, you can join a container metric with sysdig_container_info when you need it, to enrich it with these additional labels.
-
-In my opinion, your query gives you all the relevant info you stated you need:
-e.g.
-Repo                                  Tag     Value (CPU used)
-k8s.gcr.io/kops/kops-controller       1.22.3  0.00148
-
-Disclosure:
-I work as an engineer at Sysdig, but my answers/comments are strictly my own.
-",sysdig
-"According to SysDig documentation,
-
-Duration: Specify the time window for evaluating the alert condition in minutes, hour, or day. The alert will be triggered if the query returns data for the specified duration.
-
-I am afraid I do not understand what changing this value will actually do.
-In the example below, I am checking whether a cron job has been taking over 10 minutes to execute. Will modifying ""duration"" change the alert evaluation frequency, i.e. will the condition be checked every 20 minutes?
-
-","1. The query you're using as an alert will be evaluated depending on the sysdig agent's scrape interval for your account (which is usually less than 1m).
-If during the interval of 20 min, that query returns data, then the alert will be triggered. So you're fine with those 20 min for the duration of the alert.
-",sysdig