Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>I am trying to follow this tutorial to backup a persistent volume in Azure AKS:</p>
<p><a href="https://learn.microsoft.com/en-us/azure/aks/azure-disks-dynamic-pv" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/azure-disks-dynamic-pv</a></p>
<p>I can see the volumes doing</p>
<pre><code>az aks get-credentials --resource-group MYRESOURCEGROUP --name HIDDEN --subscription MYSUBSCRIPTION
kubectl get pv
</code></pre>
<p>(Both disk and file, managed-premium and standard storage classes)</p>
<p>but then I do:</p>
<pre><code>az disk list --resource-group MYRESOURCEGROUP --subscription MYSUBSCRIPTION
</code></pre>
<p>and I get an empty list, so I can't determine the full source path needed to perform the snapshot.</p>
<p>Am I missing something?</p>
| icordoba | <p>Upgrade your az cli version.</p>
<p>I was getting this issue with az cli 2.0.75 returning an empty array for the disk list, with an AKS PV.</p>
<p>Upgrading to az cli 2.9.1 made the same command work.</p>
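<p>For reference, you can check the installed version and, on newer releases, upgrade in place (the <code>az upgrade</code> command was only added around azure-cli 2.11, so an older install like 2.0.75 has to be upgraded through its package manager or installer instead):</p>
<pre><code>az --version   # show the installed azure-cli version
az upgrade     # in-place upgrade, available on newer azure-cli releases
</code></pre>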
| Ian |
<p>My Kubernetes ingress is not accepting the self-signed certificate; instead, when opening the URL in Firefox, the <strong>Kubernetes Ingress Controller Fake Certificate</strong> is served.</p>
<blockquote>
<p>Everything is done locally on a PC with minikube on Kali Linux. Kali
Linux is running in a virtual machine under VMware. The doc which
I am referring to is -
<a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-multi-ssl" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-multi-ssl</a></p>
</blockquote>
<p>The Ingress Yaml file.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: first-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - example.com
    secretName: myssl
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: first-service
            port:
              number: 8080
</code></pre>
<p>The "192.168.49.2" is the ingress ip address. So <a href="https://192.68.49.2" rel="nofollow noreferrer">https://192.68.49.2</a> opens my app on the browser.</p>
<p>The certificate is generated with Openssl with the following commands:</p>
<pre><code>openssl genrsa -out s.key 2048
openssl req -new -key s.key -out s.csr -subj "/CN=example.com"
openssl x509 -req -days 365 -in s.csr -signkey s.key -out s.crt
</code></pre>
<p>The certificate is added to k8s secret.</p>
<pre><code>kubectl create secret tls myssl --cert s.crt --key s.key
</code></pre>
<p>The <code>curl -kv https://192.168.49.2</code> command output is:</p>
<pre><code>* Trying 192.168.49.2:443...
* Connected to 192.168.49.2 (192.168.49.2) port 443 (#0)
* ALPN: offers h2
* ALPN: offers http/1.1
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS header, Finished (20):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.2 (OUT), TLS header, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN: server accepted h2
* Server certificate:
* subject: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
* start date: Oct 22 09:57:19 2022 GMT
* expire date: Oct 22 09:57:19 2023 GMT
* issuer: O=Acme Co; CN=Kubernetes Ingress Controller Fake Certificate
* SSL certificate verify result: self-signed certificate (18), continuing anyway.
* Using HTTP2, server supports multiplexing
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* h2h3 [:method: GET]
* h2h3 [:path: /]
* h2h3 [:scheme: https]
* h2h3 [:authority: 192.168.49.2]
* h2h3 [user-agent: curl/7.85.0]
* h2h3 [accept: */*]
* Using Stream ID: 1 (easy handle 0x561c242ff950)
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
> GET / HTTP/2
> Host: 192.168.49.2
> user-agent: curl/7.85.0
> accept: */*
>
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* TLSv1.2 (IN), TLS header, Supplemental data (23):
< HTTP/2 200
< date: Sat, 22 Oct 2022 10:05:50 GMT
< content-type: text/html; charset=utf-8
..... html of the page
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* Connection #0 to host 192.168.49.2 left intact
</code></pre>
<p>Kindly help.</p>
<p><strong>Update 2 days Later:</strong></p>
<p>I debugged and found I have installed nginx ingress with the command:</p>
<pre><code>minikube addons enable ingress
</code></pre>
<p>It installs the ingress in <code>ingress-nginx</code> namespace while my secret is in <code>default</code> namespace. Can this be the problem, if so what is the solution?</p>
| yogihosting | <p>You have an error in your ingress manifest, here:</p>
<pre><code>rules:
- host: example.com
- http:
    paths:
</code></pre>
<p>You have created two rules, the first matches <code>host: example.com</code> but defines no paths or backends; the second matches path <code>/</code> but doesn't set a <code>host</code>. You want:</p>
<pre><code>rules:
- host: example.com
  http:
    paths:
</code></pre>
<hr />
<blockquote>
<p>It installs the ingress in ingress-nginx namespace while my secret is in default namespace. Can this be the problem, if so what is the solution?</p>
</blockquote>
<p>This is not a problem: it is the expected configuration. Your SSL secrets should be installed in the same namespace as your application and Ingress.</p>
<hr />
<p>I've been playing with this a bit over the past couple of days, and I'm not sure you can get this to operate the way you want without using a hostname. Fortunately, setting up a hostname to use during local development is relatively straightforward.</p>
<p>In most cases you can edit your <code>/etc/hosts</code> file. For example, if your application is hosted on 192.168.49.2, then you would add an entry like this to <code>/etc/hosts</code> to access your application at <code>https://example.com</code>:</p>
<pre><code>192.168.49.2 example.com
</code></pre>
<p>You can add multiple hostname aliases, which allows you to use multiple hostname-based Ingress resources on your cluster:</p>
<pre><code>192.168.49.2 example.com myapp.internal anotherapp.dev
</code></pre>
<p>When you're testing with <code>curl</code>, you can use the <code>--resolve</code> option to accomplish the same thing:</p>
<pre><code>curl --resolve example.com:443:192.168.49.2 -kv https://example.com
</code></pre>
<p>So for example, if I deploy the following Ingress on my local cluster:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami
spec:
  tls:
    - secretName: myssl
      hosts:
        - example.com
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami
                port:
                  name: http
</code></pre>
<p>With the following entry in <code>/etc/hosts</code>:</p>
<pre><code>$ grep example.com /etc/hosts
193.168.1.200 example.com
</code></pre>
<p>Running <code>curl -skv https://example.com</code> shows that the ingress is using my custom certificate rather than the default ingress certificate:</p>
<pre><code>[...]
* Server certificate:
* subject: CN=example.com
* start date: Oct 23 12:52:45 2022 GMT
* expire date: Oct 23 12:52:45 2023 GMT
* issuer: CN=example.com
[...]
</code></pre>
| larsks |
<p>Yesterday, <a href="https://github.com/naftulikay/katyperry" rel="noreferrer">I built a full-featured example</a> which uses Terraform to create a network and a GKE cluster in Google Cloud Platform. The whole thing runs in Vagrant on a CentOS 7 VM and installs both <code>gcloud</code>, <code>kubectl</code>, and <code>helm</code>. I also <a href="https://github.com/naftulikay/katyperry/blob/master/doc/SPINNAKER.md" rel="noreferrer">extended the example</a> to use Helm to install Spinnaker.</p>
<p>The GKE cluster is called <code>gke-test-1</code>. In my documentation I documented getting <code>kubectl</code> setup:</p>
<pre><code>gcloud container clusters get-credentials --region=us-west1 gke-test-1
</code></pre>
<p>After this, I was able to use various <code>kubectl</code> commands to <code>get nodes</code>, <code>get pods</code>, <code>get services</code>, and <code>get deployments</code>, as well as all other cluster management commands. I was able to also use Helm to install Tiller and ultimately deploy Spinnaker.</p>
<p>However, today, the same process doesn't work for me. I spun up the network, subnet, GKE cluster, and the node pool, and whenever I try to use commands to get various resources, I get this response:</p>
<pre><code>[vagrant@katyperry vagrant]$ kubectl get nodes
No resources found.
Error from server (NotAcceptable): unknown (get nodes)
[vagrant@katyperry vagrant]$ kubectl get pods
No resources found.
Error from server (NotAcceptable): unknown (get pods)
[vagrant@katyperry vagrant]$ kubectl get services
No resources found.
Error from server (NotAcceptable): unknown (get services)
[vagrant@katyperry vagrant]$ kubectl get deployments
No resources found.
Error from server (NotAcceptable): unknown (get deployments.extensions)
</code></pre>
<p>Interestingly enough, it seems that some commands do work:</p>
<pre><code>[vagrant@katyperry vagrant]$ kubectl describe nodes | head
Name: gke-gke-test-1-default-253fb645-scq8
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/fluentd-ds-ready=true
beta.kubernetes.io/instance-type=n1-standard-4
beta.kubernetes.io/os=linux
cloud.google.com/gke-nodepool=default
failure-domain.beta.kubernetes.io/region=us-west1
failure-domain.beta.kubernetes.io/zone=us-west1-b
kubernetes.io/hostname=gke-gke-test-1-default-253fb645-scq8
</code></pre>
<p>When I open a shell in Google Cloud console, after running the same login command, I'm able to use <code>kubectl</code> to do all of the above:</p>
<pre><code>naftuli_kay@naftuli-test:~$ gcloud beta container clusters get-credentials gke-test-1 --region us-west1 --project naftuli-test
Fetching cluster endpoint and auth data.
kubeconfig entry generated for gke-test-1.
naftuli_kay@naftuli-test:~$ kubectl get pods
No resources found.
naftuli_kay@naftuli-test:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-gke-test-1-default-253fb645-scq8 Ready <none> 40m v1.8.10-gke.0
gke-gke-test-1-default-253fb645-tfns Ready <none> 40m v1.8.10-gke.0
gke-gke-test-1-default-8bf306fc-n8jz Ready <none> 40m v1.8.10-gke.0
gke-gke-test-1-default-8bf306fc-r0sq Ready <none> 40m v1.8.10-gke.0
gke-gke-test-1-default-aecb57ba-85p4 Ready <none> 40m v1.8.10-gke.0
gke-gke-test-1-default-aecb57ba-n7n3 Ready <none> 40m v1.8.10-gke.0
naftuli_kay@naftuli-test:~$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.64.1 <none> 443/TCP 43m
naftuli_kay@naftuli-test:~$ kubectl get deployments
No resources found.
</code></pre>
<p>The only difference I can see is the <code>kubectl</code> version: Vagrant has the latest version, 1.11.0, while the Google Cloud console has 1.9.7.</p>
<p>I will attempt to downgrade.</p>
<p>Is this a known issue and what, if anything, can I do to work around it?</p>
<hr>
<p><strong>EDIT:</strong> This is reproducible and I can't find a way to prevent it from recurring. I tore down all of my infrastructure and then stood it up again. The Terraform is available <a href="https://github.com/naftulikay/katyperry" rel="noreferrer">here</a>. </p>
<p>After provisioning the resources, I waited until the cluster reported being healthy:</p>
<pre><code>[vagrant@katyperry vagrant]$ gcloud container clusters describe \
--region=us-west1 gke-test-1 | grep -oP '(?<=^status:\s).*'
RUNNING
</code></pre>
<p>I then setup my login credentials:</p>
<pre><code>[vagrant@katyperry vagrant]$ gcloud container clusters get-credentials \
--region=us-west1 gke-test-1
</code></pre>
<p>I again attempted to get nodes:</p>
<pre><code>[vagrant@katyperry vagrant]$ kubectl get nodes
No resources found.
Error from server (NotAcceptable): unknown (get nodes)
</code></pre>
<p>The cluster appears green in the Google Cloud dashboard:</p>
<p><a href="https://i.stack.imgur.com/AiDR7.png" rel="noreferrer"><img src="https://i.stack.imgur.com/AiDR7.png" alt="enter image description here"></a></p>
<p>Apparently, this is a reproducible problem, as I'm able to recreate it using the same Terraform and commands. </p>
| Naftuli Kay | <p>After successfully reproducing the issue multiple times by destroying and recreating all the infrastructure, I found <a href="https://gitlab.com/charts/gitlab/merge_requests/227#note_68683418" rel="noreferrer">some arcane post on GitLab</a> that mentions <a href="https://github.com/kubernetes/kubernetes/issues/61943#issuecomment-377586904" rel="noreferrer">a Kubernetes GitHub issue</a> that seems to indicate:</p>
<blockquote>
<p>...in order to maintain compatibility with 1.8.x servers (which are within the supported version skew of +/- one version)</p>
</blockquote>
<p>Emphasis on the "+/- one version."</p>
<p><a href="https://github.com/naftulikay/katyperry/pull/1" rel="noreferrer">Upgrading the masters and the workers to Kubernetes 1.10</a> seems to entirely have addressed the issue, as I can now list nodes and pods with impunity:</p>
<pre><code>[vagrant@katyperry vagrant]$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.4-gke.2", GitCommit:"eb2e43842aaa21d6f0bb65d6adf5a84bbdc62eaf", GitTreeState:"clean", BuildDate:"2018-06-15T21:48:39Z", GoVersion:"go1.9.3b4", Compiler:"gc", Platform:"linux/amd64"}
[vagrant@katyperry vagrant]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-gke-test-1-default-5989a78d-dpk9 Ready <none> 42s v1.10.4-gke.2
gke-gke-test-1-default-5989a78d-kh9b Ready <none> 58s v1.10.4-gke.2
gke-gke-test-1-default-653ba633-091s Ready <none> 46s v1.10.4-gke.2
gke-gke-test-1-default-653ba633-4zqq Ready <none> 46s v1.10.4-gke.2
gke-gke-test-1-default-848661e8-cv53 Ready <none> 53s v1.10.4-gke.2
gke-gke-test-1-default-848661e8-vfr6 Ready <none> 52s v1.10.4-gke.2
</code></pre>
<p>It appears that Google Cloud Platform's cloud shell pins <code>kubectl</code> to 1.9, which is within the supported version skew described above.</p>
<p>Thankfully, the Kubernetes RHEL repository has a bunch of versions to choose from so it's possible to pin:</p>
<pre><code>[vagrant@katyperry gke]$ yum --showduplicates list kubectl
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.usc.edu
* epel: sjc.edge.kernel.org
* extras: mirror.sjc02.svwh.net
* updates: mirror.linuxfix.com
Installed Packages
kubectl.x86_64 1.11.0-0 @kubernetes
Available Packages
kubectl.x86_64 1.5.4-0 kubernetes
kubectl.x86_64 1.6.0-0 kubernetes
kubectl.x86_64 1.6.1-0 kubernetes
kubectl.x86_64 1.6.2-0 kubernetes
kubectl.x86_64 1.6.3-0 kubernetes
kubectl.x86_64 1.6.4-0 kubernetes
kubectl.x86_64 1.6.5-0 kubernetes
kubectl.x86_64 1.6.6-0 kubernetes
kubectl.x86_64 1.6.7-0 kubernetes
kubectl.x86_64 1.6.8-0 kubernetes
kubectl.x86_64 1.6.9-0 kubernetes
kubectl.x86_64 1.6.10-0 kubernetes
kubectl.x86_64 1.6.11-0 kubernetes
kubectl.x86_64 1.6.12-0 kubernetes
kubectl.x86_64 1.6.13-0 kubernetes
kubectl.x86_64 1.7.0-0 kubernetes
kubectl.x86_64 1.7.1-0 kubernetes
kubectl.x86_64 1.7.2-0 kubernetes
kubectl.x86_64 1.7.3-1 kubernetes
kubectl.x86_64 1.7.4-0 kubernetes
kubectl.x86_64 1.7.5-0 kubernetes
kubectl.x86_64 1.7.6-1 kubernetes
kubectl.x86_64 1.7.7-1 kubernetes
kubectl.x86_64 1.7.8-1 kubernetes
kubectl.x86_64 1.7.9-0 kubernetes
kubectl.x86_64 1.7.10-0 kubernetes
kubectl.x86_64 1.7.11-0 kubernetes
kubectl.x86_64 1.7.14-0 kubernetes
kubectl.x86_64 1.7.15-0 kubernetes
kubectl.x86_64 1.7.16-0 kubernetes
kubectl.x86_64 1.8.0-0 kubernetes
kubectl.x86_64 1.8.1-0 kubernetes
kubectl.x86_64 1.8.2-0 kubernetes
kubectl.x86_64 1.8.3-0 kubernetes
kubectl.x86_64 1.8.4-0 kubernetes
kubectl.x86_64 1.8.5-0 kubernetes
kubectl.x86_64 1.8.6-0 kubernetes
kubectl.x86_64 1.8.7-0 kubernetes
kubectl.x86_64 1.8.8-0 kubernetes
kubectl.x86_64 1.8.9-0 kubernetes
kubectl.x86_64 1.8.10-0 kubernetes
kubectl.x86_64 1.8.11-0 kubernetes
kubectl.x86_64 1.8.12-0 kubernetes
kubectl.x86_64 1.8.13-0 kubernetes
kubectl.x86_64 1.8.14-0 kubernetes
kubectl.x86_64 1.9.0-0 kubernetes
kubectl.x86_64 1.9.1-0 kubernetes
kubectl.x86_64 1.9.2-0 kubernetes
kubectl.x86_64 1.9.3-0 kubernetes
kubectl.x86_64 1.9.4-0 kubernetes
kubectl.x86_64 1.9.5-0 kubernetes
kubectl.x86_64 1.9.6-0 kubernetes
kubectl.x86_64 1.9.7-0 kubernetes
kubectl.x86_64 1.9.8-0 kubernetes
kubectl.x86_64 1.10.0-0 kubernetes
kubectl.x86_64 1.10.1-0 kubernetes
kubectl.x86_64 1.10.2-0 kubernetes
kubectl.x86_64 1.10.3-0 kubernetes
kubectl.x86_64 1.10.4-0 kubernetes
kubectl.x86_64 1.10.5-0 google-cloud-sdk
kubectl.x86_64 1.10.5-0 kubernetes
kubectl.x86_64 1.11.0-0 kubernetes
</code></pre>
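<p>Pinning is then just a matter of installing an explicit package version, for example (the version below is only an illustration; pick whatever falls within your cluster's supported skew):</p>
<pre><code># remove the too-new client, then install a pinned version
sudo yum remove -y kubectl
sudo yum install -y kubectl-1.10.5-0
kubectl version --client
</code></pre>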
<hr>
<p><strong>EDIT:</strong> I have found the <a href="https://github.com/kubernetes/kubernetes/pull/61419" rel="noreferrer">actual pull request that mentions this incompatibility</a>. I have also found <a href="https://kubernetes.io/docs/imported/release/notes/" rel="noreferrer">buried in the release notes</a> the following information:</p>
<blockquote>
<p>kubectl: This client version requires the <strong><code>apps/v1</code></strong> API, so it will not work against a cluster version older than v1.9.0. Note that kubectl only guarantees compatibility with clusters that are +/- [one] minor version away. </p>
</blockquote>
<h2>TL;DR</h2>
<p>This entire problem was an incompatibility between <code>kubectl</code> 1.11 and Kubernetes 1.8.</p>
| Naftuli Kay |
<p>Running the MinIO operator on K8s is the solution I need to be able to create many MinIO tenants (installs) to serve my needs. Its ability to manage erasure coding and all the configuration is great.
For the disks under MinIO it creates PVCs on K8s using the storage class you choose.</p>
<p>We are currently using Rook-Ceph to provide a distributed file system across the compute nodes. This also uses erasure coding.</p>
<p>So MinIO is using EC on top of Rook-Ceph EC - everything is being stored many, many times, which is super inefficient.</p>
<p>Does anybody know of a storage subsystem (CSI) that would allow MinIO to place its PVs on different physical underlying drives, as you would for a single MinIO server on physical hardware?</p>
<p>Seems the missing key part of the puzzle ...</p>
| Simon Thompson | <p>Weeks of looking, and immediately after I post this I find the answer!!</p>
<p>So DirectCSI would seem to be the answer
<a href="https://github.com/minio/directpv" rel="nofollow noreferrer">https://github.com/minio/directpv</a></p>
<p><a href="https://www.youtube.com/watch?v=iFTQmHpIsfQ" rel="nofollow noreferrer">https://www.youtube.com/watch?v=iFTQmHpIsfQ</a></p>
| Simon Thompson |
<p>I need to update a secret with a specific value (the secret contains additional data). My question is how I can update just that value and not all the secret data (I don't want to override the existing data). I mean, if the secret has additional entries I don't want to override them, just the entry <code>foo</code>.</p>
<pre><code>updSec := v1.Secret{
    TypeMeta: metav1.TypeMeta{},
    ObjectMeta: metav1.ObjectMeta{
        Name:      "d-values",
        Namespace: "terv",
    },
    Immutable:  nil,
    Data:       nil,
    StringData: nil,
    Type:       "Opaque",
}
updSec.Data["foo"] = newVal
if err := r.Client.Update(ctx, &updSec); err != nil {
    return ctrl.Result{}, err
}
</code></pre>
<p>The issue is that the secret already exists, and here I'm creating a new object and am not sure how to do it right... For the secret called <code>d-values</code> I just need to update the value of the key <code>foo</code> to <code>newVal</code>.</p>
<p><strong>update</strong></p>
<p>When trying the code in the answer, after I run</p>
<p><code>patch, err := yaml.Marshal(updSec)</code></p>
<p>the data looks like the following, and the patch fails with an error. Any idea if that's related?</p>
<p>If I try with <code>c.Client.Update</code> it works, but not with <code>Patch</code>. But <code>Patch</code> is the right way, since if there are existing entries it should keep them.</p>
<p><a href="https://i.stack.imgur.com/De5zu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/De5zu.png" alt="enter image description here" /></a></p>
| tj holwik | <p>I don't think you can update a single key using the <code>Update</code> method, but you can certainly do that using <code>Patch</code> instead. Here's an example that uses a StrategicMergePatch; it will replace the key <code>val2</code> in a secret with the value <code>newval</code>:</p>
<pre class="lang-golang prettyprint-override"><code>package main
import (
"context"
"encoding/json"
"flag"
"fmt"
"path/filepath"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/util/homedir"
)
func main() {
var kubeconfig *string
var namespace *string
var secretname *string
namespace = flag.String("namespace", "", "namespace of secret")
secretname = flag.String("name", "", "name of secret")
if home := homedir.HomeDir(); home != "" {
kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
} else {
kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
}
flag.Parse()
if *namespace == "" {
panic(fmt.Errorf("you must specify a namespace"))
}
if *secretname == "" {
panic(fmt.Errorf("you must specify a secret name"))
}
config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
if err != nil {
panic(err)
}
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
panic(err)
}
secretClient := clientset.CoreV1().Secrets(*namespace)
ctx := context.TODO()
updSec := v1.Secret{
Data: map[string][]byte{
"val2": []byte("newval"),
},
}
payloadBytes, err := json.Marshal(updSec)
if err != nil {
panic(err)
}
if _, err = secretClient.Patch(ctx, *secretname,
types.StrategicMergePatchType, payloadBytes, metav1.PatchOptions{}); err != nil {
panic(err)
}
// Fetch updated secret
sec, err := secretClient.Get(ctx, *secretname, metav1.GetOptions{})
if err != nil {
panic(err)
}
secJson, err := json.MarshalIndent(sec, "", " ")
if err != nil {
panic(err)
}
fmt.Print(string(secJson))
}
</code></pre>
<p>For example, if I create a secret like this:</p>
<pre><code>kubectl create secret generic \
--from-literal val1=key1 \
--from-literal val2=key2 example
</code></pre>
<p>And then run the above code like this:</p>
<pre><code>go run main.go -namespace default -name example
</code></pre>
<p>The code will output the update secret. Looking at the <code>data</code> section, we see:</p>
<pre><code> "data": {
"val1": "a2V5MQ==",
"val2": "bmV3dmFs"
},
</code></pre>
<p>And if we decode <code>val2</code> we see:</p>
<pre><code>$ kubectl get secret example -o json | jq '.data.val2|@base64d'
"newval"
</code></pre>
<h2>Using the Operator SDK</h2>
<p>If you're working with the Operator SDK, you can use <code>Update</code> if you're first reading the existing value, like this:</p>
<pre><code>// Read the existing secret
secret := &corev1.Secret{}
if err := r.Get(ctx, req.NamespacedName, secret); err != nil {
    panic(err)
}

// Check if it needs to be modified
val, ok := secret.Data["val2"]

// If yes, update the secret with a new value and then write
// the entire object back with Update
if !ok || !bytes.Equal(val, []byte("val2")) {
    ctxlog.Info("needs update", "secret", secret)
    secret.Data["val2"] = []byte("newval")
    if err := r.Update(ctx, secret); err != nil {
        panic(err)
    }
}
</code></pre>
<p>You can use the <code>Patch</code> method if you only want to submit a partial update:</p>
<pre><code>if !ok || !bytes.Equal(val, []byte("val2")) {
    ctxlog.Info("needs update", "secret", secret)
    newVal := corev1.Secret{
        Data: map[string][]byte{
            "val2": []byte("newval"),
        },
    }

    patch, err := json.Marshal(newVal)
    if err != nil {
        panic(err)
    }

    if err := r.Client.Patch(ctx, secret, client.RawPatch(types.StrategicMergePatchType, patch)); err != nil {
        panic(err)
    }
}
</code></pre>
<p>This is pretty much identical to the earlier example. There are examples of using the client.Patch method <a href="https://v0-18-x.sdk.operatorframework.io/docs/golang/references/client/#patch" rel="nofollow noreferrer">in the docs</a>, but I'll be honest, I don't find the example very clear.</p>
| larsks |
<p>I am running the command</p>
<pre><code>kubectl create -f mypod.yaml --namespace=mynamespace
</code></pre>
<p>as I need to specify the environment variables through a configMap I created and specified in the mypod.yaml file. Kubernetes returns</p>
<blockquote>
<p>pod/mypod created</p>
</blockquote>
<p>but <code>kubectl get pods</code> doesn't show it in my list of pods and I can't access it by name as if it does not exist. However, if I try to create it again, it says that the pod is already created.</p>
<p>What may cause this, and how would I diagnose the problem?</p>
| zcleghern | <p>By default, <code>kubectl</code> commands operate in the <code>default</code> namespace. But you created your pod in the <code>mynamespace</code> namespace.</p>
<p>Try one of the following:</p>
<pre><code>kubectl get pods -n mynamespace
kubectl get pods --all-namespaces
</code></pre>
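<p>If you don't want to pass <code>-n mynamespace</code> on every command, you can also make it the default namespace for your current context:</p>
<pre><code>kubectl config set-context --current --namespace=mynamespace
</code></pre>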
| Oliver Charlesworth |
<p>I need Prometheus to scrape several mongodb exporters one after another in order to compute a valid replication lag. However, the targets are scraped with a difference of several dozen seconds between them, which makes replication lag impossible to compute.</p>
<p>The job yaml is below:</p>
<pre><code>- job_name: mongo-storage
  honor_timestamps: true
  scrape_interval: 1m
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  static_configs:
  - targets:
    - mongo-1a-exporter.monitor:9216
    - mongo-2a-exporter.monitor:9216
    - mongo-3a-exporter.monitor:9216
    - mongos-exporter.monitor:9216
    - mongo-1b-exporter.monitor:9216
    - mongo-2b-exporter.monitor:9216
    - mongo-3b-exporter.monitor:9216
    labels:
      cluster: mongo-storage
</code></pre>
| Adrian | <p>This isn't possible, Prometheus makes no guarantees about the phase of scrapes or rule evaluations. Nor is this something you should depend upon, as it'd be very fragile.</p>
<p>I'd aim for knowing the lag within a scrape interval, rather than trying to get it perfect. You generally care if replication is completely broken, rather than if it's slightly delayed. A heartbeat job could also help.</p>
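<p>As a sketch of the "lag within a scrape interval" approach: if your exporter exposes replica-set optimes (the metric name below comes from the Percona MongoDB exporter and is an assumption — check your own exporter's <code>/metrics</code> output), something along these lines gives an approximate lag in seconds:</p>
<pre><code># newest primary optime minus oldest secondary optime, per cluster
max by (cluster) (mongodb_mongod_replset_member_optime_date{state="PRIMARY"})
- min by (cluster) (mongodb_mongod_replset_member_optime_date{state="SECONDARY"})
</code></pre>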
| brian-brazil |
<p>I'm in learning mode here, so please forgive me if this is a stupid question...</p>
<p>I have just installed microk8s on ubuntu following the instructions at <a href="https://ubuntu.com/tutorials/install-a-local-kubernetes-with-microk8s" rel="nofollow noreferrer">https://ubuntu.com/tutorials/install-a-local-kubernetes-with-microk8s</a></p>
<p>Everything works. The "microbot" application gets deployed and exposed and creates a simple web server. But what surprised me is that after I stop microk8s (with "microk8s stop"), the web server is still apparently up and running. It continues to respond to curl with its simple page content.</p>
<p>Is this expected behavior? Do the pods continue to run after the orchestrator has stopped?</p>
<p>Also, I was trying to figure out what microk8s is doing with the network. It fires up its dashboard on 10.152.183.203, but when I look at the interfaces and routing tables on my host, I can't figure out how traffic is being routed to that destination. And if I run tcpdump I can't seem to capture any of the traffic being sent to that address.</p>
<p>Any explanation of what's going on here would be much appreciated!</p>
<ul>
<li>Duncan</li>
</ul>
| Duncan | <blockquote>
<p>But what surprised me is that after I stop microk8s (with "microk8s stop"), the web server is still apparently up and running. It continues to respond to curl with its simple page content.</p>
<p>Is this expected behavior? Do the pods continue to run after the orchestrator has stopped?</p>
</blockquote>
<p>That's not expected behavior, and I can't reproduce it. What I see is that the service is available for a period of several seconds after running <code>microk8s stop</code>, but eventually everything gets shut down.</p>
<blockquote>
<p>Also, I was trying to figure out what microk8s is doing with the network. It fires up its dashboard on 10.152.183.203, but when I look at the interfaces and routing tables on my host, I can't figure out how traffic is being routed to that destination.</p>
</blockquote>
<p>I've deployed Microk8s locally, and the dashboard Service looks like this:</p>
<pre><code>root@ubuntu:~# kubectl -n kube-system get service kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard ClusterIP 10.152.183.151 <none> 443/TCP 14m
</code></pre>
<p>As you note in your question, I can access this at <code>http://10.152.183.151</code>, but there are no local interfaces on that network:</p>
<pre><code>root@ubuntu:~# ip addr |grep 10.152
<no results>
</code></pre>
<p>And there are no meaningful routes to that network. E.g., this shows that access to that ip would be via the default gateway, which doesn't make any sense:</p>
<pre><code>root@ubuntu:~# ip route get 10.152.183.151
10.152.183.151 via 192.168.122.1 dev enp1s0 src 192.168.122.72 uid 0
cache
</code></pre>
<p>What's going on? It turns out that microk8s sets up a bunch of NAT rules in your local firewall configuration. If we look for the dashboard address in the NAT table, we find:</p>
<pre><code>root@ubuntu:~# iptables-legacy -t nat -S | grep 10.152.183.151
-A KUBE-SERVICES -d 10.152.183.151/32 -p tcp -m comment --comment "kube-system/kubernetes-dashboard cluster IP" -m tcp --dport 443 -j KUBE-SVC-4HQ2X6RJ753IMQ2F
-A KUBE-SVC-4HQ2X6RJ753IMQ2F ! -s 10.1.0.0/16 -d 10.152.183.151/32 -p tcp -m comment --comment "kube-system/kubernetes-dashboard cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
</code></pre>
<p>If we follow the chain, we find that:</p>
<ol>
<li><p>Packets going to <code>10.152.183.151</code> enter the NAT <code>PREROUTING</code> chain, which sends them to the <code>KUBE-SERVICES</code> chain.</p>
</li>
<li><p>In the <code>KUBE-SERVICES</code> chain, packets to the dashboard (for tcp port 443) are sent to the <code>KUBE-SVC-4HQ2X6RJ753IMQ2F</code> chain.</p>
</li>
<li><p>In the <code>KUBE-SVC-4HQ2X6RJ753IMQ2F</code>, packets are first sent to the <code>KUBE-MARK-MASQ</code> chain, which sets a mark on the packet (which is used elsewhere in the configuration), and then gets sent to the <code>KUBE-SEP-SZAWMA3BPGJYVHOD</code> chain:</p>
<pre><code>root@ubuntu:~# iptables-legacy -t nat -S KUBE-SVC-4HQ2X6RJ753IMQ2F
-A KUBE-SVC-4HQ2X6RJ753IMQ2F ! -s 10.1.0.0/16 -d 10.152.183.151/32 -p tcp -m comment --comment "kube-system/kubernetes-dashboard cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-4HQ2X6RJ753IMQ2F -m comment --comment "kube-system/kubernetes-dashboard -> 10.1.243.198:8443" -j KUBE-SEP-SZAWMA3BPGJYVHOD
</code></pre>
</li>
<li><p>In the <code>KUBE-SEP-SZAWMA3BPGJYVHOD</code> chain, the packets finally hit a <code>DNAT</code> rule that maps the connection to the IP of the pod:</p>
<pre><code>root@ubuntu:~# iptables-legacy -t nat -S KUBE-SEP-SZAWMA3BPGJYVHOD
-N KUBE-SEP-SZAWMA3BPGJYVHOD
-A KUBE-SEP-SZAWMA3BPGJYVHOD -s 10.1.243.198/32 -m comment --comment "kube-system/kubernetes-dashboard" -j KUBE-MARK-MASQ
-A KUBE-SEP-SZAWMA3BPGJYVHOD -p tcp -m comment --comment "kube-system/kubernetes-dashboard" -m tcp -j DNAT --to-destination 10.1.243.198:8443
</code></pre>
<p>We know that <code>10.1.243.198</code> is the Pod IP because we can see it like this:</p>
<pre><code>kubectl -n kube-system get pod kubernetes-dashboard-74b66d7f9c-plj8f -o jsonpath='{.status.podIP}'
</code></pre>
</li>
</ol>
<p>So, we can reach the dashboard at <code>10.152.183.151</code> because the <code>PREROUTING</code> chain ultimately hits a <code>DNAT</code> rule that maps the "clusterip" of the service to the pod ip.</p>
<blockquote>
<p>And if I run tcpdump I can't seem to capture any of the traffic being sent to that address.</p>
</blockquote>
<p>Based on the above discussion, if we use the pod ip instead, we'll see the traffic we expect. The following shows the result of me running <code>curl -k https://10.152.183.151</code> in another window:</p>
<pre><code>root@ubuntu:~# tcpdump -n -i any -c10 host 10.1.243.198
tcpdump: data link type LINUX_SLL2
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
19:20:33.441481 cali7ef1137a66d Out IP 192.168.122.72.33034 > 10.1.243.198.8443: Flags [S], seq 228813760, win 64240, options [mss 1460,sackOK,TS val 3747829344 ecr 0,nop,wscale 7], length 0
19:20:33.441494 cali7ef1137a66d In IP 10.1.243.198.8443 > 192.168.122.72.33034: Flags [S.], seq 3905988324, ack 228813761, win 65160, options [mss 1460,sackOK,TS val 1344719721 ecr 3747829344,nop,wscale 7], length 0
19:20:33.441506 cali7ef1137a66d Out IP 192.168.122.72.33034 > 10.1.243.198.8443: Flags [.], ack 1, win 502, options [nop,nop,TS val 3747829344 ecr 1344719721], length 0
19:20:33.442754 cali7ef1137a66d Out IP 192.168.122.72.33034 > 10.1.243.198.8443: Flags [P.], seq 1:518, ack 1, win 502, options [nop,nop,TS val 3747829345 ecr 1344719721], length 517
19:20:33.442763 cali7ef1137a66d In IP 10.1.243.198.8443 > 192.168.122.72.33034: Flags [.], ack 518, win 506, options [nop,nop,TS val 1344719722 ecr 3747829345], length 0
19:20:33.443004 cali7ef1137a66d In IP 10.1.243.198.8443 > 192.168.122.72.33034: Flags [P.], seq 1:772, ack 518, win 506, options [nop,nop,TS val 1344719722 ecr 3747829345], length 771
19:20:33.443017 cali7ef1137a66d Out IP 192.168.122.72.33034 > 10.1.243.198.8443: Flags [.], ack 772, win 501, options [nop,nop,TS val 3747829345 ecr 1344719722], length 0
19:20:33.443677 cali7ef1137a66d Out IP 192.168.122.72.33034 > 10.1.243.198.8443: Flags [P.], seq 518:582, ack 772, win 501, options [nop,nop,TS val 3747829346 ecr 1344719722], length 64
19:20:33.443680 cali7ef1137a66d In IP 10.1.243.198.8443 > 192.168.122.72.33034: Flags [.], ack 582, win 506, options [nop,nop,TS val 1344719723 ecr 3747829346], length 0
19:20:33.443749 cali7ef1137a66d In IP 10.1.243.198.8443 > 192.168.122.72.33034: Flags [P.], seq 772:827, ack 582, win 506, options [nop,nop,TS val 1344719723 ecr 3747829346], length 55
10 packets captured
38 packets received by filter
0 packets dropped by kernel
</code></pre>
| larsks |
<p>If I use NodePort in the <code>yml</code> file it gives a port above 30000,
but my users do not want to remember that port; they want to use 80. My <code>kubernetes</code> cluster is on <code>baremetal</code>.
How can I solve that?</p>
| yasin lachini | <p>Kubernetes doesn't allow you to expose low ports via the Node Port service type by design. The idea is that there is a significant chance of a port conflict if users are allowed to set low port numbers for their Node Port services.</p>
<p>If you really want to use port 80, you're going to have to either use a Load Balancer service type, or route your traffic through an Ingress. If you were on a cloud service, then either option would be fairly straight forward. However, since you're on bare metal, both options are going to be very involved. You're going to have to configure the load balancer or ingress functionality yourself in order to use either option, and it's going to be rough, sorry.</p>
<p>If you want to go forward with this, you'll have to read through a bunch of documentation to figure out what you want to implement and how to implement it.</p>
<p><a href="https://www.weave.works/blog/kubernetes-faq-how-can-i-route-traffic-for-kubernetes-on-bare-metal" rel="nofollow noreferrer">https://www.weave.works/blog/kubernetes-faq-how-can-i-route-traffic-for-kubernetes-on-bare-metal</a></p>
| Swiss |
<p>I have a 3 node Kubernetes cluster running at home. I deployed traefik with helm, however, it never gets an external IP. Since this is in the private IP address space, shouldn't I expect the external IP to be something in the same address space? Am I missing something critical here?</p>
<pre><code>$ kubectl describe svc traefik --namespace kube-system
Name: traefik
Namespace: kube-system
Labels: app=traefik
chart=traefik-1.64.0
heritage=Tiller
release=traefik
Annotations: <none>
Selector: app=traefik,release=traefik
Type: NodePort
IP: 10.233.62.160
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 31111/TCP
Endpoints: 10.233.86.47:80
Port: https 443/TCP
TargetPort: httpn/TCP
NodePort: https 30690/TCP
Endpoints: 10.233.86.47:8880
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
$ kubectl get svc traefik --namespace kube-system -w
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
traefik NodePort 10.233.62.160 <none> 80:31111/TCP,443:30690/TCP 133m
</code></pre>
| farhany | <p>Use MetalLB to get an LB IP. More <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">here</a> on their site.</p>
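<p>As a rough sketch (assuming a recent MetalLB release with CRD-based configuration — the address range below is made up, so adjust it to your network), a Layer 2 setup looks like this; afterwards, switch the traefik Service from <code>NodePort</code> to <code>LoadBalancer</code> so it picks up an address from the pool:</p>
<pre><code>apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: home-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: home-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - home-pool
</code></pre>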
| farhany |
<p>I am trying to update a helm-deployed deployment so that it uses a secret stored as a k8s secret resource. This <em>must</em> be set as the STORAGE_PASSWORD environment variable in my pod.</p>
<p>In my case, the secret is in secrets/redis and the data item is redis-password:</p>
<pre>
$ kubectl get secret/redis -oyaml
apiVersion: v1
data:
  redis-password: XXXXXXXXXXXXXXXX=
kind: Secret
metadata:
  name: redis
type: Opaque
</pre>
<p>I have tried:</p>
<pre>
$ kubectl set env --from secret/redis deployment/gateway --keys=redis-password
Warning: key redis-password transferred to REDIS_PASSWORD
deployment.apps/gateway env updated
</pre>
<p>When I look in my updated deployment manifest, I see the variable has been added but (as suggested) the variable has been set to REDIS_PASSWORD:</p>
<pre>
- name: REDIS_PASSWORD
  valueFrom:
    secretKeyRef:
      key: redis-password
      name: redis
</pre>
<p>I have also tried <code>kubectl patch</code> with a <code>replace</code> operation, but I can't get the syntax correct to have the secret inserted.</p>
<p>How do I change the name of the environment variable to STORAGE_PASSWORD?</p>
| Jeff W | <p>Given a deployment that looks like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - image: alpinelinux/darkhttpd
          name: darkhttpd
          args:
            - --port
            - "9991"
          ports:
            - name: http
              protocol: TCP
              containerPort: 9991
          env:
            - name: EXAMPLE_VAR
              value: example value
</code></pre>
<p>The syntax for patching in your secret would look like:</p>
<pre><code>kubectl patch deploy/example --patch='
{
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "darkhttpd",
            "env": [
              {
                "name": "STORAGE_PASSWORD",
                "valueFrom": {
                  "secretKeyRef": {
                    "name": "redis",
                    "key": "redis-password"
                  }
                }
              }
            ]
          }
        ]
      }
    }
  }
}
'
</code></pre>
<p>Or using a JSONPatch style patch:</p>
<pre><code>kubectl patch --type json deploy/example --patch='
[
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/env/-",
    "value": {
      "name": "STORAGE_PASSWORD",
      "valueFrom": {
        "secretKeyRef": {
          "name": "redis",
          "key": "redis-password"
        }
      }
    }
  }
]
'
<p>Neither one is especially pretty because you're adding a complex nested structure to an existing complex nested structure.</p>
| larsks |
<p>I have Docker and Kubernetes (the "Enable Kubernetes" option checked in the Docker settings) installed on my local MacBook. I create containers using Docker, and after my machine restarts, those exact same containers are still present. However, if I create containers within pods using Kubernetes and the machine is restarted, I do see the containers, but they are like freshly created containers and not the same containers that existed before the restart.
What changes do I need to make so that even after a machine restart my containers within the pods remain the same as before the restart?</p>
| Puri | <p>Even at runtime, Kubernetes may move Pods at will (e.g. during scaling, cluster upgrades, adding and removing nodes, etc.).</p>
<p>It is a good idea to try to treat containers as 'cattle, not pets', i.e. don't expect them to be long-lived.</p>
<p>If you need a container to be 'the same' after restart, consider using <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#introduction" rel="nofollow noreferrer">Persisent Volumes</a> to store their state. Depending on your requirements, <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSets</a> may also be worth considering.</p>
<p>Or consider having them reload / recompute any additional data they need after startup. The mechanism for this will depend on your code.</p>
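<p>If you go the persistent-storage route mentioned above, a minimal sketch of a StatefulSet with a <code>volumeClaimTemplate</code> might look like this (the names, image, and storage size are placeholders):</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myregistry/myapp:latest
        volumeMounts:
        - name: data
          mountPath: /var/lib/myapp
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
</code></pre>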
| Paul J |
<p>I have a K8s cluster created in the context of the Linux Foundation's CKAD course (<a href="https://training.linuxfoundation.org/training/kubernetes-for-developers/" rel="nofollow noreferrer">LFD259</a>). So it is a "bare metal" cluster created with kubeadm.</p>
<p>So I have a metrics-server deployment running on the worker node:</p>
<pre><code>student@master:~$ k get deployments.apps metrics-server -o yaml | grep -A10 args
- args:
- --secure-port=4443
- --cert-dir=/tmp
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
- --metric-resolution=15s
- --kubelet-insecure-tls
image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
student@master:~$ k get pod metrics-server-6894588c69-fpvtt -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
metrics-server-6894588c69-fpvtt 1/1 Running 0 4d15h 192.168.171.98 worker <none> <none>
student@master:~$
</code></pre>
<p>It is my understanding that the pod's process runs inside a container running on the worker node. However, I am completely puzzled by the fact that the linux <code>ps</code> command "sees" it:</p>
<pre><code>student@worker:~$ ps aux | grep kubelet-preferred-address-types
ubuntu 1343092 0.3 0.6 752468 49612 ? Ssl Oct28 20:25 /metrics-server --secure-port=4443 --cert-dir=/tmp --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls
student 3310743 0.0 0.0 8184 2532 pts/0 S+ 17:39 0:00 grep --color=auto kubelet-preferred-address-types
student@worker:~$
</code></pre>
<p>What am I missing?</p>
| mark | <p>A container is just a process running on your host with some isolation features enabled. The isolation only works in one way: a container can't see resources on your host, but your host has access to all the resources running in a container.</p>
<p>Because a container is just a process, it shows up in <code>ps</code> (as do any processes that are spawned inside the container).</p>
<p>See e.g.:</p>
<ul>
<li>"<a href="https://www.redhat.com/en/topics/containers/whats-a-linux-container" rel="nofollow noreferrer">What is a Linux Container?</a>"</li>
</ul>
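<p>If you want to convince yourself that a process you see in <code>ps</code> really belongs to a container, you can inspect its cgroups and namespaces from the host. A quick sketch, reusing the PID from the <code>ps</code> output above (substitute your own):</p>
<pre><code># the cgroup path usually embeds the pod and container IDs
cat /proc/1343092/cgroup

# the process's namespaces differ from those of PID 1 on the host
sudo ls -l /proc/1343092/ns /proc/1/ns
</code></pre>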
| larsks |
<p>The following code will throw an <code>ApiException 410 resource too old</code> on the second <code>watch.stream()</code>:</p>
<pre><code># python3 -m venv venv
# source venv/bin/activate
# pip install 'kubernetes==23.3.0'
from kubernetes import client,config,watch

config.load_kube_config(context='my-eks-context')

v1 = client.CoreV1Api()
watcher = watch.Watch()

namespace = 'kube-system'
last_resource_version=0

# this watch will timeout in 5s to have a fast way to simulate a watch that need to be retried
for i in watcher.stream(v1.list_namespaced_pod, namespace, resource_version=last_resource_version, timeout_seconds=5):
    print(i['object'].metadata.resource_version)
    last_resource_version = i['object'].metadata.resource_version

# we retry the watch starting from the last resource version known
# but this will raise a kubernetes.client.exceptions.ApiException: (410)
# Reason: Expired: too old resource version: 379140622 (380367990)
for i in watcher.stream(v1.list_namespaced_pod, namespace, resource_version=last_resource_version, timeout_seconds=5):
    print('second loop', i['object'].metadata.resource_version)
    last_resource_version = i['object'].metadata.resource_version
</code></pre>
<p>The <a href="https://kubernetes.io/docs/reference/using-api/api-concepts/#efficient-detection-of-changes" rel="nofollow noreferrer">kubernetes documentation</a> states that:</p>
<blockquote>
<p>If a client watch is disconnected then that client can start a new watch from the last returned resourceVersion</p>
</blockquote>
<p>which is what I intended in the above code, which always gives the following exception:</p>
<pre><code>Traceback (most recent call last):
File "main.py", line 24, in <module>
File "/Users/rubelagu/git/python-kubernetes-client/venv/lib/python3.8/site-packages/kubernetes/watch/watch.py", line 182, in stream
raise client.rest.ApiException(
kubernetes.client.exceptions.ApiException: (410)
Reason: Expired: too old resource version: 379164133 (380432814)
</code></pre>
<p>What am I doing wrong?</p>
| RubenLaguna | <p>It seems that in the initial response to the watch (from an EKS cluster 1.21) the events can be returned in any order.</p>
<p>I did two subsequent watches two seconds apart and they contain the same 30 events in completely different ordering.</p>
<p>So it's not guaranteed that the last resource version that you see is actually the actual last and it's not guaranteed that you can resume from that <code>resourceVersion</code>/<code>resource_version</code>. Also you are not allowed to sort/collate those events by <code>resourceVersion</code>, since the <a href="https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions" rel="nofollow noreferrer">kubernetes documentation for Resource Version Semantics</a> explicity says:</p>
<blockquote>
<p>Resource versions must be treated as <strong>opaque</strong> [...] You <strong>must not</strong> assume resource versions are <strong>numeric or collatable</strong>.</p>
</blockquote>
<p>You must account for that by catching the <code>resource too old exception</code> and retrying without specifying a resource version, see below for an example:</p>
<pre><code>from kubernetes import client,config,watch
from kubernetes.client.exceptions import ApiException

config.load_kube_config(context='eks-prod')

v1 = client.CoreV1Api()
# v1 = config.new_client_from_config(context="eks-prod").CoreV1Api()
watcher = watch.Watch()

namespace = 'my-namespace'

def list_pods(resource_version=None):
    print('start watch from resource version: ', str(resource_version))
    try:
        for i in watcher.stream(v1.list_namespaced_pod, namespace, resource_version=resource_version, timeout_seconds=2):
            print(i['object'].metadata.resource_version)
            last_resource_version = i['object'].metadata.resource_version
    except ApiException as e:
        if e.status == 410: # Resource too old
            return list_pods(resource_version=None)
        else:
            raise
    return last_resource_version

last_resource_version = list_pods()
print('last_resource_version', last_resource_version)
list_pods(last_resource_version)
</code></pre>
| RubenLaguna |
<p>I found that we can create subcharts and conditionally include them as described here: <a href="https://stackoverflow.com/questions/54032974/helm-conditionally-install-subchart">Helm conditionally install subchart</a></p>
<p>I have just one template that I want to conditionally include in my chart, but I could not find anything in the docs. Is there such a feature?</p>
| Eduardo | <p>I discovered that empty templates are not loaded. I solved it by wrapping my yaml file content in an <code>if</code> condition.</p>
<pre><code>{{ if .Values.something }}
content of yaml file
{{ end }}
</code></pre>
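<p>With that guard in place, the template is rendered only when the value is truthy, e.g. (the flag name is just an example, and this assumes no default for it in <code>values.yaml</code>):</p>
<pre><code>helm install myrelease ./mychart --set something=true   # template rendered
helm install myrelease ./mychart                         # template skipped
</code></pre>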
| Eduardo |
<p>Is there any standard way to deploy a Flask app with gunicorn and nginx on a Kubernetes cluster?
I am trying to do it with a Dockerfile like the one below:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>FROM nginx:latest as base-image
RUN apt update
RUN apt -y install python3 python3-pip
RUN apt -y install build-essential
RUN mkdir /app
WORKDIR /app
COPY src src/
COPY src/.nginx/config /etc/nginx/conf.d/default.conf
COPY requirements.txt ./
RUN pip3 install -r requirements.txt
RUN pip3 install -e ./src
RUN pwd
RUN ls -l /app/src
# RUN pytest
EXPOSE 80
WORKDIR /app/src
CMD ["nginx", "-g", "daemon off;"]
ENTRYPOINT ["gunicorn" , "--bind","0.0.0.0:8000", "wsgi:app"]</code></pre>
<p>Is the solution to run two containers, one for gunicorn and one for nginx, inside one Kubernetes pod?</p>
| yahiya ayoub | <blockquote>
<p>Is the solution to run two containers, one for gunicorn and one for nginx, inside one Kubernetes pod?</p>
</blockquote>
<p>Yes. In Kubernetes or when simply running Docker on your local machine, it is always better to compose multiple containers rather than trying to stuff everything into a single container.</p>
<p>This part of your your Dockerfile:</p>
<pre><code>CMD ["nginx", "-g", "daemon off;"]
ENTRYPOINT ["gunicorn" , "--bind","0.0.0.0:8000", "wsgi:app"]
</code></pre>
<p>Isn't doing what you seem to expect. These two directives operate in concert:</p>
<ul>
<li>If defined, <code>ENTRYPOINT</code> is the command run by the container when it starts up.</li>
<li>The value of <code>CMD</code> is provided as an argument to the <code>ENTRYPOINT</code> script.</li>
</ul>
<p>You can read more about <code>ENTRYPOINT</code> in the <a href="https://docs.docker.com/engine/reference/builder/#entrypoint" rel="nofollow noreferrer">official documentation</a>.</p>
<p>A better design would be to use the official Python image for your Python app:</p>
<pre><code>FROM python:3.10
WORKDIR /app
COPY src src/
COPY requirements.txt ./
RUN pip3 install -r requirements.txt
RUN pip3 install -e ./src
WORKDIR /app/src
CMD ["gunicorn" , "--bind","0.0.0.0:8000", "wsgi:app"]
</code></pre>
<p>And then use the official <a href="https://hub.docker.com/_/nginx" rel="nofollow noreferrer">nginx image</a> to run the nginx service.</p>
<hr />
<p>An example deployment that uses two containers, one for your Python app and one for Nginx, might look like:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: flask-example
  name: flask-example
spec:
  selector:
    matchLabels:
      app: flask-example
  template:
    metadata:
      labels:
        app: flask-example
    spec:
      containers:
      - image: quay.io/larsks/example:flaskapp
        name: app
      - image: docker.io/nginx:mainline
        name: nginx
        ports:
        - containerPort: 80
          name: http
        volumeMounts:
        - mountPath: /etc/nginx/conf.d
          name: nginx-config
      volumes:
      - configMap:
          name: nginx-config-c65tttk45k
        name: nginx-config
</code></pre>
<p>In the above deployment, we're mounting the configuration for nginx from a ConfigMap.</p>
<p>You can find a complete deployable example of the above <a href="https://github.com/larsks/so-example-73785638/" rel="nofollow noreferrer">here</a>.</p>
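<p>To give a flavour of what the mounted nginx configuration might contain — this is a sketch, not the exact ConfigMap from the linked example — a server block that proxies to gunicorn over the pod's shared loopback interface:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  default.conf: |
    server {
      listen 80;
      location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      }
    }
</code></pre>
<p>Because both containers run in the same pod, nginx reaches gunicorn on <code>127.0.0.1:8000</code> without any extra Service.</p>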
| larsks |
<p>I have seen both <code>serviceAccount</code> and <code>serviceAccountName</code> been used in a pod manifest. What is the difference?</p>
| RubenLaguna | <p>There is no difference.</p>
<p><code>serviceAccount</code> is DEPRECATED and you should use <code>serviceAccountName</code> instead.</p>
<p>Quoting from the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#podspec-v1-core" rel="nofollow noreferrer">Kubernetes API docs > pod spec</a>:</p>
<blockquote>
<p>serviceAccount: <strong>Deprecated</strong>ServiceAccount is a deprecated alias for ServiceAccountName: Deprecated: <strong>Use serviceAccountName instead</strong></p>
</blockquote>
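<p>For reference, a minimal pod spec using the non-deprecated field (the service account name is a placeholder):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  serviceAccountName: my-service-account
  containers:
  - name: app
    image: busybox:stable
    command: ["sleep", "3600"]
</code></pre>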
| RubenLaguna |
<p>I have a service running in a k8s cluster, which I want to monitor using Prometheus Operator. The service has a <code>/metrics</code> endpoint, which returns simple data like:</p>
<pre><code>myapp_first_queue_length 12
myapp_first_queue_processing 2
myapp_first_queue_pending 10
myapp_second_queue_length 4
myapp_second_queue_processing 4
myapp_second_queue_pending 0
</code></pre>
<p>The API runs in multiple pods, behind a basic <code>Service</code> object:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: myapp-api
  labels:
    app: myapp-api
spec:
  ports:
  - port: 80
    name: myapp-api
    targetPort: 80
  selector:
    app: myapp-api
</code></pre>
<p>I've installed Prometheus using <code>kube-prometheus</code>, and added a <code>ServiceMonitor</code> object:</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: myapp-api
  labels:
    app: myapp-api
spec:
  selector:
    matchLabels:
      app: myapp-api
  endpoints:
  - port: myapp-api
    path: /api/metrics
    interval: 10s
</code></pre>
<p>Prometheus discovers all the pods running instances of the API, and I can query those metrics from the Prometheus graph. So far so good.</p>
<p>The issue is, those metrics are aggregate - each API instance/pod doesn't have its own queue, so there's no reason to collect those values from every instance. In fact it seems to invite confusion - if Prometheus collects the same value from 10 pods, it looks like the total value is 10x what it really is, unless you know to apply something like <code>avg</code>.</p>
<p>Is there a way to either tell Prometheus "this value is already aggregate and should always be presented as such" or better yet, tell Prometheus to just scrape the values once via the internal load balancer for that service, rather than hitting each pod?</p>
<p><strong>edit</strong></p>
<p>The actual API is just a simple <code>Deployment</code> object:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-api
  labels:
    app: myapp-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp-api
  template:
    metadata:
      labels:
        app: myapp-api
    spec:
      imagePullSecrets:
      - name: mysecret
      containers:
      - name: myapp-api
        image: myregistry/myapp:2.0
        ports:
        - containerPort: 80
        volumeMounts:
        - name: config
          mountPath: "app/config.yaml"
          subPath: config.yaml
      volumes:
      - name: config
        configMap:
          name: myapp-api-config
</code></pre>
| superstator | <p>Prometheus Operator developers are kindly working (as of Jan 2023) on a generic ScrapeConfig CRD that is designed to solve exactly the use case you describe: <a href="https://github.com/prometheus-operator/prometheus-operator/issues/2787" rel="nofollow noreferrer">https://github.com/prometheus-operator/prometheus-operator/issues/2787</a></p>
<p>In the meantime, you can use the "<a href="https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/additional-scrape-config.md" rel="nofollow noreferrer">additional scrape config</a>" facility of Prometheus Operator to setup a custom scrape target.</p>
<p>The idea is that the configured scrape target will be hit only once per scrape period and the service DNS will load-balance the request to one of the N pods behind the service, thus avoiding duplicate metrics.</p>
<p>In particular, you can override the <code>kube-prometheus-stack</code> Helm values as follows:</p>
<pre><code>prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - job_name: 'myapp-api-aggregates'
        metrics_path: '/api/metrics'
        scheme: 'http'
        static_configs:
          - targets: ['myapp-api:80']
</code></pre>
| mrucci |
<p>In my <a href="https://docs.ovh.com/gb/en/kubernetes/" rel="nofollow noreferrer">OVH Managed Kubernetes</a> cluster I'm trying to expose a NodePort service, but it looks like the port is not reachable via <code><node-ip>:<node-port></code>.</p>
<p>I followed this tutorial: <a href="https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/#creating-a-service-for-an-application-running-in-two-pods" rel="nofollow noreferrer">Creating a service for an application running in two pods</a>. I can successfully access the service on <code>localhost:<target-port></code> along with <code>kubectl port-forward</code>, but it doesn't work on <code><node-ip>:<node-port></code> (request timeout) (though it works from inside the cluster).</p>
<p>The tutorial says that I may have to "create a firewall rule that allows TCP traffic on your node port" but I can't figure out how to do that.</p>
<p>The security group seems to allow any traffic:</p>
<p><a href="https://i.stack.imgur.com/1AbNG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1AbNG.png" alt="enter image description here" /></a></p>
| sdabet | <p>The solution is to NOT enable "Private network attached" ("réseau privé attaché") when you create the managed Kubernetes cluster.</p>
<p>If you already paid for your nodes or configured DNS or anything, you can select your current Kubernetes cluster, select "Reset your cluster" ("réinitialiser votre cluster"), then "Keep and reinstall nodes" ("conserver et réinstaller les noeuds"), and at the "Private network attached" ("Réseau privé attaché") option choose "None (public IPs)" ("Aucun (IPs publiques)").</p>
<p>I faced the same use case and problem, and after some research and experimentation, got the hint from the small comment on this dialog box:</p>
<blockquote>
<p>By default, your worker nodes have a public IPv4. If you choose a private network, the public IPs of these nodes will be used exclusively for administration/linking to the Kubernetes control plane, and your nodes will be assigned an IP on the vLAN of the private network you have chosen</p>
</blockquote>
<p>Now I have my Traefik ingress running as a DaemonSet using <code>hostNetwork</code>, and every node is reachable directly even on low ports (as you saw yourself, the default security group is open).</p>
| Alex F |
<p>I'm currently building an API for Firebase Admin SDK and I want to store the Admin SDK credential file as a secret in Kubernetes.</p>
<p>This is an example from google on how to use the credential file:</p>
<pre><code>var admin = require("firebase-admin");
var serviceAccount = require("path/to/serviceAccountKey.json");
admin.initializeApp({
credential: admin.credential.cert(serviceAccount),
databaseURL: "https://test.firebaseio.com"
});
</code></pre>
<p>The credentials are in serviceAccountKey.json.</p>
<p>this is what the content of the file looks like:</p>
<pre><code>{
"type": "service_account",
"project_id": "test",
"private_key_id": "3455dj555599993n5d425j878999339393po6",
"private_key": "-----BEGIN PRIVATE KEY-----\lkjsfdjlsjfsjflksjfklsjkljklfsjfksjkdjskljflk;sjflskjfklsjdljhijshdkjfhsjfhjsb2223b3==\n-----END PRIVATE KEY-----\n",
"client_email": "firebase-adminsdk@test.iam.gserviceaccount.com",
"client_id": "123334444555556665478884",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/firebase-adminsdk%40test.iam.gserviceaccount.com"
}
</code></pre>
<p>I already have this file I use for my other secrets:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: paisecret
type: Opaque
stringData:
MONGODB_PASSWORD: "sjldkjsjdfklsl"
MONGODB_USERNAME: "prod_user"
MONGODB_HOST: "test.azure.mongodb.net"
</code></pre>
<p>I'd like to add <code>serviceAccountKey.json</code> or its content to the secret file above and, if possible, access it within the API like this: <code>process.env.FIREBASE_ADMIN</code></p>
| capiono | <p>If the question is just how to include it as a string in the Secret, you could simply add it as a multiline string.</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: paisecret
type: Opaque
stringData:
MONGODB_PASSWORD: "sjldkjsjdfklsl"
MONGODB_USERNAME: "prod_user"
MONGODB_HOST: "test.azure.mongodb.net"
FIREBASE_ADMIN: >
{
"type": "service_account",
"project_id": "test",
"private_key_id": "3455dj555599993n5d425j878999339393po6",
"private_key": "-----BEGIN PRIVATE KEY-----\lkjsfdjlsjfsjflksjfklsjkljklfsjfksjkdjskljflk;sjflskjfklsjdljhijshdkjfhsjfhjsb2223b3==\n-----END PRIVATE KEY-----\n",
"client_email": "firebase-adminsdk@test.iam.gserviceaccount.com",
"client_id": "123334444555556665478884",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/firebase-adminsdk%40test.iam.gserviceaccount.com"
}
</code></pre>
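<p>Alternatively, if you would rather not paste the JSON into the manifest by hand, you can load the file straight into a Secret key with kubectl; the secret name and file path below are only examples:</p>
<pre><code>kubectl create secret generic firebase-admin-secret \
  --from-file=FIREBASE_ADMIN=./serviceAccountKey.json
</code></pre>
<p>Either way, to see it as <code>process.env.FIREBASE_ADMIN</code> you still need to expose the key to the container as an environment variable (via <code>env[].valueFrom.secretKeyRef</code> in the pod spec), and your Node code would then have to <code>JSON.parse</code> the value before handing it to <code>admin.credential.cert</code>, since the env var is just a string.</p>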
| Karl |
<p>Is it possible to have an array as an environment variable in a deployment?</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: array-deployment
namespace: array-deployment
spec:
replicas: 1
selector:
matchLabels:
app: array-deployment
template:
metadata:
labels:
app: array-deployment
spec:
containers:
- name: array-deployment
image: array-deployment:beta
env:
- name: ENV_PROJECTS
value: "project1"
ports:
- containerPort: 80
resources: {}
</code></pre>
<p>For example, I want to have an array of projects for <code>ENV_PROJECTS</code>.</p>
| yesvladdy | <p>Environment variables are plain strings and do not support arrays as input</p>
<p>In order to achieve what you want, you would want to pass the values as a comma separated list. (You might want to use some other separator if your data contains <code>,</code>)</p>
<p>so your yaml manifest would become</p>
<pre><code> - name: ENV_PROJECTS
value: "project1,project2"
</code></pre>
<p>This assumes that your code in the image <code>array-deployment:beta</code> supports reading comma-separated values from the env var.</p>
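<p>As a rough sketch of that assumption: if, say, the container's entrypoint were a POSIX shell script, it could split the value back into separate items like this (the variable and loop body are purely illustrative):</p>
<pre><code>#!/bin/sh
# ENV_PROJECTS="project1,project2" -> split on commas into positional parameters
IFS=','
set -- $ENV_PROJECTS
for project in "$@"; do
  echo "configuring $project"
done
</code></pre>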
| codebreach |
<p>Background : I am trying to learn and experiment a bit on docker and kubernetes in a "development/localhost" environment, that I could later replicate "for real" on some Cloud. But I'm running low on everything (disk capacity, memory, etc.) on my laptop. So I figured out "why not develop from the cloud ?"</p>
<p>I know AWS has some Kubernetes service, but if my understanding is correct, this is mostly to deploy already well configured stacks, and it is not very suited for the development of the stack configuration itself. </p>
<p>After searching a bit, I found out about Minikube, that helps us experiment our configs by running kubernetes deployments on a single machine. <strong>I'd like to setup a kubernetes + Minikube (or equivalent) development environment from an EC2 instance (ideally running Amazon Linux 2 OS).</strong></p>
<p>I'm having a hard time figuring out </p>
<ul>
<li><strong>Is it actually possible to setup Minikube on EC2 ?</strong></li>
<li>(If yes), how do I do it ? I tried following <a href="https://stackoverflow.com/a/46756411/2832282">this answer</a> but I'm getting stuck at registering the Virtualbox Repo and downloading Virtualbox command line tools</li>
</ul>
| Cyril Duchon-Doris | <p>Heres how to do it</p>
<p>Start an EC2 instance with 8GB of RAM and a public IP, and ensure you can SSH to this box in the normal ways. Ensure it's an Ubuntu instance (I'm using 16.04).</p>
<p>Once SSH'd into the instance, run the following to update and install Docker:</p>
<pre><code>sudo -i
apt-get update -y && apt-get install docker.io
</code></pre>
<p>Install minikube</p>
<pre><code>curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
</code></pre>
<p>Install the kube CLI:</p>
<pre><code>curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
</code></pre>
<p>Now verify the version just to make sure you can see it:</p>
<pre><code>/usr/local/bin/minikube version
</code></pre>
<p>Add autocompletion to the current shell with</p>
<pre><code>source <(kubectl completion bash)
</code></pre>
<p>Start the cluster with this (note the <code>--vm-driver=none</code> flag):</p>
<pre><code>/usr/local/bin/minikube start --vm-driver=none
</code></pre>
<p>Check it's up and running with this:</p>
<pre><code>/usr/local/bin/minikube status
</code></pre>
<p>Right, that should give you a basic cluster running with no extra nodes :)</p>
<p>If you want a nice dashboard, do the following. (I am using Windows here, making use of WSL on Windows 10; you can do this on Mac or Linux if you like. The steps are slightly different, but as long as you can follow basic steps like setting variables you will be cool.)</p>
<p>In order to see the GUI on your local box you are going to need to run a dashboard, and to do other useful stuff, run kubectl locally.</p>
<p><a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/" rel="nofollow noreferrer">Please follow this to install kubectl locally</a></p>
<p>On Windows you can use Chocolatey like so:</p>
<pre><code>choco install kubernetes-cli
</code></pre>
<p>Now download your admin.conf file from the EC2 instance using scp; it is located in /etc/kubernetes.</p>
<p>Now set a local variable called <code>KUBECONFIG</code> and point it to the file you just downloaded.</p>
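<p>From a bash shell that would look roughly like this (the key file, hostname and paths are placeholders for your own):</p>
<pre><code>scp -i ~/.ssh/keyfile.pem ubuntu@ec2-your-instance.eu-west-1.compute.amazonaws.com:/etc/kubernetes/admin.conf ~/admin.conf
export KUBECONFIG=~/admin.conf
kubectl cluster-info
</code></pre>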
<p>Go on to the EC2 instance and use this to install a dashboard:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard-arm.yaml
</code></pre>
<p>This dashboard is a dev dashboard; do not use this in production :)</p>
<p>Run the following command to find out what IP address the dashboard is running on:</p>
<pre><code>/usr/local/bin/kubectl get svc --namespace kube-system
</code></pre>
<p>The output should look a bit like this:</p>
<pre><code>root@ip-172-31-39-236:~# /usr/local/bin/kubectl get svc --namespace kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 49m
kubernetes-dashboard NodePort 10.109.248.81 <none> 80:30000/TCP 49m
</code></pre>
<p>Now run this on your local box to tunnel to the dashboard from the local machine:</p>
<pre><code>ssh -i ~/.ssh/keyfile.pem -L 8080:10.109.248.81:80 ubuntu@ec2-i-changed-this-for-this-post.eu-west-1.compute.amazonaws.com
</code></pre>
<p>Now open a web browser at:</p>
<pre><code>http://localhost:8080
</code></pre>
<p>and you should now be able to see the dashboard, which looks like this:</p>
<p><a href="https://i.stack.imgur.com/L0QJZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L0QJZ.png" alt="enter image description here"></a></p>
<p>Sorry the post is so long, but it's pretty involved. Also please note this is really only a dev machine; if you are going to need a prod instance you need to do this with better security and probably not run stuff as root :)</p>
<p>One other thing: you may note the local kubectl isn't being used in this guide. You can use it to hit the remote API if you run (locally)</p>
<pre><code>kubectl proxy
</code></pre>
<p>There is a guide on this on the Kubernetes homepage <a href="https://kubernetes.io/docs/tasks/access-kubernetes-api/http-proxy-access-api/" rel="nofollow noreferrer">here</a>.
Also note the admin.conf probably has localhost as the server address; it needs to be the address of the EC2 instance, and you'll need to make sure the port is accessible from your IP in the security group for the EC2 instance.</p>
<p>If you curl or browse to <code>http://localhost:8001/api</code> you should see this or something like it :)</p>
<pre><code>{
"kind": "APIVersions",
"versions": [
"v1"
],
"serverAddressByClientCIDRs": [
{
"clientCIDR": "0.0.0.0/0",
"serverAddress": "192.168.4.60:6443"
}
]
}
</code></pre>
| krystan honour |
<p>Within my Kubernetes cluster, I manage four deployments:</p>
<ul>
<li>Laravel Queue Workers: Labelled as laravel-kube-worker.</li>
<li>Django Queue Workers: Labelled as django-kube-worker.</li>
<li>Laravel Redis Instance: Labelled as laravel-kube-worker-redis.</li>
<li>Django Redis Instance: Labelled as django-kube-worker-redis.</li>
</ul>
<p>I aim to connect the Laravel worker pods to its corresponding Redis instance to process the queued jobs stored there. Similarly, I'm attempting to do the same for the Django queue worker pods and their Redis counterpart.</p>
<p>Individually, both Laravel and Django deployments work perfectly. Laravel fills and processes its queue seamlessly, and so does Django. However, an issue arises when both sets of deployments run concurrently: the worker pods intermittently fail to connect to their respective Redis pods. This inconsistency is puzzling.</p>
<p>Here is an example of me occasionally establishing a connection successfully; it's not consistent or reliable (over a time span of around 15 seconds):
<a href="https://i.stack.imgur.com/0QFhP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0QFhP.png" alt="enter image description here" /></a></p>
<p>I've also noticed no conflicting behaviors between the two Redis configurations. Below are the configuration files for the Laravel Redis deployment. Anything enclosed in ((....)) represents equivalent configurations for the Django Redis deployment.</p>
<p>Laravel redis deployment.yml:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: laravel-kube-worker-redis (( django-kube-worker-redis ))
labels:
tier: backend
layer: redis
spec:
ports:
- port: 6379 (( 6380 ))
targetPort: 6379 (( 6380 ))
nodePort: 32379 (( 32380 ))
protocol: TCP
selector:
tier: backend
layer: redis
type: NodePort
</code></pre>
<p>Laravel redis persistent-volume-claim.yml</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: laravel-kube-worker-redis (( django-kube-worker-redis ))
spec:
storageClassName: do-block-storage
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
</code></pre>
<p>Laravel redis statefulset.yml</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: laravel-kube-worker-redis (( django-kube-worker-redis ))
labels:
tier: backend
layer: redis
spec:
serviceName: laravel-kube-worker-redis (( django-kube-worker-redis ))
selector:
matchLabels:
tier: backend
layer: redis
replicas: 1
template:
metadata:
labels:
tier: backend
layer: redis
spec:
containers:
- name: redis
image: redis:alpine
command: ["redis-server", "--appendonly", "yes", "--bind", "0.0.0.0"]
ports:
- containerPort: 6379 (( 6380 ))
name: web
volumeMounts:
- name: redis-aof
mountPath: /data
resources:
requests:
memory: "1048Mi" # Set the minimum memory request
limits:
memory: "4096Mi" # Set the maximum memory limit
volumes:
- name: redis-aof
persistentVolumeClaim:
claimName: laravel-kube-worker-redis (( django-kube-worker-redis ))
</code></pre>
<p>It's maybe worth noting that all labels in the Django Redis equivalent have different key: values.</p>
<p>I also want to point out that for connectivity, inside the Laravel worker pods, I utilize laravel-kube-worker-redis:6379 to connect to the Redis environment.</p>
<p>Furthermore, upon inspecting the pods, services, and statefulsets statuses, everything seems in order. There are no forced restarts for the Redis pods, and all configurations match what's described in my examples.</p>
| Delano van der Waal | <pre><code>apiVersion: v1
kind: Service
....
# HERE IS YOUR PROBLEM
selector:
tier: backend
layer: redis
type: NodePort
</code></pre>
<p>Both of your Services are routing to <em>both</em> Redis deployments. A Service uses the labels on Pods to determine whether it should route to them. You are starting both redis deployments with the same labels and also configuring each Service with the same selector...</p>
<p>In short you need:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: laravel-kube-worker-redis
spec:
ports:
- port: 6379
targetPort: 6379
nodePort: 32379
protocol: TCP
selector:
tier: backend
layer: laravel-redis # more specific!
type: NodePort
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: laravel-kube-worker-redis
labels:
tier: backend
layer: laravel-redis # more specific!
spec:
serviceName: laravel-kube-worker-redis
selector:
matchLabels:
tier: backend
layer: laravel-redis # more specific!
</code></pre>
<p>See also <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/</a> for more context</p>
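<p>After splitting the labels like this, a quick way to verify that each Service now targets only its own pod is to look at its endpoints and at the pods matched by the more specific selector:</p>
<pre><code>kubectl get endpoints laravel-kube-worker-redis django-kube-worker-redis
kubectl get pods -l tier=backend,layer=laravel-redis
</code></pre>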
| Karl |
<p>I have set up an Elasticsearch cluster on Kubernetes in GCP.</p>
<pre><code>kubectl get svc
</code></pre>
<p>gives me</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ipgram-elasticsearch-elasticsearch-svc ClusterIP 10.27.247.26 <none> 9200/TCP,9300/TCP 2h
</code></pre>
<p>How do I set an external IP for this?</p>
| Harshdeep Kanhai | <p>You have to convert the service to type <code>LoadBalancer</code>, which will assign an external IP to the load balancer, or to <code>NodePort</code> and then use the node's IP.</p>
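<p>For example, a minimal way to switch the existing service to a cloud load balancer (service name taken from your output) would be:</p>
<pre><code>kubectl patch svc ipgram-elasticsearch-elasticsearch-svc -p '{"spec": {"type": "LoadBalancer"}}'
kubectl get svc ipgram-elasticsearch-elasticsearch-svc --watch   # wait until EXTERNAL-IP is populated
</code></pre>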
| codebreach |
<p>I have a productionized Django backend server running on Kubernetes (Deployment/Service/Ingress) on GCP.
My Django is configured with something like</p>
<pre><code>ALLOWED_HOSTS = [BACKEND_URL,INGRESS_IP,THIS_POD_IP,HOST_IP]
</code></pre>
<p>Everything is working as expected.</p>
<hr />
<p>However, my backend server logs intermittent errors like these (about 7 per day)</p>
<pre><code>DisallowedHost: Invalid HTTP_HOST header: 'www.google.com'. You may need to add 'www.google.com' to ALLOWED_HOSTS.
DisallowedHost: Invalid HTTP_HOST header: 'xxnet-f23.appspot.com'. You may need to add 'xxnet-f23.appspot.com' to ALLOWED_HOSTS.
DisallowedHost: Invalid HTTP_HOST header: 'xxnet-301.appspot.com'. You may need to add 'xxnet-301.appspot.com' to ALLOWED_HOSTS.
DisallowedHost: Invalid HTTP_HOST header: 'www.google.com'. You may need to add 'www.google.com' to ALLOWED_HOSTS.
DisallowedHost: Invalid HTTP_HOST header: 'narutobm1234.appspot.com'. You may need to add 'narutobm1234.appspot.com' to ALLOWED_HOSTS.
DisallowedHost: Invalid HTTP_HOST header: 'z-h-e-n-116.appspot.com'. You may need to add 'z-h-e-n-116.appspot.com' to ALLOWED_HOSTS.
DisallowedHost: Invalid HTTP_HOST header: 'www.google.com'. You may need to add 'www.google.com' to ALLOWED_HOSTS.
DisallowedHost: Invalid HTTP_HOST header: 'xxnet-131318.appspot.com'. You may need to add 'xxnet-131318.appspot.com' to ALLOWED_HOSTS.
DisallowedHost: Invalid HTTP_HOST header: 'www.google.com'. You may need to add 'www.google.com' to ALLOWED_HOSTS.
DisallowedHost: Invalid HTTP_HOST header: 'stoked-dominion-123514.appspot.com'. You may need to add 'stoked-dominion-123514.appspot.com' to ALLOWED_HOSTS.
</code></pre>
<hr />
<p>My primary question is: <strong>Why - what are all of these hosts?</strong>.</p>
<p>I certainly don't want to allow those hosts without understanding their purpose.</p>
<p>Bonus question: What's the best way to silence unwanted hosts within my techstack?</p>
| Roman | <blockquote>
<p>My primary question is: Why - what are all of these hosts?.</p>
</blockquote>
<p>Some of them are web crawlers that gather information for various purposes. For example, the <code>www.google.com</code> address is most likely the web crawlers that populate the search engine databases for Google search, etcetera.</p>
<p>Google probably got to your back-end site by accident by following a chain of links from some other page that is searchable; e.g. your front end website. You could try to identify that path. I believe there is also a page where you can request the removal of URLs from search ... though I'm not sure how effective that would be in quieting your logs.</p>
<p>Others may be robots probing your site for vulnerabilities.</p>
<blockquote>
<p>I certainly don't want to allow those hosts without understanding their purpose.</p>
</blockquote>
<p>Well, you can never entirely know their purpose. And in some cases, you may never be able to find out.</p>
<blockquote>
<p>Bonus question: What's the best way to silence unwanted hosts within my techstack?</p>
</blockquote>
<p>One way is to simply block access using a manually managed blacklist or whitelist.</p>
<p>A second way is to have your back-end publish a "/robots.txt" document; see <a href="https://www.robotstxt.org/robotstxt.html" rel="nofollow noreferrer">About /robots.txt</a>. Note that not all crawlers will respect a "robots.txt" page, but the reputable ones will; see <a href="https://developers.google.com/search/docs/crawling-indexing/robots/robots_txt" rel="nofollow noreferrer">How Google interprets the robots.txt specification</a>.</p>
<p>Note that it is easy to craft a "/robots.txt" that says "nobody crawl this site"; for example, a file containing just <code>User-agent: *</code> and <code>Disallow: /</code> asks all compliant crawlers to stay away from the entire site.</p>
<p>Other ways would include putting your backend server behind a firewall or giving it a private IP address. (It seems a bit of an odd decision to expose your back-end services to the internet.)</p>
<p>Finally, the sites you are seeing are already being blocked, and Django is telling you that. Perhaps what you should be asking is how to mute the log messages for these events.</p>
| Stephen C |
<p>I have created kubernetes cluster on digitalocean. and I have deployed k6 as a job on kubernetes cluster.</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: benchmark
spec:
template:
spec:
containers:
- name: benchmark
image: loadimpact/k6:0.29.0
command: ["k6", "run", "--vus", "2", "--duration", "5m", "--out", "json=./test.json", "/etc/k6-config/script.js"]
volumeMounts:
- name: config-volume
mountPath: /etc/k6-config
restartPolicy: Never
volumes:
- name: config-volume
configMap:
name: k6-config
</code></pre>
<p>This is how my k6-job.yaml file looks. After deploying it to the Kubernetes cluster I checked the pod logs, and they show a permission denied error:
<code>level=error msg="open ./test.json: permission denied"</code>
How do I solve this issue?</p>
| Devi varalakshmi Tangilla | <p>The k6 Docker image runs as an unprivileged user, but unfortunately the default work directory is set to <code>/</code>, so it has no permission to write there.</p>
<p>To work around this, consider changing the JSON output path to <code>/home/k6/test.json</code>, i.e.:</p>
<pre><code>command: ["k6", "run", "--vus", "2", "--duration", "5m", "--out", "json=/home/k6/test.json", "/etc/k6-config/script.js"]
</code></pre>
<p>I'm one of the maintainers on the team, so will propose a change to the Dockerfile to set the <code>WORKDIR</code> to <code>/home/k6</code> to make the default behavior a bit more intuitive.</p>
| imiric |
<p>I have deployed an application developed with Spring Boot that uses Hazelcast. The application compiles fine, but when it is deployed on Kubernetes (Docker-simulated Kubernetes) I get the error below.
There is no compilation error, only a runtime error.</p>
<pre><code> Node.log(69) - Node creation failed
java.lang.NoClassDefFoundError: com/hazelcast/core/DuplicateInstanceNameException
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at com.hazelcast.internal.util.ServiceLoader.selectClassLoaders(ServiceLoader.java:177)
at com.hazelcast.internal.util.ServiceLoader.getServiceDefinitions(ServiceLoader.java:85)
at com.hazelcast.internal.util.ServiceLoader.classIterator(ServiceLoader.java:80)
at com.hazelcast.instance.impl.NodeExtensionFactory.create(NodeExtensionFactory.java:69)
at com.hazelcast.instance.impl.DefaultNodeContext.createNodeExtension(DefaultNodeContext.java:61)
at com.hazelcast.instance.impl.Node.<init>(Node.java:232)
at com.hazelcast.instance.impl.HazelcastInstanceImpl.createNode(HazelcastInstanceImpl.java:148)
at com.hazelcast.instance.impl.HazelcastInstanceImpl.<init>(HazelcastInstanceImpl.java:117)
at com.hazelcast.instance.impl.HazelcastInstanceFactory.constructHazelcastInstance(HazelcastInstanceFactory.java:211)
at com.hazelcast.instance.impl.HazelcastInstanceFactory.newHazelcastInstance(HazelcastInstanceFactory.java:190)
at com.hazelcast.instance.impl.HazelcastInstanceFactory.newHazelcastInstance(HazelcastInstanceFactory.java:128)
at com.hazelcast.core.Hazelcast.newHazelcastInstance(Hazelcast.java:91)
at org.springframework.boot.autoconfigure.hazelcast.HazelcastAutoConfiguration$HazelcastConfigFileConfiguration.hazelcastInstance(HazelcastAutoConfiguration.java:67)
at org.springframework.boot.autoconfigure.hazelcast.HazelcastAutoConfiguration$HazelcastConfigFileConfiguration$$EnhancerBySpringCGLIB$$69b9e8c2.CGLIB$hazelcastInstance$0(<generated>)
at org.springframework.boot.autoconfigure.hazelcast.HazelcastAutoConfiguration$HazelcastConfigFileConfiguration$$EnhancerBySpringCGLIB$$69b9e8c2$$FastClassBySpringCGLIB$$fc9ed413.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:228)
at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:356)
at org.springframework.boot.autoconfigure.hazelcast.HazelcastAutoConfiguration$HazelcastConfigFileConfiguration$$EnhancerBySpringCGLIB$$69b9e8c2.hazelcastInstance(<generated>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:162)
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:588)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1123)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1018)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:510)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:482)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:306)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:302)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)
at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:207)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.findAutowireCandidates(DefaultListableBeanFactory.java:1214)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1054)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1019)
at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:835)
at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:741)
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:467)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1123)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1018)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:510)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:482)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:306)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:302)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)
at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:207)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.findAutowireCandidates(DefaultListableBeanFactory.java:1214)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1054)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1019)
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:566)
at org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:88)
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:349)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1214)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:543)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:482)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:306)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:302)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:296)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:381)
at org.springframework.cache.annotation.ProxyCachingConfiguration$$EnhancerBySpringCGLIB$$85a1f3d9.cacheInterceptor(<generated>)
at org.springframework.cache.annotation.ProxyCachingConfiguration.cacheAdvisor(ProxyCachingConfiguration.java:46)
at org.springframework.cache.annotation.ProxyCachingConfiguration$$EnhancerBySpringCGLIB$$85a1f3d9.CGLIB$cacheAdvisor$0(<generated>)
at org.springframework.cache.annotation.ProxyCachingConfiguration$$EnhancerBySpringCGLIB$$85a1f3d9$$FastClassBySpringCGLIB$$abbff18f.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:228)
at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:356)
at org.springframework.cache.annotation.ProxyCachingConfiguration$$EnhancerBySpringCGLIB$$85a1f3d9.cacheAdvisor(<generated>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:162)
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:588)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1123)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1018)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:510)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:482)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:306)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:302)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)
at org.springframework.aop.framework.autoproxy.BeanFactoryAdvisorRetrievalHelper.findAdvisorBeans(BeanFactoryAdvisorRetrievalHelper.java:92)
at org.springframework.aop.framework.autoproxy.AbstractAdvisorAutoProxyCreator.findCandidateAdvisors(AbstractAdvisorAutoProxyCreator.java:101)
at org.springframework.aop.aspectj.annotation.AnnotationAwareAspectJAutoProxyCreator.findCandidateAdvisors(AnnotationAwareAspectJAutoProxyCreator.java:85)
at org.springframework.aop.aspectj.autoproxy.AspectJAwareAdvisorAutoProxyCreator.shouldSkip(AspectJAwareAdvisorAutoProxyCreator.java:103)
at org.springframework.aop.framework.autoproxy.AbstractAutoProxyCreator.postProcessBeforeInstantiation(AbstractAutoProxyCreator.java:249)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsBeforeInstantiation(AbstractAutowireCapableBeanFactory.java:988)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.resolveBeforeInstantiation(AbstractAutowireCapableBeanFactory.java:959)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:472)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:306)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:302)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:372)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1123)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1018)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:510)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:482)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:306)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:302)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:351)
at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:108)
at org.springframework.beans.factory.support.ConstructorResolver.resolveConstructorArguments(ConstructorResolver.java:634)
at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:145)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1143)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1046)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:510)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:482)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:306)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:302)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)
at org.springframework.aop.framework.autoproxy.BeanFactoryAdvisorRetrievalHelper.findAdvisorBeans(BeanFactoryAdvisorRetrievalHelper.java:92)
at org.springframework.aop.framework.autoproxy.AbstractAdvisorAutoProxyCreator.findCandidateAdvisors(AbstractAdvisorAutoProxyCreator.java:101)
at org.springframework.aop.aspectj.annotation.AnnotationAwareAspectJAutoProxyCreator.findCandidateAdvisors(AnnotationAwareAspectJAutoProxyCreator.java:85)
at org.springframework.aop.aspectj.autoproxy.AspectJAwareAdvisorAutoProxyCreator.shouldSkip(AspectJAwareAdvisorAutoProxyCreator.java:103)
at org.springframework.aop.framework.autoproxy.AbstractAutoProxyCreator.postProcessBeforeInstantiation(AbstractAutoProxyCreator.java:249)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsBeforeInstantiation(AbstractAutowireCapableBeanFactory.java:988)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.resolveBeforeInstantiation(AbstractAutowireCapableBeanFactory.java:959)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:472)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:306)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:302)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)
at org.springframework.context.support.PostProcessorRegistrationDelegate.registerBeanPostProcessors(PostProcessorRegistrationDelegate.java:240)
at org.springframework.context.support.AbstractApplicationContext.registerBeanPostProcessors(AbstractApplicationContext.java:697)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:526)
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:122)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:759)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:369)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:313)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1185)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1174)
at com.lufthansa.cobra.Application.main(Application.java:36)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:108)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:58)
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:88)
Caused by: java.lang.ClassNotFoundException: com.hazelcast.core.DuplicateInstanceNameException
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at org.springframework.boot.loader.LaunchedURLClassLoader.loadClass(LaunchedURLClassLoader.java:151)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 150 common frames omitted
</code></pre>
<p>Here is the hazelcast.xml from which I am trying to read the configuration:</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<hazelcast
xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.7.xsd"
xmlns="http://www.hazelcast.com/schema/config" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<properties>
<property name="hazelcast.logging.type">slf4j</property>
<property name="hazelcast.partition.count">29</property>
<property name="hazelcast.discovery.enabled">true</property>
</properties>
<network>
<!-- <port auto-increment="false">9290</port> -->
<join>
<multicast enabled="false" />
<tcp-ip enabled="false">
<interface>${cache.hazelcast.cluster.members}</interface>
</tcp-ip>
<aws enabled="false" />
<discovery-strategies>
<discovery-strategy enabled="true"
class="com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategy">
<!--
<properties>
<property name="service-dns">cobrapp.default.endpoints.cluster.local</property>
<property name="service-dns-timeout">10</property>
</properties>
-->
</discovery-strategy>
</discovery-strategies>
</join>
</network>
</hazelcast>
</code></pre>
<p>If anyone has any idea what I am doing wrong and where, I would appreciate the help.</p>
| nee nee | <p>I think that you have a problem with your project's Hazelcast dependencies.</p>
<p>According to the project history on GitHub (<a href="https://github.com/hazelcast/hazelcast" rel="nofollow noreferrer">https://github.com/hazelcast/hazelcast</a>), the class <code>com.hazelcast.core.DuplicateInstanceNameException</code> was dropped in Hazelcast 4.0. So, I suspect that you have a mixture of Hazelcast JARs, with some older than 4.0 and others at 4.0 or later.</p>
<p>Check your project's dependencies.</p>
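<p>For example, if this is a Maven build, the following should show which Hazelcast artifacts and versions actually end up on the classpath, so you can spot a 4.x jar mixed in with pre-4.0 ones (the Gradle line assumes the standard wrapper):</p>
<pre><code>mvn dependency:tree -Dincludes=com.hazelcast
# or, for Gradle builds:
./gradlew dependencies --configuration runtimeClasspath | grep -i hazelcast
</code></pre>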
| Stephen C |
<p>From a certain PVC, I'm trying to get the volume id from the metadata of the PV associated with the PVC using <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">Kubernetes Python Api</a>.</p>
<p>I'm able to describe the PVC with the <code>read_namespaced_persistent_volume_claim</code> function and obtain the PV name <code>spec.volume_name</code>. Now I need to go deeper and get the <code>Source.VolumeHandle</code> attribute from the PV metadata to get the EBS Volume Id and obtain the volume status from AWS, but I can't find a method to describe PVs in the Python API.</p>
<p>Any help?</p>
<p>Thanks</p>
| falberto | <p>While <code>PersistentVolumeClaims</code> are namespaced, <code>PersistentVolumes</code> are not. Looking at the available methods in the V1 API...</p>
<pre><code>>>> v1 = client.CoreV1Api()
>>> print('\n'.join([x for x in dir(v1) if x.startswith('read') and 'volume' in x]))
read_namespaced_persistent_volume_claim
read_namespaced_persistent_volume_claim_status
read_namespaced_persistent_volume_claim_status_with_http_info
read_namespaced_persistent_volume_claim_with_http_info
read_persistent_volume
read_persistent_volume_status
read_persistent_volume_status_with_http_info
read_persistent_volume_with_http_info
</code></pre>
<p>...it looks like <code>read_persistent_volume</code> is probably what we want. Running <code>help(v1.read_persistent_volume)</code> gives us:</p>
<pre><code>read_persistent_volume(name, **kwargs) method of kubernetes.client.api.core_v1_api.CoreV1Api instance
read_persistent_volume
read the specified PersistentVolume
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.read_persistent_volume(name, async_req=True)
>>> result = thread.get()
:param async_req bool: execute request asynchronously
:param str name: name of the PersistentVolume (required)
:param str pretty: If 'true', then the output is pretty printed.
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: V1PersistentVolume
If the method is called asynchronously,
returns the request thread.
</code></pre>
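<p>On the object returned by <code>read_persistent_volume</code>, the volume id lives under <code>spec.csi.volume_handle</code> for CSI-provisioned volumes or <code>spec.aws_elastic_block_store.volume_id</code> for legacy in-tree AWS EBS volumes. As a sanity check outside Python, the same fields can be read with kubectl (replace the placeholder with the name from <code>spec.volume_name</code>):</p>
<pre><code># CSI-provisioned volume
kubectl get pv <pv-name> -o jsonpath='{.spec.csi.volumeHandle}'
# legacy in-tree AWS EBS volume
kubectl get pv <pv-name> -o jsonpath='{.spec.awsElasticBlockStore.volumeID}'
</code></pre>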
| larsks |
<p>I've deployed an <a href="https://istio.io/" rel="nofollow noreferrer">Istio</a> VirtualService:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: helloweb
spec:
hosts:
- 'helloweb.dev'
gateways:
- gateway
http:
- route:
- destination:
host: helloweb.default.svc.cluster.local
port:
number: 3000
</code></pre>
<p>and would like to display it on the screen like this:</p>
<pre><code>kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
greeter-service ClusterIP 10.233.35.214 <none> 3000/TCP 4h38m
helloweb ClusterIP 10.233.8.173 <none> 3000/TCP 4h38m
kubernetes ClusterIP 10.233.0.1 <none> 443/TCP 8h
</code></pre>
<p>How do I display the VirtualServices?</p>
| softshipper | <p>Not sure if I understood your question correctly but if you want to just list virtual services you can do this:</p>
<pre><code>kubectl get virtualservices
</code></pre>
<p>VirtualService is just a typical CRD.</p>
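<p>A couple of handy variations, since the resource is namespaced:</p>
<pre><code>kubectl get virtualservices -A             # list VirtualServices across all namespaces
kubectl describe virtualservice helloweb   # show the full spec of the one from the question
</code></pre>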
| tefozi |
<p>How do I convert the kubectl command below into an Ansible role/command to run on a cluster?</p>
<pre><code>kubectl get openshiftapiserver \
-o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'
</code></pre>
<p>The Ansible playbook format below doesn't seem to be the correct one:</p>
<pre><code>- name: check apiserver
command: kubectl get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'
</code></pre>
| devops | <p>The task you've shown seems to do exactly what you want. For example, consider the following playbook:</p>
<pre><code>- hosts: localhost
tasks:
- command: >-
kubectl get openshiftapiserver
-o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'
register: res
- debug:
var: res.stdout
</code></pre>
<p>If you run this, it produces the output:</p>
<pre><code>TASK [debug] **********************************************************************************
ok: [localhost] => {
"res.stdout": "EncryptionDisabled\nEncryption is not enabled"
}
</code></pre>
<p>That's the same output produced by running the <code>kubectl get ...</code> command manually from the command line.</p>
<hr />
<p>You could also parse the response in Ansible to generate slightly nicer output:</p>
<pre><code>- hosts: localhost
tasks:
- command: >-
kubectl get openshiftapiserver
-o=jsonpath='{.items[0].status.conditions[?(@.type=="Encrypted")]}'
register: res
- set_fact:
encrypted: "{{ res.stdout|from_json }}"
- debug:
msg:
- "message: {{ encrypted.message }}"
- "reason: {{ encrypted.reason }}"
</code></pre>
<p>This would produce:</p>
<pre><code>TASK [debug] *********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": [
"message: Encryption is not enabled",
"reason: EncryptionDisabled"
]
}
</code></pre>
| larsks |
<p>With security in mind, I do not want to allow the <code>create</code> verb on <code>Job</code> and <code>CronJob</code> resources because it would allow someone to create a pod (using any image) and expose sensitive information. But I also want to allow the ability to trigger jobs that have already been created on the cluster.</p>
<p>Is there a way to allow the triggering of <code>Jobs</code> and <code>CronJobs</code> in a Kubernetes cluster without assigning the <code>create</code> verb in a <code>Role</code> or <code>ClusterRole</code> RBAC definition?</p>
<p>If not, is there a way to only allow <code>create</code> when the <code>Job</code> or <code>CronJob</code> already exists on the cluster?</p>
<p>I've simply tried the following RBAC definition and was able to create any pod (dangerous) that I wanted.</p>
<pre><code>apiGroups:
- batch
resources:
- cronjobs
- jobs
verbs:
- get
- create
</code></pre>
| greenboi | <ol>
<li><p>You can't "trigger" a Job. A Job is either pending (waiting to run), running, or completed. If it's completed, you can't re-run it; you can only delete and re-create it.</p>
</li>
<li><p>The only way to manually run a CronJob is by...using it as a template to create a Job (<code>kubectl create job --from=cronjob ...</code>).</p>
</li>
</ol>
<p>So in both situations, you need the ability to <code>create</code> a <code>Job</code>.</p>
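<p>For reference, manually kicking off a run of an existing CronJob looks like this (names are placeholders), and it only works if the user is allowed to <code>create</code> Jobs:</p>
<pre><code>kubectl create job my-manual-run --from=cronjob/my-cronjob
</code></pre>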
<p>Lastly:</p>
<ol>
<li>You can't "allow create when the Job or CronJob already exists", because in that case the resource has already been created. There's nothing to create.</li>
</ol>
| larsks |
<p>Consider there is one <code>rollout</code> in a K8s setup and it creates 2 <code>replicaSet</code>s; each replicaSet has a single <code>pod</code>. Each pod includes the same application, which has a <code>volume</code> mounted to a directory with the same value (i.e. /dir/logs/).</p>
<pre><code>Rollout
|_____ReplicaSet1
|_______Pod1
|________Volume (/logs/ mount to /dir/logs/)
|_____ReplicaSet2
|_______Pod2
|________Volume (/logs/ mount to /dir/logs/)
</code></pre>
<p>If the application in <code>Pod1</code> outputs its log to the file</p>
<pre><code> /logs/application.log
</code></pre>
<p>And the application in <code>Pod2</code> has the same definition</p>
<pre><code> /logs/application.log
</code></pre>
<p>Is it going to be a <code>locking</code> issue, so that while the application in <code>pod1</code> writes data to /logs/application.log, the application in <code>pod2</code> can't access the file because it is being written by <code>pod1</code>? How do I configure <code>K8s</code> so that multiple pods won't face such an issue?</p>
| Dreamer | <p>Include the pod name as an environment variable and then use this as part of your log path:</p>
<pre><code>...
env:
- name: PODNAME
valueFrom:
fieldRef:
fieldPath: metadata.name
command:
- sh
- -c
- |
exec myapplication --logfile /logs/$PODNAME.log
...
</code></pre>
| larsks |
<p>I have a config I need to apply from kubectl, but I don't quite understand what it is:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
data:
localHosting.v1: |
host: "localhost:1234"
containerHost: "container-name:5555"
</code></pre>
<p>I can't figure out how to make this configmap using a <code>kubectl create configmap</code> invocation though. I thought that ConfigMaps were just key-value pairs, and the usage of the <code>--from-literal</code> flag seems to expect that, but this config seems to have an additional level of nesting.</p>
<p>I know you could typically create a YAML file or pipe in the whole YAML file using a heredoc, but that's not a very good solution in my situation. Is there a way to craft <code>kubectl create configmap</code> command to do this?</p>
| Stabby | <blockquote>
<p>I thought that ConfigMaps were just key-value pairs...</p>
</blockquote>
<p>They <em>are</em> just key-value pairs. In your example, <code>localHosting.v1</code> is the key, and the string...</p>
<pre><code>host: "localhost:1234"
containerHost: "container-name:5555"
</code></pre>
<p>...is the value. You can create this from the command line by running something like:</p>
<pre><code>kubectl create cm myconfigmap '--from-literal=localHosting.v1=host: "localhost:1234"
containerHost: "container-name:5555
'
</code></pre>
<p>Which results in:</p>
<pre><code>$ kubectl get cm myconfigmap -o yaml
apiVersion: v1
data:
localHosting.v1: |
host: "localhost:1234"
containerHost: "container-name:5555
kind: ConfigMap
metadata:
creationTimestamp: "2023-07-14T02:28:14Z"
name: myconfigmap
namespace: sandbox
resourceVersion: "57813424"
uid: a126c38f-6836-4875-9a1b-7291910b7490
</code></pre>
<p>As with any other shell command, we use single quotes (<code>'</code>) to quote a multi-line string value.</p>
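<p>If the nested snippet already lives in its own file, another option is to let kubectl read it for you; <code>--from-file</code> accepts an explicit key name (the file name here is just an example):</p>
<pre><code>kubectl create cm myconfigmap --from-file=localHosting.v1=./localHosting.yaml
</code></pre>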
| larsks |
<p>I am working on practicing for the CKAD exam and ran into an interesting problem with a multi-container pod that I can't seem to find an answer to. Lets say I run this imperative command to create a pod.yaml:</p>
<pre><code>kubectl run busybox --image=busybox --dry-run=client -o yaml -- /bin/sh -c 'some commands' > pod.yaml
</code></pre>
<p>I then edit that yaml definition to add a sidecar nginx container with just a name and image. When I go to create this pod with</p>
<pre><code>kubectl create -f pod.yaml
kubectl get pods
</code></pre>
<p>I get a pod with a single nginx container even though the busybox container is still defined in the pod spec yaml. I suspect this is due to something with <code>--dry-run=client</code> and/or running the command combined with dry run but I can't seem to find a good answer to that. Thanks in advance.</p>
<p>EDIT:
pod.yaml</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: busybox
name: busybox
spec:
containers:
- args:
- /bin/sh
- -c
    - while true; do echo "Hi I am from Main container" >> /var/log/index.html; sleep
5; done
image: busybox
name: busybox
volumeMounts:
- mountPath: /var/log
name: log-vol
image: nginx
name: nginx
volumeMounts:
- mountPath: /usr/share/nginx/html
name: log-vol
ports:
- containerPort: 80
volumes:
- name: log-vol
emptyDir: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}
</code></pre>
| humiliatedpenguin | <p>Expanding on my comment:</p>
<p>A list in YAML is a series of items marked with a leading <code>-</code>, like
this list of strings:</p>
<pre><code>- one
- two
- three
</code></pre>
<p>Or this list of dictionaries:</p>
<pre><code>containers:
- image: busybox
name: busybox
- image: nginx
name: nginx
</code></pre>
<p>Or even this list of lists:</p>
<pre><code>outerlist:
-
- item 1.1
- item 1.2
- item 1.3
-
- item 2.1
- item 2.2
- item 2.3
</code></pre>
<p>Your <code>pod.yaml</code> as written has only a single item in your <code>containers</code>
list. You need to mark the second item:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: busybox
name: busybox
spec:
containers:
- args:
- /bin/sh
- -c
    - while true; do echo "Hi I am from Main container" >> /var/log/index.html; sleep
5; done
image: busybox
name: busybox
volumeMounts:
- mountPath: /var/log
name: log-vol
- image: nginx
name: nginx
volumeMounts:
- mountPath: /usr/share/nginx/html
name: log-vol
ports:
- containerPort: 80
volumes:
- name: log-vol
emptyDir: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
</code></pre>
| larsks |
<p>Basically I want to automate this task where I have some namespaces in Kubernetes I need to delete and others that I want to leave alone. These namespaces contain the word nginx. So I was thinking in order to do that I could get the output of <strong>get namespace</strong> using some regex and store those namespaces in an array, then iterate through that array deleting them one by one.</p>
<pre><code>array=($(kubectl get ns | jq -r 'keys[]'))
declare -p array
for n in {array};
do
kubectl delete $n
done
</code></pre>
<p>I tried doing something like this but this is very basic and doesn't even have the regex. But I just left it here as an example to show what I'm trying to achieve. Any help is appreciated and thanks in advance.</p>
| Jones | <p><code>kubectl get ns</code> doesn't output JSON unless you add <code>-o json</code>. This:</p>
<pre><code>array=($(kubectl get ns | jq -r 'keys[]'))
</code></pre>
<p>Should result in an error like:</p>
<pre><code>parse error: Invalid numeric literal at line 1, column 5
</code></pre>
<hr />
<p><code>kubectl get ns -o json</code> emits a JSON response that contains a list of <code>Namespace</code> resources in the <code>items</code> key. You need to get the <code>metadata.name</code> attribute from each item, so:</p>
<pre><code>kubectl get ns -o json | jq -r '.items[].metadata.name'
</code></pre>
<hr />
<p>You only want namespaces that contain the word "nginx". We could filter the above list with <code>grep</code>, or we could add that condition to our <code>jq</code> expression:</p>
<pre><code>kubectl get ns -o json | jq -r '.items[]|select(.metadata.name|test("nginx"))|.metadata.name'
</code></pre>
<p>This will output your desired namespaces. At this point, there's no reason to store this in array and use a <code>for</code> loop; you can just pipe the output to <code>xargs</code>:</p>
<pre><code>kubectl get ns -o json |
jq -r '.items[]|select(.metadata.name|test("nginx"))|.metadata.name' |
xargs kubectl delete ns
</code></pre>
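<p>If you'd like to preview what would be removed before deleting anything, recent kubectl versions support a client-side dry run; this is just a sketch, and you can drop the flag once you're happy with the list:</p>
<pre><code>kubectl get ns -o json |
  jq -r '.items[]|select(.metadata.name|test("nginx"))|.metadata.name' |
  xargs kubectl delete ns --dry-run=client
</code></pre>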
| larsks |
<p>I'm setting up kubernetes cluster with ansible. I get the following error when trying to enable kernel IP routing:</p>
<pre><code>Failed to reload sysctl: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
</code></pre>
<p>Is this a bug in ansible or is there something wrong with my playbook?</p>
<pre><code>---
# file: site.yml
# description: Asentaa ja kaynnistaa kubernetes-klusterin riippuvuuksineen
#
# resources:
# - https://kubernetes.io/docs/setup/independent/install-kubeadm/
# - http://michele.sciabarra.com/2018/02/12/devops/Kubernetes-with-KubeAdm-Ansible-Vagrant/
# - https://docs.ansible.com/ansible/latest/modules/
# - https://github.com/geerlingguy/ansible-role-kubernetes/blob/master/tasks/setup-RedHat.yml
# - https://docs.docker.com/install/linux/docker-ce/centos/
#
# author: Tuomas Toivonen
# date: 30.12.2018
- name: Asenna docker ja kubernetes
hosts: k8s-machines
become: true
become_method: sudo
roles:
- common
vars:
ip_modules:
- ip_vs
- ip_vs_rr
- ip_vs_wrr
- ip_vs_sh
- nf_conntrack_ipv4
tasks:
- name: Poista swapfile
tags:
- os-settings
mount:
name: swap
fstype: swap
state: absent
- name: Disabloi swap-muisti
tags:
- os-settings
command: swapoff -a
when: ansible_swaptotal_mb > 0
- name: Konfiguroi verkkoasetukset
tags:
- os-settings
command: modprobe {{ item }}
loop: "{{ ip_modules }}"
- name: Modprobe
tags:
- os-settings
lineinfile:
path: "/etc/modules"
line: "{{ item }}"
create: yes
state: present
loop: "{{ ip_modules }}"
- name: Iptables
tags:
- os-settings
sysctl:
name: "{{ item }}"
value: 1
sysctl_set: yes
state: present
reload: yes
loop:
- 'net.bridge.bridge-nf-call-iptables'
- 'net.bridge.bridge-nf-call-ip6tables'
- name: Salli IP-reititys
sysctl:
name: net.ipv4.ip_forward
value: 1
state: present
reload: yes
sysctl_set: yes
- name: Lisaa docker-ce -repositorio
tags:
- repos
yum_repository:
name: docker-ce
description: docker-ce
baseurl: https://download.docker.com/linux/centos/7/x86_64/stable/
enabled: true
gpgcheck: true
repo_gpgcheck: true
gpgkey:
- https://download.docker.com/linux/centos/gpg
state: present
- name: Lisaa kubernetes -repositorio
tags:
- repos
yum_repository:
name: kubernetes
description: kubernetes
baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled: true
gpgcheck: true
repo_gpgcheck: true
gpgkey:
- https://packages.cloud.google.com/yum/doc/yum-key.gpg
- https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
state: present
- name: Asenna docker-ce -paketti
tags:
- packages
yum:
name: docker-ce
state: present
- name: Asenna NTP -paketti
tags:
- packages
yum:
name: ntp
state: present
- name: Asenna kubernetes -paketit
tags:
- packages
yum:
name: "{{ item }}"
state: present
loop:
- kubelet
- kubeadm
- kubectl
- name: Kaynnista palvelut
tags:
- services
service: name={{ item }} state=started enabled=yes
loop:
- docker
- ntpd
- kubelet
- name: Alusta kubernetes masterit
become: true
become_method: sudo
hosts: k8s-masters
tags:
- cluster
tasks:
- name: kubeadm reset
shell: "kubeadm reset -f"
- name: kubeadm init
shell: "kubeadm init --token-ttl=0 --apiserver-advertise-address=10.0.0.101 --pod-network-cidr=20.0.0.0/8" # TODO
register: kubeadm_out
- set_fact:
kubeadm_join: "{{ kubeadm_out.stdout_lines[-1] }}"
when: kubeadm_out.stdout.find("kubeadm join") != -1
- debug:
var: kubeadm_join
- name: Aseta ymparistomuuttujat
shell: >
cp /etc/kubernetes/admin.conf /home/vagrant/ &&
chown vagrant:vagrant /home/vagrant/admin.conf &&
export KUBECONFIG=/home/vagrant/admin.conf &&
echo export KUBECONFIG=$KUBECONFIG >> /home/vagrant/.bashrc
- name: Konfiguroi CNI-verkko
become: true
become_method: sudo
hosts: k8s-masters
tags:
- cluster-network
tasks:
- sysctl: name=net.bridge.bridge-nf-call-iptables value=1 state=present reload=yes sysctl_set=yes
- sysctl: name=net.bridge.bridge-nf-call-ip6tables value=1 state=present reload=yes sysctl_set=yes
- name: Asenna Flannel-plugin
shell: >
export KUBECONFIG=/home/vagrant/admin.conf ;
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
- shell: sleep 10
- name: Alusta kubernetes workerit
become: true
become_method: sudo
hosts: k8s-workers
tags:
- cluster
tasks:
- name: kubeadm reset
shell: "kubeadm reset -f"
- name: kubeadm join
tags:
- cluster
shell: "{{ hostvars['k8s-n1'].kubeadm_join }}" # TODO
</code></pre>
<p>Here is the full ansible log</p>
<pre><code>ansible-controller: Running ansible-playbook...
cd /vagrant && PYTHONUNBUFFERED=1 ANSIBLE_NOCOLOR=true ANSIBLE_CONFIG='ansible/ansible.cfg' ansible-playbook --limit="all" --inventory-file=ansible/hosts -v ansible/site.yml
Using /vagrant/ansible/ansible.cfg as config file
/vagrant/ansible/hosts did not meet host_list requirements, check plugin documentation if this is unexpected
/vagrant/ansible/hosts did not meet script requirements, check plugin documentation if this is unexpected
PLAY [Asenna docker ja kubernetes] *********************************************
TASK [Gathering Facts] *********************************************************
ok: [k8s-n1]
ok: [k8s-n3]
ok: [k8s-n2]
TASK [common : Testaa] *********************************************************
changed: [k8s-n3] => {"changed": true, "checksum": "6920e1826e439962050ec0ab4221719b3a045f04", "dest": "/template.test", "gid": 0, "group": "root", "md5sum": "a4f61c365318c3e23d466914fbd02687", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_runtime_t:s0", "size": 14, "src": "/home/vagrant/.ansible/tmp/ansible-tmp-1546760756.54-124542112178019/source", "state": "file", "uid": 0}
changed: [k8s-n2] => {"changed": true, "checksum": "6920e1826e439962050ec0ab4221719b3a045f04", "dest": "/template.test", "gid": 0, "group": "root", "md5sum": "a4f61c365318c3e23d466914fbd02687", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_runtime_t:s0", "size": 14, "src": "/home/vagrant/.ansible/tmp/ansible-tmp-1546760756.51-240329169302936/source", "state": "file", "uid": 0}
changed: [k8s-n1] => {"changed": true, "checksum": "6920e1826e439962050ec0ab4221719b3a045f04", "dest": "/template.test", "gid": 0, "group": "root", "md5sum": "a4f61c365318c3e23d466914fbd02687", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_runtime_t:s0", "size": 14, "src": "/home/vagrant/.ansible/tmp/ansible-tmp-1546760756.57-121244542660821/source", "state": "file", "uid": 0}
TASK [common : Asenna telnet] **************************************************
changed: [k8s-n2] => {"changed": true, "msg": "", "rc": 0, "results": ["Loaded plugins: fastestmirror\nLoading mirror speeds from cached hostfile\n * base: ftp.funet.fi\n * extras: ftp.funet.fi\n * updates: ftp.funet.fi\nResolving Dependencies\n--> Running transaction check\n---> Package telnet.x86_64 1:0.17-64.el7 will be installed\n--> Finished Dependency Resolution\n\nDependencies Resolved\n\n================================================================================\n Package Arch Version Repository Size\n================================================================================\nInstalling:\n telnet x86_64 1:0.17-64.el7 base 64 k\n\nTransaction Summary\n================================================================================\nInstall 1 Package\n\nTotal download size: 64 k\nInstalled size: 113 k\nDownloading packages:\nRunning transaction check\nRunning transaction test\nTransaction test succeeded\nRunning transaction\n Installing : 1:telnet-0.17-64.el7.x86_64 1/1 \n Verifying : 1:telnet-0.17-64.el7.x86_64 1/1 \n\nInstalled:\n telnet.x86_64 1:0.17-64.el7 \n\nComplete!\n"]}
changed: [k8s-n1] => {"changed": true, "msg": "", "rc": 0, "results": ["Loaded plugins: fastestmirror\nLoading mirror speeds from cached hostfile\n * base: centos.mirror.gnu.fi\n * extras: centos.mirror.gnu.fi\n * updates: centos.mirror.gnu.fi\nResolving Dependencies\n--> Running transaction check\n---> Package telnet.x86_64 1:0.17-64.el7 will be installed\n--> Finished Dependency Resolution\n\nDependencies Resolved\n\n================================================================================\n Package Arch Version Repository Size\n================================================================================\nInstalling:\n telnet x86_64 1:0.17-64.el7 base 64 k\n\nTransaction Summary\n================================================================================\nInstall 1 Package\n\nTotal download size: 64 k\nInstalled size: 113 k\nDownloading packages:\nRunning transaction check\nRunning transaction test\nTransaction test succeeded\nRunning transaction\n Installing : 1:telnet-0.17-64.el7.x86_64 1/1 \n Verifying : 1:telnet-0.17-64.el7.x86_64 1/1 \n\nInstalled:\n telnet.x86_64 1:0.17-64.el7 \n\nComplete!\n"]}
changed: [k8s-n3] => {"changed": true, "msg": "", "rc": 0, "results": ["Loaded plugins: fastestmirror\nLoading mirror speeds from cached hostfile\n * base: ftp.funet.fi\n * extras: ftp.funet.fi\n * updates: ftp.funet.fi\nResolving Dependencies\n--> Running transaction check\n---> Package telnet.x86_64 1:0.17-64.el7 will be installed\n--> Finished Dependency Resolution\n\nDependencies Resolved\n\n================================================================================\n Package Arch Version Repository Size\n================================================================================\nInstalling:\n telnet x86_64 1:0.17-64.el7 base 64 k\n\nTransaction Summary\n================================================================================\nInstall 1 Package\n\nTotal download size: 64 k\nInstalled size: 113 k\nDownloading packages:\nRunning transaction check\nRunning transaction test\nTransaction test succeeded\nRunning transaction\n Installing : 1:telnet-0.17-64.el7.x86_64 1/1 \n Verifying : 1:telnet-0.17-64.el7.x86_64 1/1 \n\nInstalled:\n telnet.x86_64 1:0.17-64.el7 \n\nComplete!\n"]}
TASK [Poista swapfile] *********************************************************
ok: [k8s-n1] => {"changed": false, "dump": "0", "fstab": "/etc/fstab", "fstype": "swap", "name": "swap", "opts": "defaults", "passno": "0"}
ok: [k8s-n2] => {"changed": false, "dump": "0", "fstab": "/etc/fstab", "fstype": "swap", "name": "swap", "opts": "defaults", "passno": "0"}
ok: [k8s-n3] => {"changed": false, "dump": "0", "fstab": "/etc/fstab", "fstype": "swap", "name": "swap", "opts": "defaults", "passno": "0"}
TASK [Disabloi swap-muisti] ****************************************************
changed: [k8s-n3] => {"changed": true, "cmd": ["swapoff", "-a"], "delta": "0:00:00.009581", "end": "2019-01-06 07:46:08.414842", "rc": 0, "start": "2019-01-06 07:46:08.405261", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [k8s-n1] => {"changed": true, "cmd": ["swapoff", "-a"], "delta": "0:00:00.119638", "end": "2019-01-06 07:46:08.484265", "rc": 0, "start": "2019-01-06 07:46:08.364627", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [k8s-n2] => {"changed": true, "cmd": ["swapoff", "-a"], "delta": "0:00:00.133924", "end": "2019-01-06 07:46:08.519646", "rc": 0, "start": "2019-01-06 07:46:08.385722", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
TASK [Konfiguroi verkkoasetukset] **********************************************
changed: [k8s-n2] => (item=ip_vs) => {"changed": true, "cmd": ["modprobe", "ip_vs"], "delta": "0:00:00.036881", "end": "2019-01-06 07:46:10.606797", "item": "ip_vs", "rc": 0, "start": "2019-01-06 07:46:10.569916", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [k8s-n3] => (item=ip_vs) => {"changed": true, "cmd": ["modprobe", "ip_vs"], "delta": "0:00:00.036141", "end": "2019-01-06 07:46:10.815043", "item": "ip_vs", "rc": 0, "start": "2019-01-06 07:46:10.778902", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [k8s-n1] => (item=ip_vs) => {"changed": true, "cmd": ["modprobe", "ip_vs"], "delta": "0:00:00.035888", "end": "2019-01-06 07:46:10.768267", "item": "ip_vs", "rc": 0, "start": "2019-01-06 07:46:10.732379", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [k8s-n2] => (item=ip_vs_rr) => {"changed": true, "cmd": ["modprobe", "ip_vs_rr"], "delta": "0:00:00.005942", "end": "2019-01-06 07:46:12.763004", "item": "ip_vs_rr", "rc": 0, "start": "2019-01-06 07:46:12.757062", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [k8s-n1] => (item=ip_vs_rr) => {"changed": true, "cmd": ["modprobe", "ip_vs_rr"], "delta": "0:00:00.006084", "end": "2019-01-06 07:46:12.896763", "item": "ip_vs_rr", "rc": 0, "start": "2019-01-06 07:46:12.890679", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [k8s-n3] => (item=ip_vs_rr) => {"changed": true, "cmd": ["modprobe", "ip_vs_rr"], "delta": "0:00:00.006325", "end": "2019-01-06 07:46:12.899750", "item": "ip_vs_rr", "rc": 0, "start": "2019-01-06 07:46:12.893425", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [k8s-n2] => (item=ip_vs_wrr) => {"changed": true, "cmd": ["modprobe", "ip_vs_wrr"], "delta": "0:00:00.006195", "end": "2019-01-06 07:46:14.795507", "item": "ip_vs_wrr", "rc": 0, "start": "2019-01-06 07:46:14.789312", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [k8s-n1] => (item=ip_vs_wrr) => {"changed": true, "cmd": ["modprobe", "ip_vs_wrr"], "delta": "0:00:00.007328", "end": "2019-01-06 07:46:14.819072", "item": "ip_vs_wrr", "rc": 0, "start": "2019-01-06 07:46:14.811744", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [k8s-n3] => (item=ip_vs_wrr) => {"changed": true, "cmd": ["modprobe", "ip_vs_wrr"], "delta": "0:00:00.007251", "end": "2019-01-06 07:46:14.863192", "item": "ip_vs_wrr", "rc": 0, "start": "2019-01-06 07:46:14.855941", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [k8s-n3] => (item=ip_vs_sh) => {"changed": true, "cmd": ["modprobe", "ip_vs_sh"], "delta": "0:00:00.007590", "end": "2019-01-06 07:46:16.815226", "item": "ip_vs_sh", "rc": 0, "start": "2019-01-06 07:46:16.807636", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [k8s-n1] => (item=ip_vs_sh) => {"changed": true, "cmd": ["modprobe", "ip_vs_sh"], "delta": "0:00:00.006380", "end": "2019-01-06 07:46:16.941470", "item": "ip_vs_sh", "rc": 0, "start": "2019-01-06 07:46:16.935090", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [k8s-n2] => (item=ip_vs_sh) => {"changed": true, "cmd": ["modprobe", "ip_vs_sh"], "delta": "0:00:00.006619", "end": "2019-01-06 07:46:16.808432", "item": "ip_vs_sh", "rc": 0, "start": "2019-01-06 07:46:16.801813", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [k8s-n3] => (item=nf_conntrack_ipv4) => {"changed": true, "cmd": ["modprobe", "nf_conntrack_ipv4"], "delta": "0:00:00.007618", "end": "2019-01-06 07:46:18.825593", "item": "nf_conntrack_ipv4", "rc": 0, "start": "2019-01-06 07:46:18.817975", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [k8s-n1] => (item=nf_conntrack_ipv4) => {"changed": true, "cmd": ["modprobe", "nf_conntrack_ipv4"], "delta": "0:00:00.008181", "end": "2019-01-06 07:46:18.910050", "item": "nf_conntrack_ipv4", "rc": 0, "start": "2019-01-06 07:46:18.901869", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [k8s-n2] => (item=nf_conntrack_ipv4) => {"changed": true, "cmd": ["modprobe", "nf_conntrack_ipv4"], "delta": "0:00:00.007427", "end": "2019-01-06 07:46:18.962850", "item": "nf_conntrack_ipv4", "rc": 0, "start": "2019-01-06 07:46:18.955423", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
TASK [Modprobe] ****************************************************************
changed: [k8s-n2] => (item=ip_vs) => {"backup": "", "changed": true, "item": "ip_vs", "msg": "line added"}
changed: [k8s-n1] => (item=ip_vs) => {"backup": "", "changed": true, "item": "ip_vs", "msg": "line added"}
changed: [k8s-n3] => (item=ip_vs) => {"backup": "", "changed": true, "item": "ip_vs", "msg": "line added"}
changed: [k8s-n2] => (item=ip_vs_rr) => {"backup": "", "changed": true, "item": "ip_vs_rr", "msg": "line added"}
changed: [k8s-n1] => (item=ip_vs_rr) => {"backup": "", "changed": true, "item": "ip_vs_rr", "msg": "line added"}
changed: [k8s-n3] => (item=ip_vs_rr) => {"backup": "", "changed": true, "item": "ip_vs_rr", "msg": "line added"}
changed: [k8s-n2] => (item=ip_vs_wrr) => {"backup": "", "changed": true, "item": "ip_vs_wrr", "msg": "line added"}
changed: [k8s-n1] => (item=ip_vs_wrr) => {"backup": "", "changed": true, "item": "ip_vs_wrr", "msg": "line added"}
changed: [k8s-n3] => (item=ip_vs_wrr) => {"backup": "", "changed": true, "item": "ip_vs_wrr", "msg": "line added"}
changed: [k8s-n2] => (item=ip_vs_sh) => {"backup": "", "changed": true, "item": "ip_vs_sh", "msg": "line added"}
changed: [k8s-n1] => (item=ip_vs_sh) => {"backup": "", "changed": true, "item": "ip_vs_sh", "msg": "line added"}
changed: [k8s-n3] => (item=ip_vs_sh) => {"backup": "", "changed": true, "item": "ip_vs_sh", "msg": "line added"}
changed: [k8s-n2] => (item=nf_conntrack_ipv4) => {"backup": "", "changed": true, "item": "nf_conntrack_ipv4", "msg": "line added"}
changed: [k8s-n1] => (item=nf_conntrack_ipv4) => {"backup": "", "changed": true, "item": "nf_conntrack_ipv4", "msg": "line added"}
changed: [k8s-n3] => (item=nf_conntrack_ipv4) => {"backup": "", "changed": true, "item": "nf_conntrack_ipv4", "msg": "line added"}
TASK [Iptables] ****************************************************************
failed: [k8s-n3] (item=net.bridge.bridge-nf-call-iptables) => {"changed": false, "item": "net.bridge.bridge-nf-call-iptables", "msg": "Failed to reload sysctl: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory\n"}
failed: [k8s-n1] (item=net.bridge.bridge-nf-call-iptables) => {"changed": false, "item": "net.bridge.bridge-nf-call-iptables", "msg": "Failed to reload sysctl: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory\n"}
failed: [k8s-n2] (item=net.bridge.bridge-nf-call-iptables) => {"changed": false, "item": "net.bridge.bridge-nf-call-iptables", "msg": "Failed to reload sysctl: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory\n"}
failed: [k8s-n3] (item=net.bridge.bridge-nf-call-ip6tables) => {"changed": false, "item": "net.bridge.bridge-nf-call-ip6tables", "msg": "Failed to reload sysctl: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory\nsysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory\n"}
failed: [k8s-n2] (item=net.bridge.bridge-nf-call-ip6tables) => {"changed": false, "item": "net.bridge.bridge-nf-call-ip6tables", "msg": "Failed to reload sysctl: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory\nsysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory\n"}
failed: [k8s-n1] (item=net.bridge.bridge-nf-call-ip6tables) => {"changed": false, "item": "net.bridge.bridge-nf-call-ip6tables", "msg": "Failed to reload sysctl: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory\nsysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory\n"}
to retry, use: --limit @/vagrant/ansible/site.retry
PLAY RECAP *********************************************************************
k8s-n1 : ok=7 changed=5 unreachable=0 failed=1
k8s-n2 : ok=7 changed=5 unreachable=0 failed=1
k8s-n3 : ok=7 changed=5 unreachable=0 failed=1
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
</code></pre>
| Tuomas Toivonen | <p>In the playbook, add the following task to load the <code>br_netfilter</code> module:</p>
<pre><code>- name: Ensure br_netfilter is enabled.
  modprobe:
    name: br_netfilter
    state: present
</code></pre>
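<p>Since your playbook already loads and persists a list of modules through the <code>ip_modules</code> variable, another option (a sketch against your existing tasks) is to add <code>br_netfilter</code> to that list, so it is modprobe'd and written to <code>/etc/modules</code> before the two sysctl tasks run:</p>
<pre><code>vars:
  ip_modules:
    - ip_vs
    - ip_vs_rr
    - ip_vs_wrr
    - ip_vs_sh
    - nf_conntrack_ipv4
    - br_netfilter
</code></pre>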
| geerlingguy |
<p>I have a shell script <code>my-script.sh</code> like:</p>
<pre class="lang-bash prettyprint-override"><code>#!/bin/bash
while true; do
echo '1'
done
</code></pre>
<p>I can deploy a bash pod in Kubernetes like:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl run my-shell --rm -it --image bash -- bash
</code></pre>
<p>Now, I want to execute the script on bash. How can I pass <code>my-script.sh</code> as input to bash? Something like</p>
<pre class="lang-bash prettyprint-override"><code>kubectl run my-shell --rm -it --image bash -- /bin/bash -c < my-script.sh
</code></pre>
| Vahid | <p>Just drop the <code>-t</code> to <code>kubectl run</code> (because you're reading from stdin, not a terminal) and the <code>-c</code> from bash (because you're passing the script on stdin, not as an argument):</p>
<pre><code>$ kubectl run my-shell --rm -i --image docker.io/bash -- bash < my-script.sh
If you don't see a command prompt, try pressing enter.
1
1
1
1
...
</code></pre>
| larsks |
<p>According to documentation (<a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/</a>) I can create cron job in k8s with specify timezone like: <code>"CRON_TZ=UTC 0 23 * * *"</code></p>
<p>My deployment file is:</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: scheduler
spec:
schedule: "CRON_TZ=UTC 0 23 * * *"
...
</code></pre>
<p>During the deploy I am getting an error:</p>
<blockquote>
<p>The CronJob "scheduler" is invalid: spec.schedule: Invalid value: "CRON_TZ=UTC 0 23 * * *": Expected exactly 5 fields, found 6: CRON_TZ=UTC 0 23 * * *</p>
</blockquote>
<p>Cron works perfectly without the timezone (<code>schedule: "0 23 * * *"</code>)</p>
<p>Cluster version is: <code>Kubernetes 1.21.2-do.2</code> - digitalocean.</p>
<p>What is wrong?</p>
| yihereg819 | <p>The <code>CRON_TZ=<timezone></code> prefix won't be available yet, not until 1.22. The inclusion in the 1.21 release docs was an error.</p>
<p>Originally, the change adding the syntax was <a href="https://github.com/kubernetes/website/pull/29455/files" rel="nofollow noreferrer">included for 1.22</a>, but it appears someone got confused and <a href="https://github.com/kubernetes/website/pull/29492/files" rel="nofollow noreferrer">moved the documentation over to 1.21</a>.
Supporting the <code>CRON_TZ=<timezone></code> syntax is accidental, purely because the <a href="https://github.com/robfig/cron" rel="nofollow noreferrer">package used to handle the scheduling</a> was <a href="https://github.com/kubernetes/kubernetes/commit/4b36a5cbe95d9e305d67cfda2ffa87bb3a0ccd47" rel="nofollow noreferrer">recently upgraded to version 3</a>, which added support for the syntax. The package is the key component that makes the syntax possible and is only part of 1.22.</p>
<p>As of <a href="https://github.com/kubernetes/website/commit/b5e83e89448bcb69c095b7a056aa3f5fa4dcd4ed" rel="nofollow noreferrer">November 2021</a> the wording in the documentation has been adjusted to state that <code>CRON_TZ</code> is not officially supported:</p>
<blockquote>
<p><strong>Caution</strong>:</p>
<p>The v1 CronJob API does not officially support setting timezone as explained above.</p>
<p>Setting variables such as <code>CRON_TZ</code> or <code>TZ</code> is not officially supported by the Kubernetes project. <code>CRON_TZ</code> or <code>TZ</code> is an implementation detail of the internal library being used for parsing and calculating the next Job creation time. Any usage of it is not recommended in a production cluster.</p>
</blockquote>
<p>If you can upgrade to 1.24, you can instead use the new <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#time-zones" rel="nofollow noreferrer"><code>CronJobTimeZone</code> feature gate</a> to enable the new, official, time-zone support added with <a href="https://github.com/kubernetes/enhancements/blob/2853299b8e330f12584624326fee186b56d4614c/keps/sig-apps/3140-TimeZone-support-in-CronJob/README.md" rel="nofollow noreferrer">KEP 3140</a>. Note that this is still an alpha-level feature; hopefully it will reach beta in 1.25. If all goes well, the feature should reach maturity in release 1.27.</p>
<p>With the feature-gate enabled, you can add a <code>timeZone</code> field to your CronJob <code>spec</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: scheduler
spec:
schedule: "0 23 * * *"
timeZone: "Etc/UTC"
</code></pre>
| Martijn Pieters |
<p>I would like to apply HostRegexp with Traefik as the ingress controller. I have something similar to the rule below with docker as the provider in the Traefik service.
<code>"traefik.http.routers.test.rule=HostRegexp(`{host:.+}`) && PathPrefix(`/test`)"</code></p>
<p>Would like to replicate similar stuff in Kubernetes.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: test-service
annotations:
#kubernetes.io/ingress.class: traefik
# entryPoints Configuration
traefik.ingress.kubernetes.io/router.entrypoints: web,websecure
traefik.ingress.kubernetes.io/router.tls: "true"
#configuring rules
#traefik.frontend.rule.type: HostRegexp
spec:
rules:
- host: `{host:.+}`
http:
paths:
- path: /rmq
pathType: Prefix
backend:
service:
name: rabbitmq
port:
name: http
</code></pre>
<p>I tried the options shown above (the commented-out annotations). Any inputs?</p>
| road2victory | <p>I don't know if this functionality is exposed through the <code>Ingress</code> provider, but you can do it using an <code>IngressRoute</code> resource. For example, if I have:</p>
<pre><code>apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: example
spec:
entryPoints:
- web
routes:
- match: "HostRegexp(`{host:foo.*}.using.ingressroute`)"
kind: Rule
services:
- name: example
port: http
</code></pre>
<p>Then I can access the service using the hostnames like:</p>
<ul>
<li><code>foo.using.ingressroute</code></li>
<li><code>foobar.using.ingressroute</code></li>
<li><code>foo.one.two.three.using.ingressroute</code></li>
</ul>
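<p>If you also want to limit the match to a path prefix, as in your original Docker label, you can combine the matchers. A sketch using the <code>rabbitmq</code> service and the <code>web</code>/<code>websecure</code> entrypoints from your manifest (adjust the names to whatever your Traefik install actually defines):</p>
<pre><code>apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: rabbitmq-any-host
spec:
  entryPoints:
    - web
    - websecure
  routes:
    - match: "HostRegexp(`{host:.+}`) && PathPrefix(`/rmq`)"
      kind: Rule
      services:
        - name: rabbitmq
          port: http
</code></pre>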
| larsks |
<p>I am trying to deploy an admission controller / mutating webhook</p>
<p>Image: <a href="https://hub.docker.com/layers/247126140/aagashe/label-webhook/1.2.0/images/sha256-acfe141ca782eb8699a3656a77df49a558a1b09989762dbf263a66732fd00910?context=repo" rel="nofollow noreferrer">https://hub.docker.com/layers/247126140/aagashe/label-webhook/1.2.0/images/sha256-acfe141ca782eb8699a3656a77df49a558a1b09989762dbf263a66732fd00910?context=repo</a></p>
<p>Steps are executed in the below order.</p>
<ol>
<li>Created the ca-csr.json and ca-config.json as per below
<strong>ca-config.json</strong></li>
</ol>
<pre><code>{
"signing": {
"default": {
"expiry": "175200h"
},
"profiles": {
"default": {
"usages": ["signing", "key encipherment", "server auth", "client auth"],
"expiry": "175200h"
}
}
}
}
</code></pre>
<p><strong>ca-csr.json</strong></p>
<pre><code>{
"signing": {
"default": {
"expiry": "175200h"
},
"profiles": {
"default": {
"usages": ["signing", "key encipherment", "server auth", "client auth"],
"expiry": "175200h"
}
}
}
}
</code></pre>
<p>Create a docker container and run commands one after the other as below:</p>
<pre><code>docker run -it --rm -v ${PWD}:/work -w /work debian bash
apt-get update && apt-get install -y curl &&
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64 -o /usr/local/bin/cfssl && \
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64 -o /usr/local/bin/cfssljson && \
chmod +x /usr/local/bin/cfssl && \
chmod +x /usr/local/bin/cfssljson
cfssl gencert -initca ca-csr.json | cfssljson -bare /tmp/ca
cfssl gencert \
-ca=/tmp/ca.pem \
-ca-key=/tmp/ca-key.pem \
-config=ca-config.json \
-hostname="label-webhook,label-webhook.default.svc.cluster.local,label-webhook.default.svc,localhost,127.0.0.1" \
-profile=default \
ca-csr.json | cfssljson -bare /tmp/label-webhook
ca_pem_b64="$(openssl base64 -A <"/tmp/ca.pem")"
ls -alrth /tmp/
total 32K
drwxr-xr-x 1 root root 4.0K Jul 5 05:07 ..
-rw-r--r-- 1 root root 2.0K Jul 5 05:13 ca.pem
-rw-r--r-- 1 root root 1.8K Jul 5 05:13 ca.csr
-rw------- 1 root root 3.2K Jul 5 05:13 ca-key.pem
-rw-r--r-- 1 root root 2.2K Jul 5 05:17 label-webhook.pem
-rw-r--r-- 1 root root 1.9K Jul 5 05:17 label-webhook.csr
-rw------- 1 root root 3.2K Jul 5 05:17 label-webhook-key.pem
drwxrwxrwt 1 root root 4.0K Jul 5 05:17 .
cp -apvf /tmp/* .
'/tmp/ca-key.pem' -> './ca-key.pem'
'/tmp/ca.csr' -> './ca.csr'
'/tmp/ca.pem' -> './ca.pem'
'/tmp/label-webhook-key.pem' -> './label-webhook-key.pem'
'/tmp/label-webhook.csr' -> './label-webhook.csr'
'/tmp/label-webhook.pem' -> './label-webhook.pem'
pwd
/work
export ca_pem_b64="LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZqakNDQTNhZ0F3SUJBZ0lVVVVCSHcvTUlPak5IVjE1ZHBhMytFb0RtTlE4d0RRWUpLb1pJaHZjTkFRRU4KQlFBd1h6RUxNQWtHQTFVRUJoTUNRVlV4RlRBVEJnTlZCQWdUREdOaFlYTXRaR1YyTFdOaFl6RVNNQkFHQTFVRQpCeE1KVFdWc1ltOTFjbTVsTVF3d0NnWURWUVFLRXdOUVYwTXhGekFWQmdOVkJBc1REa05OVXlCWGIzSnJjM1J5ClpXRnRNQjRYRFRJeU1EY3dOVEExTURnd01Gb1hEVEkzTURjd05EQTFNRGd3TUZvd1h6RUxNQWtHQTFVRUJoTUMKUVZVeEZUQVRCZ05WQkFnVERHTmhZWE10WkdWMkxXTmhZekVTTUJBR0ExVUVCeE1KVFdWc1ltOTFjbTVsTVF3dwpDZ1lEVlFRS0V3TlFWME14RnpBVkJnTlZCQXNURGtOTlV5QlhiM0pyYzNSeVpXRnRNSUlDSWpBTkJna3Foa2lHCjl3MEJBUUVGQUFPQ0FnOEFNSUlDQ2dLQ0FnRUF1Vmxyd3lLSE5QMVllZUY5MktZMG02YXc0VUhBMEtac0JyNUkKeEZaWnVtM3ZzSHV3eXFBa3BjZHpibWhqSmVGcTZXYXJXUUNSTGxoU1ZRaVcxUnJkOXpxMWVndVZRYjJmN0w1cApkbGFteGZ4UGhSc3RodTZscXVCOC9XbWo3RVVEVnBMMkx3bHJNUm1tOWhrYWxSSUN6cXRLa1Y2MDFJMG9KMEd6ClN4SUFPSnRBS3VxamtuTWtnOTNTVit0WEdVamxLOTFzbGZ3V2Z5UUtjVVZWU1dxUVZiUEdxcjFIblZzeU5TcGYKTERFZGRFRVBNSUZLM3U2eWg3M3R3ME1SR3RyQ0RWSEdyR2xtd0xrZDZENjhzdHJCWWhEdnVVU2NRMG5Wb2VxaQowbVRESENCQ0x3UVptd2piR1UyYzhrMklVMGRkaGUvM2dYb2ErZkxSL3c4RHFPUldFKzVPREFxNFp1aklRQ01WCkdWSVJzdERwRmZscFdvQ0t1RnFDMUk2bFJPcFVJYi9ER0xyV29oeWdRYmxmcFlZd0JkbWtqVWhXaHpOL0N4MTcKeDR2WFM3a0NjVDJDVDZqR0NoUVlZTGRPL2lsTCtFMEhJWE9oRUVWbVZhaTcrUW5qRXVmeTEyUGlHQUEyWnc2dwp6NmpYVjJab1NXQUgxZ0xGSTYxTGRNQTE1Y084RTJERkFHMXdOUmM0TndJYUNmejNQMDRBUzFwbk5yRW5xNE1XCkVqM2ZUSGU4MWlRTTBuMnZ6VlltUDVBcEFwa2JNeUQrRU9ENWxnWXlFa1dTNVpON2RlVWZ5QURZSVQvMFR0USsKQTFzbk94K1RnT0lnTGxnY0xrMWllVnhHNHBLOTJqTWpWMjBGb0RDUmM1SHZCWHZrMWYvSWN2VDhDOENDRXJISwpJWkptdGFrQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0VHTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3CkhRWURWUjBPQkJZRUZQMjJrRm4rZWlOcFJHMkU0VkhoVGlRdFo0TmlNQTBHQ1NxR1NJYjNEUUVCRFFVQUE0SUMKQVFBTlRHSEhCODFwaWxwVnBsamZvVjY3ZTlhWTJFaUNudkRRSmdTWTBnZ0JOY3ZzblVMaFRpKytWZ25qZ0Q5YwpCOGMvQkU1QU0vWGdWVHE3UXpiUS92REhLbE4xbjRMbXdzWWxJc1RTWGhDZCtCTFlLeGEyTlJsVXZHR3h2OWZFCnZTVVpvcDk4MEtiMExlQU5lZ0FuOHlldnRTZ2hRdC9GNkcrVENOWk5GS25ZZFFKenp2ejFXNk1VOURPL0J4cGMKVWovTTZSMFhaeHdJOE5hR281MGRQUzZTVFNTcUdCQ3VIbUEyRDRrUCtWdHZIdVZoS2Izd3pXMVVPL1dCcTBGLwpKU3o2and4c05OUU8vOVN4SXNNOVRMWFY5UjkvNThSTEl1Y3ZObDFUODd2dzd5ZGp0S0c3YUR3N1lxSXplODN0ClF1WW1NQlY3Y0k2STdSRi9RVHhLVUdGbXJ6K3lDTHZzNjViVjJPdThxUm5ocUhTV3kwbkNjakYwR2h6L09hblIKdDFNWWNKTytpQzJBR09adVlGRnJsbUk0cWlCUHBJc204YmxDVGRoT1FhLzI2RTJWQzJXQk9SQmVrU2VWY3ZzUgpQaXFWMkRzV2I3ODc5UzEwWS9lOVQ2WUhIc3Z4TDVjZnhibVBsRDF2dlR0TmI2TjJiYTYyWmZVVlEvU3E3ZmEwClhKbUtpQ2pLbU9oMVhKRm5ZRmpRb25kMUFSNUVzR0drblI5NW5QN0x5ejd5RmpHMjFndkJHSU0xbHV0alg5aW8KVkdpMjlHdFA4THVQait6TDNsMElUTEZqb0RCOVBiNXFFbjR4MGpqMHlHc09kdFQ0ZGYvSGVja2ZHV0xmNkZEawp5ZmNuMTlRWDB0NXl6YklZVG9qRFV3VXlEUFZDYW44Y0JkdWdCNGptZkNjV2pRPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo="
helm upgrade --install rel-label-webhook chart --namespace mutatingwebhook --create-namespace --set secret.cert=$(cat kubernetes/admissioncontrollers/introduction/tls/label-webhook.csr | base64 -w0) --set secret.key=$(cat kubernetes/admissioncontrollers/introduction/tls/label-webhook.pem | base64 -w0) --set secret.cabundle=$(echo "${ca_pem_b64}"|base64 -w0)
</code></pre>
<p>I get an error like the one below when I check the pod status and logs:</p>
<pre><code>k get all
NAME READY STATUS RESTARTS AGE
pod/rel-label-webhook-5575b849dc-d62np 0/1 CrashLoopBackOff 2 (20s ago) 48s
pod/rel-label-webhook-5575b849dc-gg94h 0/1 Error 3 (35s ago) 63s
pod/rel-label-webhook-5575b849dc-zcvc9 0/1 CrashLoopBackOff 2 (19s ago) 48s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/rel-label-webhook ClusterIP 10.0.135.138 <none> 8001/TCP 63s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/rel-label-webhook 0/3 3 0 64s
NAME DESIRED CURRENT READY AGE
replicaset.apps/rel-label-webhook-5575b849dc 3 3 0 64s
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
horizontalpodautoscaler.autoscaling/rel-label-webhook Deployment/rel-label-webhook <unknown>/80%, <unknown>/80% 3 8 3 64s
k logs -f pod/rel-label-webhook-5575b849dc-gg94h
time="2022-07-05T13:37:45Z" level=info msg="listening on :8001"
error: error serving webhook: tls: failed to find "CERTIFICATE" PEM block in certificate input after skipping PEM blocks of the following types: [CERTIFICATE REQUEST]
</code></pre>
<p>What am I doing wrong here?</p>
<p>P.S:</p>
<p><strong>Edit 1.</strong></p>
<p>Tried as per larsks but now getting a new error!</p>
<p><strong>Command</strong></p>
<pre><code>azureuser@ubuntuvm:~/container-label-webhook$ helm upgrade --install rel-label-webhook chart --namespace mutatingwebhook --create-namespace --set secret.cert=$(cat kubernetes/admissioncontrollers/introduction/tls/label-webhook.pem | base64 -w0) --set secret.key=$(cat kubernetes/admissioncontrollers/introduction/tls/label-webhook.pem | base64 -w0) --set secret.cabundle="echo "${ca_pem_b64}"|base64 -w0"
</code></pre>
<p><strong>Error:</strong></p>
<pre><code>azureuser@ubuntuvm:~/container-label-webhook$ k logs -f pod/rel-label-webhook-5575b849dc-44xrn
time="2022-07-06T02:41:12Z" level=info msg="listening on :8001"
error: error serving webhook: tls: found a certificate rather than a key in the PEM for the private key
</code></pre>
| learner | <p>The error seems pretty clear: the code is looking for a <code>CERTIFICATE</code> block in a PEM-encoded file, but it is only find a <code>CERTIFICATE REQUEST</code> block. It looks like you're passing a certificate signing request (csr) where the code expects to find an actual SSL certificate. And in fact, looking at your <code>helm upgrade</code> command, that's exactly what you're doing:</p>
<pre><code>helm upgrade ... \
--set secret.cert=$(cat kubernetes/admissioncontrollers/introduction/tls/label-webhook.csr | base64 -w0) ...
</code></pre>
<p>You should use <code>label-webhook.pem</code> here instead of <code>label-webhook.csr</code>.</p>
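<p>The follow-up error in your edit ("found a certificate rather than a key") is the same kind of mix-up: there, both <code>secret.cert</code> and <code>secret.key</code> point at <code>label-webhook.pem</code>, and the CA bundle argument is quoted so the <code>echo ... | base64</code> pipeline is passed as literal text instead of being executed. A sketch of the corrected flags, assuming the chart expects base64-encoded values and that <code>ca_pem_b64</code> already holds the base64-encoded CA (if your chart template expects a doubly-encoded value, keep your original <code>$(echo ... | base64 -w0)</code> form):</p>
<pre><code>helm upgrade --install rel-label-webhook chart \
  --namespace mutatingwebhook --create-namespace \
  --set secret.cert=$(cat kubernetes/admissioncontrollers/introduction/tls/label-webhook.pem | base64 -w0) \
  --set secret.key=$(cat kubernetes/admissioncontrollers/introduction/tls/label-webhook-key.pem | base64 -w0) \
  --set secret.cabundle="${ca_pem_b64}"
</code></pre>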
| larsks |
<p>I was trying to test a scenario where a pod mounts a volume and tries to write a file to it. The yaml below works fine when I exclude command and args. However, with command and args it fails with "CrashLoopBackOff". The describe command does not provide much information about the failure. What's wrong here?</p>
<p>Note: I was running this yaml on katacoda.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
run: voltest
name: voltest
spec:
replicas: 1
selector:
matchLabels:
run: voltest
template:
metadata:
creationTimestamp: null
labels:
run: voltest
spec:
containers:
- image: nginx
name: voltest
volumeMounts:
- mountPath: /var/local/aaa
name: mydir
command: ["/bin/sh"]
args: ["-c", "echo 'test complete' > /var/local/aaa/testOut.txt"]
volumes:
- name: mydir
hostPath:
path: /var/local/aaa
type: DirectoryOrCreate
</code></pre>
<p>Describe command output:</p>
<pre><code> Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 49s default-scheduler Successfully assigned default/voltest-78678dd56c-h5frs to controlplane
Normal Pulling 19s (x3 over 48s) kubelet, controlplane Pulling image "nginx"
Normal Pulled 17s (x3 over 39s) kubelet, controlplane Successfully pulled image "nginx"
Normal Created 17s (x3 over 39s) kubelet, controlplane Created container voltest
Normal Started 17s (x3 over 39s) kubelet, controlplane Started container voltest
Warning BackOff 5s (x4 over 35s) kubelet, controlplane Back-off restarting failed container
</code></pre>
| harsh | <p>You've configured your pod to run a single shell command:</p>
<pre><code>command: ["/bin/sh"]
args: ["-c", "echo 'test complete' > /var/local/aaa/testOut.txt"]
</code></pre>
<p>This means that the pod starts up, runs <code>echo 'test complete' > /var/local/aaa/testOut.txt</code>, and then <em>immediately exits</em>. From the perspective
of kubernetes, this is a crash.</p>
<p>You've <em>replaced</em> the default behavior of the <code>nginx</code> image ("run
nginx") with a shell command.</p>
<p>If you want the pod to continue running, you'll need to arrange for it
to run some sort of long-running command. A simple solution would be
something like:</p>
<pre><code>command: ["/bin/sh"]
args: ["-c", "echo 'test complete' > /var/local/aaa/testOut.txt; sleep 3600"]
</code></pre>
<p>This will cause the pod to sleep for an hour before exiting, giving
you time to inspect the results of your shell command.</p>
<hr />
<p>Note that because your <code>mydir</code> volume is a <code>hostPath</code> volume mounted on
<code>/var/local/aaa</code>, you can verify the write either by exec-ing into the
container or by looking at <code>/var/local/aaa/testOut.txt</code> on the node where
the pod is scheduled.</p>
| larsks |
<p>I have a Kubernetes cluster where a resource is available in multiple types.</p>
<p>They are labeled <code>r_Type=r1</code>, <code>r_Type=r2</code>, and so on.</p>
<p>I need to start a pod that can run on a number of those types. For just one type, the corresponding YAML file would be simple:</p>
<pre class="lang-yaml prettyprint-override"><code>nodeSelector:
r_Type: r1
</code></pre>
<p>How do I write that if the pod can run on multiple of these resources? I see in the <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#set-based-requirement" rel="nofollow noreferrer">Kubernetes documentation</a> on labels that there is an 'in' set notation. But how do I write it? The following code does not look right:</p>
<pre class="lang-yaml prettyprint-override"><code>nodeSelector:
r_Type in (r1, r2)
</code></pre>
<p>That does not seem like valid YAML.</p>
<p>Can someone provide an example of how to achieve this?</p>
| Hans | <p>You can't use the set syntax in a <code>nodeSelector</code>; that field is a simple <code>key: value</code> map:</p>
<pre><code>$ kubectl explain pod.spec.nodeSelector
KIND: Pod
VERSION: v1
FIELD: nodeSelector <map[string]string>
DESCRIPTION:
NodeSelector is a selector which must be true for the pod to fit on a node.
Selector which must match a node's labels for the pod to be scheduled on
that node. More info:
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
</code></pre>
<p>The expression <code>map[string]string</code> is a Go type expression, and it means the value is a map of string keys to string values.</p>
<p>If you read through "<a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="nofollow noreferrer">Assigning Pods to Nodes</a>", you'll note that there are alternatives to <code>nodeSelector</code> that have more expressive syntax. For example, the <code>nodeAffinity</code> field supports <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#resources-that-support-set-based-requirements" rel="nofollow noreferrer"><code>matchExpressions</code></a>, which do support set-based requirements:</p>
<pre><code>spec:
affinity:
    nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: r_Type
operator: In
values:
- r1
- r2
</code></pre>
<p>See also <code>kubectl explain</code>:</p>
<pre><code>$ kubectl explain pod.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchExpressions
KIND: Pod
VERSION: v1
RESOURCE: matchExpressions <[]Object>
DESCRIPTION:
A list of node selector requirements by node's labels.
A node selector requirement is a selector that contains values, a key, and
an operator that relates the key and values.
FIELDS:
key <string> -required-
The label key that the selector applies to.
operator <string> -required-
Represents a key's relationship to a set of values. Valid operators are In,
NotIn, Exists, DoesNotExist. Gt, and Lt.
Possible enum values:
- `"DoesNotExist"`
- `"Exists"`
- `"Gt"`
- `"In"`
- `"Lt"`
- `"NotIn"`
values <[]string>
An array of string values. If the operator is In or NotIn, the values array
must be non-empty. If the operator is Exists or DoesNotExist, the values
array must be empty. If the operator is Gt or Lt, the values array must
have a single element, which will be interpreted as an integer. This array
is replaced during a strategic merge patch.
</code></pre>
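<p>As a quick sanity check, the same set-based syntax also works as a label selector on the command line, so you can preview which nodes the affinity rule would match:</p>
<pre><code>kubectl get nodes -l 'r_Type in (r1,r2)'
</code></pre>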
| larsks |
<p>I have a Kubernetes cluster set up and managed by AKS, and I have access to it with the python client.</p>
<p>The thing is that when I try to send a patch scale request, I get an error.</p>
<p>I've found information about scaling namespaced deployments from the python client in the GitHub docs, but it was not clear what body is needed to make the request work:</p>
<pre><code># Enter a context with an instance of the API kubernetes.client
with kubernetes.client.ApiClient(configuration) as api_client:
# Create an instance of the API class
api_instance = kubernetes.client.AppsV1Api(api_client)
name = 'name_example' # str | name of the Scale
namespace = 'namespace_example' # str | object name and auth scope, such as for teams and projects
body = None # object |
pretty = 'pretty_example' # str | If 'true', then the output is pretty printed. (optional)
dry_run = 'dry_run_example' # str | When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed (optional)
field_manager = 'field_manager_example' # str | fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). (optional)
force = True # bool | Force is going to \"force\" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. (optional)
try:
api_response = api_instance.patch_namespaced_deployment_scale(name, namespace, body, pretty=pretty, dry_run=dry_run, field_manager=field_manager, force=force)
pprint(api_response)
except ApiException as e:
print("Exception when calling AppsV1Api->patch_namespaced_deployment_scale: %s\n" % e)
</code></pre>
<p>So when running the code I'm getting <code>Reason: Unprocessable Entity</code>.</p>
<p>Does anyone have any idea what format the body should be in? For example, if I want to scale the deployment to 2 replicas, how can it be done?</p>
| Roy Levy | <p>The <code>body</code> argument to the <code>patch_namespaced_deployment_scale</code> can be a <a href="http://jsonpatch.com/" rel="noreferrer">JSONPatch</a> document, as @RakeshGupta shows in the comment, but it can also be a partial resource manifest. For example, this works:</p>
<pre><code>>>> api_response = api_instance.patch_namespaced_deployment_scale(
... name, namespace,
... [{'op': 'replace', 'path': '/spec/replicas', 'value': 2}])
</code></pre>
<p>(Note that the <code>value</code> needs to be an integer, not a string as in the
comment.)</p>
<p>But this also works:</p>
<pre><code>>>> api_response = api_instance.patch_namespaced_deployment_scale(
... name, namespace,
... {'spec': {'replicas': 2}})
</code></pre>
| larsks |
<p>I've read like a dozen tutorials like <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">these</a> and many answers like <a href="https://stackoverflow.com/questions/46535057/how-do-pods-on-the-same-node-communicate-with-each-other">these</a> but they couldn't help me. I have a service (created with the Python code at the end of this question), and my question is:</p>
<ul>
<li>if I have this said service, what do I need to do/use to send the files?</li>
</ul>
<p>For clarification: I have 2 pods <strong>A</strong> and <strong>B</strong> (both have 1 container each with standard Python Docker images) and wish to send files back and forth. So if I have the service, I have the port(s) and then I can use the combination of IP:port to send the files e.g. by creating a (TCP) server and a client Python? Or is there a "more Kubernetes-like" way to do this?</p>
<p>The service I created using Python:</p>
<pre><code>def create_service(core_v1_api):
body = client.V1Service(
api_version="v1",
kind="Service",
metadata=client.V1ObjectMeta(
name="banking-svc"
),
spec=kube_setup.client.V1ServiceSpec(
selector={"app": "my_app"},
type="ClusterIP",
ports=[client.V1ServicePort(
port=6666,
target_port=6666,
protocol="TCP"
)]
)
)
core_v1_api.create_namespaced_service(namespace="default", body =body)
</code></pre>
| topkek | <p>First, I would suggest reading <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">the official documentation</a>, which is a reasonably good introduction to the idea of a service in kubernetes.</p>
<blockquote>
<p>I have only roughly idea what to put into this service file (or these service files?)</p>
</blockquote>
<p>A service is just a pointer to one (or more) network services provided by your pods.</p>
<p>For example, if you have a pod running Postgres on port 5432, you might create a service named "database" that will map connections to <code>database:5432</code> to port 5432 on your pod:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: database
spec:
selector:
app: postgres
ports:
- name: postgres
port: 5432
</code></pre>
<p>There are two main parts to <code>service.spec</code>:</p>
<ul>
<li><p><code>selector</code> identifies the pod or pods that actually provide the
service. Here, we're saying that this service will round-robin among
any pods with label <code>app</code> equal to <code>postgres</code>.</p>
</li>
<li><p><code>ports</code> describes the ports on which we listen and the corresponding
port in the pod. Here, we're mapping port <code>5432</code> to a port named
<code>postgres</code> in matching pods. That assumes you've set up your ports
with names, like this:</p>
<pre><code>...
ports:
- name: postgres
containerPort: 5432
...
</code></pre>
<p>If you haven't assigned names to your ports, you can use
<code>targetPort</code> instead in your service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: database
spec:
selector:
app: postgres
ports:
- targetPort: 5432
port: 5432
</code></pre>
</li>
</ul>
<p>With the above service in place, pods in the same namespace can connect to the host <code>database</code> on port <code>5432</code> in order to interact with Postgres.</p>
<blockquote>
<p>if I have this said service, what do I need to do/use to send the files?</p>
</blockquote>
<p>If you have two pods (let's call them <code>pod1</code> and <code>pod2</code>), and you want each pod to initiate connections to the other pod, create two services, one with a selector that matches <code>pod1</code> and the other with a selector that matches <code>pod2</code>.</p>
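<p>For example, here is a minimal sketch of those two services, assuming your pods carry the labels <code>app: pod1</code> and <code>app: pod2</code> and listen on port 6666 like the service in your Python example:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: pod1
spec:
  selector:
    app: pod1
  ports:
    - port: 6666
---
apiVersion: v1
kind: Service
metadata:
  name: pod2
spec:
  selector:
    app: pod2
  ports:
    - port: 6666
</code></pre>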
<p>The name you give your service is the hostname to which you will connect from inside a pod.</p>
| larsks |
<p>I am a complete newbie to kubernetes and have set up my own on-premises cluster consisting of one master and two workers, using kubeadm, on some virtual machines.
I'd like to deploy a sample web application to get some experience working with docker and containers.</p>
<p>Now, once I finish using this sample application and am ready to set up something of use, is there a clean up involved?
How do I "uninstall" the previous software and remove the old containers/images and "install" a new application on the cluster?</p>
<p>I hope this is clear. I'm not entirely sure if what I'm asking is right so please do correct me. Thanks in advance!</p>
<p>I've only set up the cluster so far and couldn't find anything online about "uninstalling" an app on a cluster. All I found was resources pointing to <code>kubeadm reset</code>.</p>
| gis_iguess | <blockquote>
<p>How do I "uninstall" the previous software and remove the old containers/images and "install" a new application on the cluster?</p>
</blockquote>
<p>How did you deploy the application in the first place? Typically, you do the reverse of whatever you did to deploy your manifests...</p>
<ul>
<li><p>If you used <code>kubectl apply -f <some_file></code> then to delete things you would run <code>kubectl delete -f <some_file></code></p>
</li>
<li><p>If you're using <a href="https://kustomize.io" rel="nofollow noreferrer">kustomize</a> (humble opinion: you should be), then instead of <code>kubectl apply -k .</code> you would <code>kubectl delete -k .</code>.</p>
</li>
<li><p>If you installed something using <code>helm install</code>, you would remove it with <code>helm uninstall</code></p>
</li>
</ul>
<p>Etc.</p>
<p>And of course, you could also delete the namespace into which your application was deployed, which will remove all resources in the namespace.</p>
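<p>For example, if the sample application lives in its own namespace (the name <code>sample-app</code> here is just a placeholder), cleaning up is a one-liner:</p>
<pre><code>kubectl get all -n sample-app      # see what is in there first
kubectl delete namespace sample-app
</code></pre>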
| larsks |
<p>I can't find any existing issues about this.</p>
<p>I have to add several custom HTTP headers to access my dedicated api-server proxy, but I have no clues right now. Did I miss something?</p>
| AndyChow | <p>This is a dirty hard coded hack to show you how to get the outcome your looking for it's not a fully vetted solution. This method will compile a new version of kubectl that will add your needed headers. Maybe it will at least give you a idea to run with.</p>
<p>The reason I wanted to do this is because I put my k8s api endpoint on the internet and safeguarded it with Cloudflare Access. To allow Cloudflare access to let me get past the steel wall I needed to pass in two headers one for my client id and the other for client secret. This ended up working like a charm and is one case someone may want to add custom headers.</p>
<p><strong>Steps:</strong></p>
<ul>
<li>I assume you have Go installed and setup, if not go do that now.</li>
<li>git clone <a href="https://github.com/kubernetes/kubernetes.git" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes.git</a> (could take a while, it's pretty big)</li>
<li>cd kubernetes/staging/src/k8s.io/client-go/transport/</li>
<li>Open file round_trippers.go in your favorite code editor</li>
<li>Search for <code>func (rt *userAgentRoundTripper) RoundTrip(req *http.Request) (*http.Response, error)</code></li>
<li>Add your needed headers by adding lines like this <code>req.Header.Set("Bob-Is", "cool")</code></li>
<li>cd back to root folder kubernetes/</li>
<li>cd cmd/kubectl/</li>
<li>go build -o custom-kubectl .</li>
<li>now test it with ./custom-kubectl get ns --v=9</li>
<li>in that output, look for the header you added to the REST calls to the K8s API; you should see <code>-H "Bob-Is: cool"</code> in the output</li>
<li>To make this less of a hack, maybe see if there's a way to create a kubectl plugin that does this for you, or ask the kind folks in the k8s community how you can make this hacky method a bit cleaner, or whether there's a reason adding custom headers isn't a good idea. Worst case, parameterize your custom kubectl build to accept a new flag such as --custom-request-headers and tidy it up a bit.</li>
</ul>
| Kuberchaun |
<p>Sorting by simple keys is straightforward and easy. Both of these work just fine.</p>
<pre><code>kubectl get nodes --sort-by=.metadata.name -o wide
kubectl get nodes --sort-by=.metadata.labels.affinity -o wide
</code></pre>
<p>I'm trying to now sort by complex keys such as .metadata.labels."clm.status" or, more importantly, .metadata.labels."kubernetes.io/roles".</p>
<p>Functionally speaking, I am able to sort by kubernetes.io/roles a different way, i.e., sort plus other Linux commands, so please don't give me non-k8s alternatives. I just want to see how to sort using the native k8s <em>get command</em> with these complex keys.</p>
<p>I tried many permutations, including but not limited to:</p>
<pre><code>$ kubectl get nodes --sort-by=.metadata.labels.kubernetes.io.roles -o wide
No resources found
$ kubectl get nodes --sort-by=.metadata.labels.'kubernetes.io.roles' -o wide
No resources found
$ kubectl get nodes --sort-by=.metadata.labels.'kubernetes.io/roles' -o wide
No resources found
$ kubectl get nodes --sort-by=.metadata.labels."kubernetes.io/roles" -o wide
No resources found
</code></pre>
| mbsf | <p>It looks like the solution is to escape the dots in your label with <code>\</code>, so for example:</p>
<pre><code>kubectl get nodes \
  --sort-by=.metadata.labels.kubernetes\\.io/roles
</code></pre>
<p>Or if you prefer, use single quotes on the outside to avoid having to escape the escape character:</p>
<pre><code>kubectl get nodes \
  --sort-by='.metadata.labels.kubernetes\.io/roles'
</code></pre>
| larsks |
<p>I have multiple apps based on the same chart deployed with Helm. Let's imagine you deploy your app multiple times with different configurations:</p>
<pre class="lang-sh prettyprint-override"><code>helm install myapp-01 mycharts/myapp
helm install myapp-02 mycharts/myapp
helm install myapp-03 mycharts/myapp
</code></pre>
<p>And after I update the chart files, I want to update all the releases, or maybe a certain range of releases. I managed to create a PoC script like this:</p>
<pre class="lang-sh prettyprint-override"><code>helm list -f myapp -o json | jq -r '.[].name' | while read i; do helm upgrade ${i} mycharts/myapp; done
</code></pre>
<p>While this works I will need to do a lot of things to have full functionality and error control.
Is there any CLI tool or something I can use in a CI/CD environment to update a big number of releases (say hundreds of them)? I've been investigating Rancher and Autohelm, but I couldn't find such functionality.</p>
| jmservera | <p>Thanks to the tip provided by <a href="https://stackoverflow.com/users/213269/jonas">@Jonas</a> I've managed to create a simple structure to deploy and update lots of pods with the same image base.</p>
<p>I created a folder structure like this:</p>
<pre><code>├── kustomization.yaml
├── base
│   ├── deployment.yaml
│   ├── kustomization.yaml
│   ├── namespace.yaml
│   └── service.yaml
└── overlays
    ├── one
    │   ├── deployment.yaml
    │   └── kustomization.yaml
    └── two
        ├── deployment.yaml
        └── kustomization.yaml
</code></pre>
<p>So the main trick here is to have a <code>kustomization.yaml</code> file in the main folder that points to every app:</p>
<pre class="lang-yaml prettyprint-override"><code>resources:
- overlays/one
- overlays/two
namePrefix: winnp-
</code></pre>
<p>Then in the <code>base/kustomization.yaml</code> I point to the base files:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- service.yaml
- deployment.yaml
- namespace.yaml
</code></pre>
<p>And then in each app I use namespaces, suffixes and commonLabels for the deployments and services, and a patch to rename the base namespace:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ns-one
nameSuffix: "-one"
commonLabels:
app: vbserver-one
bases:
- ../../base
patchesStrategicMerge:
- deployment.yaml
patches:
- target:
version: v1 # apiVersion
kind: Namespace
name: base
patch: |-
- op: replace
path: /metadata/name
value: ns-one
</code></pre>
<p>Now, with a simple command I can deploy or modify all the apps:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl apply -k .
</code></pre>
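<p>If you want to review what will be applied before touching the cluster, <code>kubectl kustomize</code> renders all the manifests without applying them:</p>
<pre><code>kubectl kustomize . | less
</code></pre>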
<p>So to update the image I only have to change the <code>deployment.yaml</code> file with the new image and run the command again.</p>
<p>I uploaded a full example of what I did in this <a href="https://github.com/jmservera/legacyvb6ink8s/tree/main/aks/kustomize" rel="nofollow noreferrer">GitHub repo</a></p>
| jmservera |
<p>What are the defaults for the Kubernetes <code>CrashLoopBackOff</code>?</p>
<p>Say, I have a pod:</p>
<pre><code>kubectl run mynginx --image nginx -- echo hello
</code></pre>
<p>And I inspect its status:</p>
<pre><code>kubectl get pods -w
NAME READY STATUS RESTARTS AGE
mynginx 0/1 Pending 0 0s
mynginx 0/1 Pending 0 0s
mynginx 0/1 ContainerCreating 0 0s
mynginx 0/1 Completed 0 2s
mynginx 0/1 Completed 1 4s
mynginx 0/1 CrashLoopBackOff 1 5s
mynginx 0/1 Completed 2 20s
mynginx 0/1 CrashLoopBackOff 2 33s
mynginx 0/1 Completed 3 47s
mynginx 0/1 CrashLoopBackOff 3 59s
mynginx 0/1 Completed 4 97s
mynginx 0/1 CrashLoopBackOff 4 109s
</code></pre>
<p>This is "expected". Kubernetes starts a pod, it quits "too fast", Kubernetes schedules it again and then Kubernetes sets the state to <code>CrashLoopBackOff</code>.</p>
<p>Now, if I start a pod slightly differently:</p>
<pre><code>kubectl run mynginx3 --image nginx -- /bin/bash -c "sleep 10; echo hello"
</code></pre>
<p>I get the following</p>
<pre><code>kubectl get pods -w
NAME READY STATUS RESTARTS AGE
mynginx3 0/1 Pending 0 0s
mynginx3 0/1 Pending 0 0s
mynginx3 0/1 ContainerCreating 0 0s
mynginx3 1/1 Running 0 2s
mynginx3 0/1 Completed 0 12s
mynginx3 1/1 Running 1 14s
mynginx3 0/1 Completed 1 24s
mynginx3 0/1 CrashLoopBackOff 1 36s
mynginx3 1/1 Running 2 38s
mynginx3 0/1 Completed 2 48s
mynginx3 0/1 CrashLoopBackOff 2 62s
mynginx3 1/1 Running 3 75s
mynginx3 0/1 Completed 3 85s
mynginx3 0/1 CrashLoopBackOff 3 96s
mynginx3 1/1 Running 4 2m14s
mynginx3 0/1 Completed 4 2m24s
mynginx3 0/1 CrashLoopBackOff 4 2m38s
</code></pre>
<p>This is also expected.</p>
<p>But say I set <code>sleep</code> to 24 hours: would I still get the same <code>CrashLoopBackOff</code> after the first couple of pod exits, and then after each subsequent exit?</p>
| Maksim Sorokin | <p>Based on <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy" rel="nofollow noreferrer">these docs</a>:</p>
<blockquote>
<p>The restartPolicy applies to all containers in the Pod. restartPolicy only refers to restarts of the containers by the kubelet on the same node. After containers in a Pod exit, the kubelet restarts them with an exponential back-off delay (10s, 20s, 40s, β¦), that is capped at five minutes. Once a container has executed for 10 minutes without any problems, the kubelet resets the restart backoff timer for that container.</p>
</blockquote>
<p>I think that means that anything that executes for longer than 10 minutes before exiting will not trigger a <code>CrashLoopBackOff</code> status.</p>
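<p>A quick (illustrative, untested) way to check this yourself is to mirror the examples above with a sleep just over ten minutes:</p>
<pre><code># The container runs for ~10m10s before exiting, so the kubelet should reset
# its backoff timer each time; the restart delay stays near the initial 10s
# instead of growing toward the five-minute cap.
kubectl run mynginx4 --image nginx -- /bin/bash -c "sleep 610; echo hello"
kubectl get pods mynginx4 -w
</code></pre>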
| larsks |
<p>I'm currently trying to set up heketi on Kubernetes, and I need to create an endpoint like so (I'm using Ansible):</p>
<pre><code>- hosts: 'masters'
remote_user: kube
become: yes
become_user: kube
vars:
ansible_python_interpreter: /usr/bin/python3
tasks:
- name: "Create gluster endpoints on kubernetes master"
kubernetes.core.k8s:
state: present
definition:
apiVersion: v1
kind: Endpoints
metadata:
name: glusterfs-cluster
labels:
storage.k8s.io/name: glusterfs
storage.k8s.io/part-of: mine
storage.k8s.io/created-by: username
subsets:
- addresses:
- ip: 10.0.0.4
hostname: gluster1
- ip: 10.0.0.5
hostname: gluster2
- ip: 10.0.0.6
hostname: gluster3
- ip: 10.0.0.7
hostname: gluster4
ports:
- port: 1
</code></pre>
<p>When i run ansible playbook on this i am getting this error:</p>
<blockquote>
<p>Failed to create object: Namespace is required for v1.Endpoints</p>
</blockquote>
<p>I can't find any information about what this refers to. What is the namespace supposed to be?</p>
| noname | <p>An <code>Endpoints</code> resource (like a <code>Pod</code>, <code>Service</code>, <code>Deployment</code>, etc) is a <em>namespaced</em> resource: it cannot be created globally; it must be created inside a specific <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/" rel="nofollow noreferrer">namespace</a>.</p>
<p>We can't answer the question, "what is the namespace supposed to be?", because generally this will be something like "the same namespace as the resources that will rely on this <code>Endpoints</code> resource".</p>
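<p>In practice that just means adding a <code>namespace</code> to the resource's <code>metadata</code> in your task; <code>default</code> below is only an example, and the rest of the definition stays as you had it:</p>
<pre class="lang-yaml prettyprint-override"><code>definition:
  apiVersion: v1
  kind: Endpoints
  metadata:
    name: glusterfs-cluster
    namespace: default   # any existing namespace your consumers live in
    labels:
      storage.k8s.io/name: glusterfs
  subsets:
    # ...addresses and ports as before
</code></pre>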
| larsks |
<p>I would like to know if there is a way to launch a pod from a yaml but with a custom name.</p>
<p>The objective is to append something like a user ID to the pod name, giving me an unambiguous way to find the pod and delete it when the user requires it.</p>
<p>It would be something like:</p>
<pre><code>kubectl apply -f ./foo.yaml -name custom-name
</code></pre>
<p>Thanks in advance!</p>
| AdCerros | <p>You can't do that with <code>kubectl</code>, but you can:</p>
<ul>
<li><p>Perform the substitution with something like <code>sed</code> and pipe the result to <code>kubectl apply</code>:</p>
<pre><code>sed 's/PODNAME/custom-name/' foo.yaml |
kubectl apply -f-
</code></pre>
<p>(This assumes that you have <code>name: PODNAME</code> in the original manifest.)</p>
</li>
<li><p>Use a YAML processing tool to make the change. For example, using <a href="https://kislyuk.github.io/yq/" rel="nofollow noreferrer"><code>yq</code></a> (note that there are multiple commands with that name; hence the link).</p>
<pre><code>yq '.metadata.name = "custom-name"' foo.yaml |
kubectl apply -f-
</code></pre>
<p>This has the advantage that you don't need a specific sentinel name like <code>PODNAME</code> in the source file.</p>
</li>
<li><p>Create a Helm template and deploy the manifest(s) with <code>helm</code>, in which case you could set the value on the command line.</p>
<pre><code>helm install <release-name> <chart> --set-string podname=custom-name
</code></pre>
</li>
<li><p>Use some other templating tool, such as Ansible's <a href="https://docs.ansible.com/ansible/latest/collections/kubernetes/core/k8s_module.html" rel="nofollow noreferrer"><code>k8s</code> module</a>.</p>
</li>
</ul>
| larsks |
<p>I use <code>.kube/config</code> to access the Kubernetes API on a server. I am wondering: does the token in the config file ever expire? How can I prevent it from expiring?</p>
| cometta | <p>The solution is to use a Kubernetes service account.</p>
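<p>A minimal sketch of that approach on a recent cluster (v1.24+); the account name and role binding below are only examples:</p>
<pre><code># create a service account and give it (read-only) cluster access
kubectl create serviceaccount automation -n default
kubectl create clusterrolebinding automation-view \
  --clusterrole=view --serviceaccount=default:automation

# request a token for it (duration is up to you and your cluster's limits)
kubectl create token automation -n default --duration=24h
</code></pre>
<p>You can then put that token into a <code>user</code> entry in your kubeconfig, or mount the service account into the pods that need API access.</p>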
| cometta |
<p>I am trying to connect from a customized Helm chart to a fully managed PostgreSQL service on Azure, and I have to set the database connection URL for the app I want to deploy.</p>
<p>Which value should <code>DATABASE_URL</code> be in the Helm chart deployment?
My situation is the following:</p>
<ul>
<li>I want to use an external Azure-managed PostgreSQL instance and not the PostgreSQL container that comes with the Helm chart.
Consequently, I modified the <code>DATABASE_URL</code> value, <a href="https://github.com/pivotal/postfacto/blob/master/deployment/helm/templates/deployment.yaml#L72-L73" rel="nofollow noreferrer">given here</a> to connect to the container inside K8s, in this way:</li>
</ul>
<pre><code> name: DATABASE_URL
# value: "postgres://{{ .Values.postgresql.postgresqlUsername }}:$(POSTGRESQL_PASSWORD)@{{ .Release.Name }}-postgresql"
value: "postgres://nmbrs@postgresql-nmb-psfc-stag:$(POSTGRESQL_PASSWORD)@postgresql-nmb-psfc-stag.postgres.database.azure.com/postfacto-staging-db"
</code></pre>
<p>but I am getting this error</p>
<pre><code>/usr/local/lib/ruby/2.7.0/uri/generic.rb:208:in `initialize': the scheme postgres does not accept registry part: nmbrs@postgresql-nmb-psfc-stag:mypassword*@postgresql-nmb-psfc-stag.postgres.database.azure.com (or bad hostname?) (URI::InvalidURIError)
</code></pre>
<p>What should the real <code>DATABASE_URL</code> value be if I want to connect to a fully managed PostgreSQL service?</p>
<p>What is the equivalent value to this?</p>
<pre><code>value: "postgres://{{ .Values.postgresql.postgresqlUsername }}:$(POSTGRESQL_PASSWORD)@{{ .Release.Name }}-postgresql"
</code></pre>
<p>I mean, is it:</p>
<pre><code>postgres://<username>:<my-pg-password>@<WHICH VALUE SHOULD BE HERE?>
</code></pre>
<p>What is the value of <code>{{ .Release.Name }}-postgresql</code>?</p>
<p>Just for the record, my customize <code>postfacto/deployment/helm/templates/deployment.yaml</code> is <a href="https://gist.github.com/bgarcial/22ac1722a778cc17cc57f05a20e46ad1" rel="nofollow noreferrer">this</a></p>
<p><strong>UPDATE</strong></p>
<p>I changed the value for this</p>
<pre><code>- name: DATABASE_URL
# value: "postgres://{{ .Values.postgresql.postgresqlUsername }}:$(POSTGRESQL_PASSWORD)@{{ .Release.Name }}-postgresql"
# value: "postgres://nmbrs@postgresql-nmb-psfc-stag:$(POSTGRESQL_PASSWORD)@postgresql-nmb-psfc-stag.postgres.database.azure.com:5432/postfacto-staging-db"
value: "postgres://postgresql-nmb-psfc-stag.postgres.database.azure.com/postfacto-staging-db"
</code></pre>
<p>And I got a different error:</p>
<pre><code>Caused by:
PG::ConnectionBad: FATAL: Invalid Username specified. Please check the Username and retry connection. The Username should be in <username@hostname> format.
FATAL: Invalid Username specified. Please check the Username and retry connection. The Username should be in <username@hostname> format.
</code></pre>
<p>But it is not clear what the syntax format should be, since <a href="https://medium.com/avmconsulting-blog/how-to-deploy-rails-application-to-kubernetes-da8f23d45c6b" rel="nofollow noreferrer">this article</a> says:</p>
<blockquote>
<p>Next, encode the database credentials. Use the format DB_ADAPTER://USER:PASSWORD@HOSTNAME/DB_NAME. If you are using mysql with a user βdeployβ and a password βsecretβ on 127.0.0.1 and have a database railsapp, run</p>
</blockquote>
<p>The format <code>DB_ADAPTER://USER:PASSWORD@HOSTNAME/DB_NAME</code> is the same one I was using at the beginning.</p>
| bgarcial | <p>I think the problem with your connection string is that its <em>username</em> contains the special character <code>@</code>, which breaks the connection-string format and causes the validation error.</p>
<p>Your value</p>
<pre><code>- name: DATABASE_URL
value: "postgres://nmbrs@postgresql-nmb-psfc-stag:$(POSTGRESQL_PASSWORD)@postgresql-nmb-psfc-stag.postgres.database.azure.com/postfacto-staging-db"
</code></pre>
<p>You can try to do an URL encoding for the <em>username</em> part like</p>
<pre><code>- name: DATABASE_URL
value: "postgres://nmbrs%40postgresql-nmb-psfc-stag:$(POSTGRESQL_PASSWORD)@postgresql-nmb-psfc-stag.postgres.database.azure.com/postfacto-staging-db"
</code></pre>
| Arun P Johny |
<p>When running <code>kubectl apply -f ./sftest.yml</code>, I get the following error:</p>
<blockquote>
<p>Error from server (BadRequest): error when creating "./sftest.yaml": Service in version "v1" cannot be handled as a Service: json: cannot unmarshal object into Go struct field ServiceSpec.spec.ports of type []v1.ServicePort</p>
</blockquote>
<p>This happens when I'm trying to apply the YAML.
This is my first time working with k8s, and this is the final problem that I can't solve.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-sf
labels:
app: nginx-sf
spec:
selector:
matchLabels:
app: nginx-sf-pod
replicas: 3
template:
metadata:
labels:
app: nginx-sf-pod
spec:
containers:
name: 'nginx-sf-container'
image: 'nginx:1.21.1-alpine'
ports:
containerPort: 80
command: ['sh', '-c', 'echo Hello Kubernetes from the Deployment! && sleep 3600']
volumeMounts:
name: config-volume
mountPath: '/etc/nginx/nginx.conf'
volumeMounts:
name: auth_basic
mountPath: '/etc/nginx/conf.d/.htpasswd'
readOnly: true
volumes:
name: config-volume
configMap:
name: nginx.conf
name: auth_basic
secret:
secretName: auth-basic
kind: Service
apiVersion: v1
metadata:
name: sf-webserver
spec:
selector:
app: nginx-sf
ports:
protocol: TCP
port: 80
targetPort: 80
</code></pre>
| mordgren | <p>In your Service manifest, the value of <code>ports</code> needs to be an <em>array</em>, not a mapping; see the example <a href="https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service" rel="nofollow noreferrer">here</a>. That would look like:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: sf-webserver
spec:
selector:
app: nginx-sf
ports:
- protocol: TCP
port: 80
targetPort: 80
</code></pre>
<hr />
<p>You may have another problem; it's unclear from your question. If you have both your Deployment and the Service contained in the same file <code>sftest.yml</code>, then you need to put a YAML document separator (<code>---</code>) between them:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-sf
labels:
app: nginx-sf
spec:
selector:
matchLabels:
app: nginx-sf-pod
replicas: 3
template:
metadata:
labels:
app: nginx-sf-pod
spec:
containers:
name: 'nginx-sf-container'
image: 'nginx:1.21.1-alpine'
ports:
containerPort: 80
command: ['sh', '-c', 'echo Hello Kubernetes from the Deployment! && sleep 3600']
volumeMounts:
name: config-volume
mountPath: '/etc/nginx/nginx.conf'
volumeMounts:
name: auth_basic
mountPath: '/etc/nginx/conf.d/.htpasswd'
readOnly: true
volumes:
name: config-volume
configMap:
name: nginx.conf
name: auth_basic
secret:
secretName: auth-basic
---
apiVersion: v1
kind: Service
metadata:
name: sf-webserver
spec:
selector:
app: nginx-sf
ports:
- protocol: TCP
port: 80
targetPort: 80
</code></pre>
| larsks |
<p>I'm trying to install Jenkins X on an existing Kubernetes cluster (GKE), using <code>jx boot</code>, but it always gives me the error <code>trying to execute 'jx boot' from a non requirements repo</code></p>
<p>In fact, I have tried to use <code>jx install</code>, and it works, but this command is already marked as <strong><em>deprecated</em></strong>, even though I see it's still the method to use on Jenkins X's GitHub page.</p>
<p>Then another detail ... I'm in fact creating the cluster using Terraform because I don't like the idea that Jenkins X creates the cluster for me. And I want to use Terraform to install Jenkins X as well but that would be another question. :)</p>
<p>So how to install using <code>jx boot</code> and what is a <code>non requirements repo</code> ?</p>
<p>Thanks</p>
| Lewen | <p>Are you trying to execute <code>jx boot</code> from within an existing git repository? Try changing into an empty, non-git directory and running <code>jx boot</code> from there.</p>
<p><code>jx</code> wants to clone the <a href="https://github.com/jenkins-x/jenkins-x-boot-config" rel="noreferrer">jenkins-x-boot-config</a> and create your dev repository. It cannot do so from within an existing repository.</p>
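<p>For example (the directory name is arbitrary):</p>
<pre><code>mkdir jx-boot
cd jx-boot
jx boot
</code></pre>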
| Hardy |
<p>The following says "I can" use <code>get</code> SubjectAccessReview, but then it returns a MethodNotAllowed error. Why?</p>
<pre><code>β― kubectl auth can-i get SubjectAccessReview
Warning: resource 'subjectaccessreviews' is not namespace scoped in group 'authorization.k8s.io'
yes
β― kubectl get SubjectAccessReview
Error from server (MethodNotAllowed): the server does not allow this method on the requested resource
</code></pre>
<pre><code>β― kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.25.2
Kustomize Version: v4.5.7
Server Version: v1.25.3+k3s1
</code></pre>
<p>If I cannot <code>get</code>, then can-i should NOT return <code>yes</code>. Right?</p>
| jersey bean | <p><code>kubectl auth can-i</code> is not wrong.</p>
<p>The <code>can-i</code> command is checking cluster RBAC (does there exist a role and rolebinding that grant you access to that operation). It doesn't know or care about "supported methods". Somewhere there is a role that grants you the <code>get</code> verb on those resources...possibly implicitly e.g. via <code>resources: ['*']</code>.</p>
<p>For example, I'm accessing a local cluster with <code>cluster-admin</code> privileges, which means my access is controlled by this role:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cluster-admin
rules:
- apiGroups:
- '*'
resources:
- '*'
verbs:
- '*'
- nonResourceURLs:
- '*'
verbs:
- '*'
</code></pre>
<p>The answer to <code>kubectl auth can-i get <anything></code> is going to be <code>yes</code>, regardless of whether or not that operation makes sense for a given resource.</p>
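<p>For what it's worth, <code>SubjectAccessReview</code> is a create-only resource: you <em>submit</em> one to ask the API server an authorization question rather than listing stored objects, which is why <code>get</code> comes back with <code>MethodNotAllowed</code>. A rough sketch (the user and attributes are just examples):</p>
<pre><code>kubectl create -f - -o yaml <<EOF
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: jane
  resourceAttributes:
    verb: get
    resource: pods
    namespace: default
EOF
</code></pre>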
| larsks |
<p><strong>Heads up</strong>: I need Hyper-V as VM driver because I want to be able to use the ingress addon; using Docker as the driver will not allow the use of addons in Windows.</p>
<p>I am using Minikube v1.11.0 and Kubernetes v1.18.3. When I am trying to create and launch a Minikube cluster according to <a href="https://medium.com/@JockDaRock/minikube-on-windows-10-with-hyper-v-6ef0f4dc158c" rel="nofollow noreferrer">this tutorial</a> with Hyper-V in PowerShell it keeps hanging on 'Creating hyperv VM:</p>
<pre><code>PS C:\WINDOWS\system32> minikube start --vm-driver hyperv --hyperv-virtual-switch "Primary Virtual Switch"
* minikube v1.11.0 on Microsoft Windows 10 Pro 10.0.18363 Build 18363
- KUBECONFIG=~/.kube/config
* Using the hyperv driver based on user configuration
* Starting control plane node minikube in cluster minikube
* Creating hyperv VM (CPUs=4, Memory=4096MB, Disk=20000MB) ...
</code></pre>
<p>After about 10 minutes it goes further and crashes with these errors:</p>
<pre><code>* Stopping "minikube" in hyperv ...
* Powering off "minikube" via SSH ...
* Deleting "minikube" in hyperv ...
! StartHost failed, but will try again: creating host: create host timed out in 240.000000 seconds
E0605 19:02:43.905739 30748 main.go:106] libmachine: [stderr =====>] : Hyper-V\Get-VM : Hyper-V was unable to find a virtual machine with name "minikube".
At line:1 char:3
+ ( Hyper-V\Get-VM minikube ).state
+ ~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (minikube:String) [Get-VM], VirtualizationException
+ FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.GetVM
...
Multiple E0605 errors
...
* Failed to start hyperv VM. "minikube start" may fix it: creating host: create host timed out in 240.000000 seconds
*
* [CREATE_TIMEOUT] error provisioning host Failed to start host: creating host: create host timed out in 240.000000 seconds
* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
* Related issue: https://github.com/kubernetes/minikube/issues/7072
</code></pre>
<p>What to do?</p>
| marcuse | <p>Fix on an Azure VM running Windows Server 2019:</p>
<ol start="0">
<li><p>Check if minikube VM has an IP Address in Networking tab</p>
<ul>
<li>If <em>NO</em> IP address and no DHCP is running in VM, then go to next step</li>
</ul>
</li>
<li><p>Create <a href="https://learn.microsoft.com/en-us/azure/virtual-machines/windows/nested-virtualization#set-up-internet-connectivity-for-the-guest-virtual-machine" rel="nofollow noreferrer">new InternalNAT switch</a>:</p>
</li>
</ol>
<p>Powershell:</p>
<pre><code>New-VMSwitch -Name "InternalNAT" -SwitchType Internal
Get-NetAdapter
# Take note of the "ifIndex" for the virtual switch you just created, assuming 13
New-NetIPAddress -IPAddress 192.168.0.1 -PrefixLength 24 -InterfaceIndex 13
New-NetNat -Name "InternalNat" -InternalIPInterfaceAddressPrefix 192.168.0.0/24
</code></pre>
<ol start="2">
<li>Create DHCP Server</li>
</ol>
<ul>
<li>Create a new scope within the 192.168.0.0/24 range configured above (note down the range you use); see the PowerShell sketch after this list</li>
</ul>
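<p>One way to do step 2 from PowerShell; this is only a sketch, and assumes the DHCP Server role is not installed yet and that the scope range sits inside 192.168.0.0/24:</p>
<pre><code>Install-WindowsFeature -Name DHCP -IncludeManagementTools
Add-DhcpServerv4Scope -Name "minikube" -StartRange 192.168.0.100 `
    -EndRange 192.168.0.200 -SubnetMask 255.255.255.0
</code></pre>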
<ol start="3">
<li>Delete and re-create minikube and use the "InternalNAT" switch</li>
</ol>
<pre><code>minikube start --vm-driver hyperv --hyperv-virtual-switch "InternalNAT"
</code></pre>
| Lydon Ch |
<p>Alright, various permutations of this question have been asked and I feel terrible asking; I'm throwing the towel in and was curious if anyone could point me in the right direction (or point out where I'm wrong). I went ahead and tried a number of <a href="https://github.com/kubernetes/ingress-nginx/tree/main/docs/examples/rewrite#examples" rel="nofollow noreferrer">examples</a> from the docs, but to no avail (see below).</p>
<p>I'm trying to route traffic to the appropriate location under Kubernetes using an Ingress controller.</p>
<h3>Server Setup</h3>
<p>I have a server, <code>myserver.com</code> and three services running at:</p>
<p><code>myserver.com/services/</code></p>
<p><code>myserver.com/services/service_1/</code></p>
<p><code>myserver.com/services/service_2/</code></p>
<p>Note that I'm not doing anything (purposefully) to <code>myserver.com/</code>.</p>
<p>At each of the three locations, there's a webapp running. For example, <code>myserver.com/services/service_2</code> needs to load css files at <code>myserver.com/services/service_2/static/css</code>, etc...</p>
<h3>Kubernetes Ingress</h3>
<p>To manage the networking, I'm using a Kubernetes Ingress controller, which I've defined below. The CORS annotations aren't <em>super</em> relevant, but I've included them to clear up any confusion.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: myServices
namespace: myServices
annotations:
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, OPTIONS"
nginx.ingress.kubernetes.io/cors-allow-origin: '$http_origin'
nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
ingressClassName: nginx
tls:
- hosts:
- myserver.com
rules:
- host: myserver.com
http:
paths:
- path: /services
pathType: Prefix
backend:
service:
name: web-service
port:
number: 80
- path: /services/service_1(/|$)
pathType: Prefix
backend:
service:
name: web-service-1
port:
number: 80
- path: /services/service_2(/|$)
pathType: Prefix
backend:
service:
name: web-service-2
port:
number: 80
</code></pre>
<h3>Targets</h3>
<p>I noticed that one helpful thing to do is give some path examples. From the examples below <em>it looks like the paths aren't that complicated</em>. I <em>think</em> this is what I'm after. Note that I'd like each service to be able to resolve its css and image files.</p>
<pre><code>myserver.com/services -> myserver.com/services
myserver.com/services/xxx/xxx -> myserver.com/services/xxx/xxx
myserver.com/services/service_1 -> myserver.com/services/service_1
myserver.com/services/service_1/xxx/xxx -> myserver.com/services/service_1/xxx/xxx
myserver.com/services/service_2/xxx/xxx -> myserver.com/services/service_2/xxx/xxx
</code></pre>
<h3>Attempts</h3>
<p>I know that this issue has to do a lot with the <code>nginx.ingress.kubernetes.io/rewrite-target</code> rule and its interaction with the paths I've defined.</p>
<p>I <em>know</em> that I don't want <code>nginx.ingress.kubernetes.io/rewrite-target: $1</code> because that gives a 500 when visiting <code>myserver.com/services</code></p>
<p>I <em>know</em> that I don't want <code>nginx.ingress.kubernetes.io/rewrite-target: $1/$2</code> because when I visit <code>myserver.com/services/service_1</code> I actually get part of the content at <code>myserver.com/services</code> rendered on the page.</p>
<h4>SO Attempt 1</h4>
<p>I also attempted to replicate the accepted solution from <a href="https://stackoverflow.com/questions/65703968/kubernetes-ingress-nginx-use-regex-to-match-exact-url-path">this</a> question.</p>
<p>In this attempt I set</p>
<p><code>nginx.ingress.kubernetes.io/rewrite-target: "/$1"</code> and one of the service paths to</p>
<p><code>- path: /(services/service_1(?:/|$).*)</code></p>
<p>When I visit <code>myserver.com/services/service_1/xyz</code>, the HTML from <code>myserver.com/services/service_1</code> gets rendered.</p>
<h3>Concluding Thoughts</h3>
<p>Something ain't quite right with the path rewrite and paths rules. Any suggestions?</p>
| Thomas | <p>The problem you reported in your most recent comment is resolved by looking at the <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">rewrite example</a> in the nginx-ingress documentation.</p>
<p>The <code>rewrite-target</code> annotation configures the ingress such that matching paths will be rewritten to that value. Since you've specified a static value of <code>/</code>, anything matching your ingress rules will get rewritten to <code>/</code>, which is exactly the behavior you're seeing.</p>
<p>The solution is to capture the portion of the path we care about, and then use that in the <code>rewrite-target</code> annotation. For example:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: myservices
annotations:
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, OPTIONS"
nginx.ingress.kubernetes.io/cors-allow-origin: '$http_origin'
nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
ingressClassName: nginx
rules:
- host: myserver.com
http:
paths:
- path: /services/service_1(/|$)(.*)
pathType: Prefix
backend:
service:
name: webservice-service1
port:
number: 80
- path: /services/service_2(/|$)(.*)
pathType: Prefix
backend:
service:
name: webservice-service2
port:
number: 80
- path: /services(/|$)(.*)
pathType: Prefix
backend:
service:
name: webservice
port:
number: 80
</code></pre>
<p>Here, we've modified the match expression so that they look like:</p>
<pre><code> - path: /services/service_1(/|$)(.*)
</code></pre>
<p>The second capture group <code>(.*)</code> captures everything after the path
portion that matches literally. We then use that capture group (<code>$2</code>,
because it's the second group) in the <code>rewrite-target</code> annotation:</p>
<pre><code> nginx.ingress.kubernetes.io/rewrite-target: /$2
</code></pre>
<p>With this configuration in place, a request to <code>/services/service_2</code>
results in:</p>
<pre><code>This is service2.
</code></pre>
<p>But a request to <code>/services/service_2/foo/bar</code> results in:</p>
<pre><code><html><head><title>404 Not Found</title></head><body>
<h1>Not Found</h1>
The URL you requested (/foo/bar) was not found.
<hr>
</body></html>
</code></pre>
<p>And looking at the backend server logs, we see:</p>
<pre><code>10.42.0.32 - - [21/Jan/2022:20:33:23 +0000] "GET / HTTP/1.1" 200 211 "" "curl/7.79.1"
10.42.0.32 - - [21/Jan/2022:20:33:45 +0000] "GET /foo/bar HTTP/1.1" 404 311 "" "curl/7.79.1"
</code></pre>
<p>I've updated <a href="https://github.com/larsks/k3s-nginx-example/tree/main" rel="nofollow noreferrer">my example repository</a> to match this configuration.</p>
| larsks |
<p>I would like to create an argoCD application right from the git repository, ie the gitOps way. I already created a CRD file for the application which looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: my-service
namespace: argocd
spec:
destination:
namespace: default
server: https://kubernetes.default.svc
syncPolicy:
syncOptions:
- CreateNamespace=true
project: default
source:
path: clusters/helm-chart
repoURL: https://github.com/user/my-repo.git
targetRevision: HEAD
helm:
values: |
image:
repository: user/my-image
pullPolicy: Always
tag: xxx
</code></pre>
<p>My current workflow is to apply this CRD to my cluster with <code>k apply -f application.yaml</code>.</p>
<p><strong>Question:</strong> how can I instruct ArgoCD to go and sync/create the application I have defined at <code>https://github.com/user/my-repo.git</code> without first creating that application "manually"?</p>
| Melkis H. | <p>At some point you have to manually apply a manifest to your ArgoCD instance.</p>
<p>You can limit that to a single manifest if you utilize the <a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/cluster-bootstrapping/" rel="noreferrer">app-of-apps</a> pattern, in which you have a repository that contains all your ArgoCD application manifests.</p>
<p>You can also create <a href="https://argocd-applicationset.readthedocs.io/en/stable/" rel="noreferrer">ApplicationSets</a> to automatically generate ArgoCD applications from templates based on the content of a git repository, the names of clusters registered with ArgoCD, and other data.</p>
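<p>As a sketch of the app-of-apps pattern, the only manifest you apply by hand is a single parent <code>Application</code> that points at a directory of child <code>Application</code> manifests (the repo URL and path below are placeholders):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: app-of-apps
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/user/my-repo.git   # placeholder
    path: apps        # directory containing your child Application manifests
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated: {}
</code></pre>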
| larsks |
<p>In kubernetes pod yaml specification file, you can set a pod to use the host machine's network using <code>hostNetwork:true</code>.</p>
<p>I can't find anywhere a good (suitable for a beginner) explanation of what the <code>hostPID:true</code> and <code>hostIPC:true</code> options mean. Please could someone explain this, assuming little knowledge in linux networking and such. Thanks.</p>
<pre><code>spec:
template:
metadata:
labels:
name: podName
spec:
hostPID: true
hostIPC: true
hostNetwork: true
containers:
</code></pre>
<p>Source: <a href="https://github.com/kubernetes/kubernetes/blob/master/examples/newrelic/newrelic-daemonset.yaml" rel="noreferrer">github link here</a></p>
| mleonard | <p>they're roughly described within the <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#host-namespaces" rel="noreferrer">Pod Security Policies</a></p>
<blockquote>
<p><code>hostPID</code> - Use the hostβs pid namespace. Optional: Default to false.</p>
<p><code>hostIPC</code> - Use the hostβs ipc namespace. Optional: Default to false.</p>
</blockquote>
<p>Those are related to the <code>SecurityContext</code> of the <code>Pod</code>. You'll find some more information in the <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/auth/pod-security-context.md" rel="noreferrer">Pod Security design document</a>.</p>
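<p>A quick way to see the effect of <code>hostPID</code>, for example (an untested sketch; <code>--overrides</code> merges the extra field into the generated pod spec):</p>
<pre><code># With hostPID: true the container shares the node's PID namespace,
# so "ps" lists every process on the host, not just the container's.
kubectl run pid-test --rm -it --image=busybox \
  --overrides='{"apiVersion": "v1", "spec": {"hostPID": true}}' -- ps
</code></pre>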
| pagid |
<p>I'm new to Kubernetes and plan to use Google Kubernetes Engine. Hypothetically speaking, let's say I have a K8s cluster with 2 worker nodes. Each node would have its own pod housing the same application. This application will grab a file from some persistent volume and generate an output file that will be pushed back into a different persistent volume. Both pods in my cluster would be doing this continuously until there are no input files in the persistent volume left to be processed. Do the pods inherently know NOT to grab the same file that one pod is already using? If not, how would I be able account for this? I would like to avoid 2 pods using the same input file.</p>
| Adriano Matos | <blockquote>
<p>Do the pods inherently know NOT to grab the same file that one pod is already using?</p>
</blockquote>
<p>Pods are just processes. Two separate processes accessing files from a shared directory are going to run into conflicts unless they have some sort of coordination mechanism.</p>
<h2>Option 1</h2>
<p>Have one process whose job it is to enumerate the available files. Your two workers connect to this process and receive filenames via some sort of queue/message bus/etc. When they finish processing a file, they request the next one, until all files are processed. Because only a single process is enumerating the files and passing out the work, there's no option for conflict.</p>
<h2>Option 2</h2>
<p><em>In general</em>, renaming files is an atomic operation. Each worker creates a subdirectory within your PV. To claim a file, it renames the file into the appropriate subdirectory and then processes it. Because renames are atomic, even if both workers happen to pick the same file at the same time, only one will succeed.</p>
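<p>A minimal sketch of this in shell (the paths, file pattern, and <code>process</code> command are all placeholders):</p>
<pre><code># Each worker claims a file by renaming it into its own subdirectory.
# mv within a single filesystem is an atomic rename, so if two workers
# race for the same file, only one mv succeeds.
WORKER=${HOSTNAME:-worker-1}          # e.g. the pod name
mkdir -p "/input/claimed-$WORKER"
for f in /input/*.dat; do
  if mv "$f" "/input/claimed-$WORKER/" 2>/dev/null; then
    process "/input/claimed-$WORKER/$(basename "$f")"
  fi
done
</code></pre>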
<h2>Option 3</h2>
<p>If your files have some sort of systematic naming convention, you can divide the list of files between your two workers (e.g., "everything that ends in an even number is processed by worker 1, and everything that ends with an odd number is processed by worker 2").</p>
<hr />
<p>Etc. There are many ways to coordinate this sort of activity. The wikipedia entry on <a href="https://en.wikipedia.org/wiki/Synchronization_(computer_science)" rel="nofollow noreferrer">Synchronization</a> may be of interest.</p>
| larsks |
<p>I have a devops pipeline divided in three steps:</p>
<ul>
<li><code>kubectl apply -f configmap.yml</code></li>
<li><code>kubectl apply -f deployment.yml</code></li>
<li><code>kubectl rollout restart deployment/test-service</code></li>
</ul>
<p>I think that when the <code>configmap.yml</code> changes the <code>rollout restart</code> step is useful. But when only the <code>deployment.yml</code> changes, I'm worried that the "extra" <code>rollout restart</code> step is not useful and should be avoided.</p>
<p>Should I execute <code>rollout restart</code> only when the <code>configmap.yml</code> changes, or should I not worry about it?</p>
| freedev | <p>This isn't a direct answer, but it ended up being too long for a comment and I think it's relevant. If you were to apply your manifests using <a href="https://kustomize.io/" rel="nofollow noreferrer"><code>kustomize</code></a> (<em>aka</em> <code>kubectl apply -k</code>), then you get the following behavior:</p>
<ul>
<li><code>ConfigMaps</code> are generated with a content-based hash appended to their name</li>
<li>Kustomize substitutes the generated name into your <code>Deployment</code></li>
<li>This means the <code>Deployment</code> is only modified when the content of the <code>ConfigMap</code> changes, causing an implicit re-deploy of the pods managed by the <code>Deployment</code>.</li>
</ul>
<p>This largely gets you the behavior you want, but it would require some changes to your deployment pipeline.</p>
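<p>A minimal sketch of such a <code>kustomization.yaml</code> (the file and resource names are illustrative):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml              # references a ConfigMap named "test-service-config"
configMapGenerator:
  - name: test-service-config
    files:
      - config/application.properties
</code></pre>
<p>Kustomize emits the ConfigMap as <code>test-service-config-<hash></code> and rewrites the reference inside the Deployment, so a change to the config content produces a new ConfigMap name and an implicit rollout, while an unchanged config leaves the Deployment's pod template untouched.</p>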
| larsks |
<p>I just installed a new CentOS server with Docker</p>
<pre><code>Client:
Version: 1.13.1
API version: 1.26
Package version: <unknown>
Go version: go1.8.3
Git commit: 774336d/1.13.1
Built: Wed Mar 7 17:06:16 2018
OS/Arch: linux/amd64
Server:
 Version:         1.13.1
 API version:     1.26 (minimum version 1.12)
 Package version: <unknown>
 Go version:      go1.8.3
 Git commit:      774336d/1.13.1
 Built:           Wed Mar 7 17:06:16 2018
 OS/Arch:         linux/amd64
 Experimental:    false
</code></pre>
<p>And I can use the command <code>oc cluster up</code> to launch an OpenShift server</p>
<pre><code>oc cluster up --host-data-dir /data --public-hostname master.ouatrahim.com --routing-suffix master.ouatrahim.com
</code></pre>
<p>which gives the output</p>
<pre><code>Using nsenter mounter for OpenShift volumes
Using 127.0.0.1 as the server IP
Starting OpenShift using openshift/origin:v3.9.0 ...
OpenShift server started.
The server is accessible via web console at:
https://master.ouatrahim.com:8443
You are logged in as:
User: developer
Password: <any value>
To login as administrator:
oc login -u system:admin
</code></pre>
<p>And oc version gives the output </p>
<pre><code>oc v3.9.0+ba7faec-1
kubernetes v1.9.1+a0ce1bc657
features: Basic-Auth GSSAPI Kerberos SPNEGO
Server https://127.0.0.1:8443
openshift v3.9.0+0e3d24c-14
kubernetes v1.9.1+a0ce1bc657
</code></pre>
<p>But when I try to access the web console via <a href="https://master.ouatrahim.com:8443/" rel="nofollow noreferrer">https://master.ouatrahim.com:8443/</a>, I keep getting an HTTP redirect to 127.0.0.1</p>
<pre><code>https://127.0.0.1:8443/oauth/authorize?client_id=openshift-web-console&response_type=code&state=eyJ0aGVuIjoiLyIsIm5vbmNlIjoiMTUyNTk2NjcwODI1MS0xODg4MTcxMDEyMjU3OTQ1MjM0NjIwNzM5NTQ5ODE0ODk5OTYxMTIxMTI2NDI3ODg3Mjc5MjAwMTgwODI4NTg0MTkyODAxOTA2NTY5NjU2In0&redirect_uri=https%3A%2F%2F127.0.0.1%3A8443%2Fconsole%2Foauth
</code></pre>
<p>I hope someone can help me solve this</p>
| ouatrahim | <p>You can bring up the cluster using your IP address, like: <code>oc cluster up --public-hostname=192.168.122.154</code></p>
<p>This way you should be able to access using <a href="https://master.ouatrahim.com:8443/" rel="nofollow noreferrer">https://master.ouatrahim.com:8443/</a></p>
| kumar |
<p>You can simply interact with K8s using its REST API. For example to get pods:</p>
<pre><code>curl http://IPADDR/api/v1/pods
</code></pre>
<p>However I can't find any example of authentication based only on curl or REST. All the examples show the usage of <code>kubectl</code> as proxy or as a way to get credentials.</p>
<p>If I already own the <code>.kubeconfig</code>, and nothing else, is there any way to send the HTTP requests directly (e.g. with a token) without using <code>kubectl</code>?</p>
| collimarco | <p>The <code>kubeconfig</code> file you download when you first install the cluster includes a client certificate and key. For example:</p>
<pre><code>clusters:
- cluster:
certificate-authority-data: ...
server: https://api.cluster1.ocp.virt:6443
name: cluster1
contexts:
- context:
cluster: cluster1
user: admin
name: admin
current-context: admin
preferences: {}
users:
- name: admin
user:
client-certificate-data: ...
client-key-data: ...
</code></pre>
<p>If you extract the <code>client-certificate-data</code> and <code>client-key-data</code> to
files, you can use them to authenticate with curl. To extract the
data:</p>
<pre><code>$ yq -r '.users[0].user."client-certificate-data"' kubeconfig | base64 -d > cert
$ yq -r '.users[0].user."client-key-data"' kubeconfig | base64 -d >
key
</code></pre>
<p>And then using <code>curl</code>:</p>
<pre><code>$ curl -k --cert cert --key key \
'https://api.cluster1.ocp.virt:6443/api/v1/namespaces/default/pods?limit=500'
{
"kind": "PodList",
"apiVersion": "v1",
"metadata": {
"resourceVersion": "22022"
},
"items": []
</code></pre>
<hr />
<p>Alternately, if your <code>.kubeconfig</code> has tokens in it, like this:</p>
<pre><code>[...]
users:
- name: your_username/api-clustername-domain:6443
user:
token: sha256~...
</code></pre>
<p>Then you can use that token as a bearer token:</p>
<pre><code>$ curl -k https://api.mycluster.mydomain:6443/ -H 'Authorization: Bearer sha256~...'
</code></pre>
<p>...but note that those tokens typically expire after some time, while the certificates should work indefinitely (unless they are revoked somehow).</p>
| larsks |
<p>I am building PostgreSQL on Kubernetes, but I cannot persist the Postgres data.
The host is a GCP instance running Ubuntu 20.04. The disks are GCP persistent disks, which are mounted after being attached to the instance.</p>
<pre><code>Filesystem 1K-blocks Used Available Use% Mounted on
/dev/root 9982728 4252592 5713752 43% /
devtmpfs 4038244 0 4038244 0% /dev
tmpfs 4059524 0 4059524 0% /dev/shm
tmpfs 811908 1568 810340 1% /run
tmpfs 5120 0 5120 0% /run/lock
tmpfs 4059524 0 4059524 0% /sys/fs/cgroup
/dev/loop0 50304 50304 0 100% /snap/core18/2671
/dev/loop2 217856 217856 0 100% /snap/google-cloud-cli/98
/dev/loop3 94208 94208 0 100% /snap/lxd/24065
/dev/loop1 60544 60544 0 100% /snap/core20/1782
/dev/loop4 44032 44032 0 100% /snap/snapd/17885
/dev/nvme0n1p15 99801 6004 93797 7% /boot/efi
tmpfs 811904 0 811904 0% /run/user/1001
shm 65536 0 65536 0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/49ebcf7b449f4b13d52aab6f52c28f139c551f83070f6f21207dbf52315dc264/shm
shm 65536 0 65536 0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/e46f0c6b19e5ccff9bb51fa3f7669a9a6a2e7cfccf54681e316a9cd58183dce4/shm
shm 65536 0 65536 0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/495c80e87521bfdda55827df64cdb84cddad149fb502ac7ee12f3607badd4649/shm
shm 65536 0 65536 0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/443e7b254d02c88873c59edc6d5b0a71e80da382ea81105e8b312ad4122d694a/shm
/dev/nvme0n3 10218772 12 9678088 1% /var/lib/postgresql β» disk for postgres
/dev/nvme0n2 3021608 24 2847916 1% /var/lib/pgadmin β» disk for pgadmin
shm 65536 1052 64484 2% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/bd83982e91b6a3bce7853416d72878d5473174e884c15578c47a8d8952f4e718/shm
</code></pre>
<p>Also, the pod volume is allocated using persistent volume and persistent volume claim.</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: postgres-pv-volume # Sets PV's name
labels:
app: postgres
spec:
# storageClassName: local-storage
capacity:
storage: 10Gi # Sets PV Volume
accessModes:
- ReadWriteMany
hostPath:
path: "/var/lib/postgresql"
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: pgadmin-pv-volume # Sets PV's name
labels:
app: pgadmin
spec:
# storageClassName: local-storage
capacity:
storage: 3Gi # Sets PV Volume
accessModes:
- ReadWriteMany
hostPath:
path: "/var/lib/pgadmin"
</code></pre>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: postgresql
labels:
app: postgresql
spec:
replicas: 1
selector:
matchLabels:
app: postgresql
template:
metadata:
labels:
app: postgresql
spec:
containers:
- image: docker.io/yamamuratkr/postgres
name: postgresql
env:
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: postgres
key: database_user
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres
key: database_password
- name: PGDATA
value: "/var/lib/postgresql/data/pgdata"
ports:
- containerPort: 5432
name: postgresql
volumeMounts:
- name: postgredb
mountPath: /var/lib/postgresql
volumes:
- name: postgredb
persistentVolumeClaim:
claimName: postgres-pv-claim
</code></pre>
<p>However, when the pod is deleted, the postgres data also disappears with it, and I can confirm that it is not persistent.</p>
<p>If you know the cause of this problem, please let me know.
Thank you in advance.</p>
<p>None of the following worked</p>
<ul>
<li>Using hostpath for pod volumes</li>
<li>Use default PGDATA</li>
</ul>
| ε±±ζζ¦ | <p>The source of your problem is here:</p>
<pre><code>volumeMounts:
- name: postgredb
mountPath: /var/lib/postgresql
</code></pre>
<p>The <code>postgres</code> image itself mounts a volume on <code>/var/lib/postgresql/data</code>. We can see that if we inspect the image:</p>
<pre><code>$ docker image inspect docker.io/postgres:14 | jq '.[0].ContainerConfig.Volumes'
{
"/var/lib/postgresql/data": {}
}
</code></pre>
<p>Your mount on <code>/var/lib/postgresql</code> is effectively a no-op. An ephemeral volume is created on <code>/var/lib/postgresql/data</code> each time the container starts, and since that's the default <code>PGDATA</code> location, your data is effectively discarded when the container exits.</p>
<p>I've put together an example in my local environment to demonstrate this behavior. I've made a few minor changes from your example that shouldn't operationally impact anything.</p>
<p>I've created the following Secret with the postgres credentials; by naming the keys like this we can use a single <code>envFrom</code> block instead of multiple <code>env</code> entries (see the Deployment for details):</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: postgres-env
type: Opaque
stringData:
POSTGRES_USER: example
POSTGRES_PASSWORD: secret
</code></pre>
<p>And in line with my comment I'm injecting a file into <code>/docker-entrypoint-initdb.d</code> from this ConfigMap:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: postgres-config
data:
initdb.sql: |
CREATE TABLE people (
id SERIAL,
name VARCHAR(40),
favorite_color VARCHAR(10)
);
INSERT INTO people (name, favorite_color) VALUES ('bob', 'red');
INSERT INTO people (name, favorite_color) VALUES ('alice', 'blue');
INSERT INTO people (name, favorite_color) VALUES ('mallory', 'purple');
</code></pre>
<p>I'm using this PersistentVolumeClaim:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgres-pv-claim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
</code></pre>
<p>And finally I'm tying it all together in this Deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: postgres
name: postgresql
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
containers:
- envFrom:
- secretRef:
name: postgres-env
image: docker.io/postgres:14
name: postgresql
ports:
- containerPort: 5432
name: postgresql
volumeMounts:
- mountPath: /docker-entrypoint-initdb.d
name: postgres-config
- mountPath: /var/lib/postgresql
name: postgres-data
volumes:
- name: postgres-data
persistentVolumeClaim:
claimName: postgres-pv-claim
- configMap:
name: postgres-config
name: postgres-config
</code></pre>
<p>The significant changes here are:</p>
<ol>
<li><p>I'm using a single <code>envFrom</code> block to set environment variables from the keys in the <code>postgres-env</code> secret.</p>
</li>
<li><p>I'm using the upstream <code>docker.io/postgres:14</code> image rather than building my own custom image.</p>
</li>
<li><p>I'm injecting the contents of <code>/docker-entrypoint-initdb.d</code> from the postgres-config ConfigMap.</p>
</li>
</ol>
<p>Note that this deployment is using the same <code>mountPath</code> as in your example`.</p>
<hr />
<p>If I bring up this environment, I can see that the database initialization script was executed correctly. The <code>people</code> table exists and has the expected data:</p>
<pre><code>$ kubectl exec -it deploy/postgresql -- psql -U example -c 'select * from people'
id | name | favorite_color
----+---------+----------------
1 | bob | red
2 | alice | blue
3 | mallory | purple
(3 rows)
</code></pre>
<p>Let's make a change to the database and see what happens when we restart the pod. First, we add a new row to the table and view the updated table:</p>
<pre><code>$ kubectl exec -it deploy/postgresql -- psql -U example -c "insert into people (name, favorite_color) values ('carol', 'green')"
INSERT 0 1
$ kubectl exec -it deploy/postgresql -- psql -U example -c 'select * from people'
id | name | favorite_color
----+---------+----------------
1 | bob | red
2 | alice | blue
3 | mallory | purple
4 | carol | green
(4 rows)
</code></pre>
<p>Now we restart the database pod:</p>
<pre><code>$ kubectl rollout restart deployment/postgresql
</code></pre>
<p>And finally check if our changes survived the restart:</p>
<pre><code>$ kubectl exec -it deploy/postgresql -- psql -U example -c 'select * from people'
id | name | favorite_color
----+---------+----------------
1 | bob | red
2 | alice | blue
3 | mallory | purple
(3 rows)
</code></pre>
<p>As expected, they did not! Let's change the <code>mountPath</code> in the Deployment so that it looks like this instead:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: postgres
name: postgresql
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
containers:
- envFrom:
- secretRef:
name: postgres-env
image: docker.io/postgres:14
name: postgresql
ports:
- containerPort: 5432
name: postgresql
volumeMounts:
- mountPath: /docker-entrypoint-initdb.d
name: postgres-config
- mountPath: /var/lib/postgresql/data
name: postgres-data
volumes:
- name: postgres-data
persistentVolumeClaim:
claimName: postgres-pv-claim
- configMap:
name: postgres-config
name: postgres-config
</code></pre>
<p>Using this Deployment, with no other changes, we can re-run the previous test and see that our data persists as desired:</p>
<pre><code>$ kubectl exec -it deploy/postgresql -- psql -U example -c "insert into people (name, favorite_color) values ('carol', 'green')"
INSERT 0 1
$ kubectl rollout restart deployment/postgresql
deployment.apps/postgresql restarted
$ kubectl exec -it deploy/postgresql -- psql -U example -c 'select * from people'
id | name | favorite_color
----+---------+----------------
1 | bob | red
2 | alice | blue
3 | mallory | purple
4 | carol | green
(4 rows)
</code></pre>
<hr />
<p>An alternative solution would be to mount your volume in a completely different location and then set <code>PGDATA</code> appropriately. E.g.,</p>
<pre><code> ...
env:
- name: PGDATA
value: /data
...
volumeMounts:
- name: postgres-data
mountPath: /data
...
</code></pre>
| larsks |
<p>To preface: I know this is a bit of duplicate (this question has been asked many times here in different versions) but I can't really find a clear answer for how this is handled on bare metal.</p>
<p>The issue: I want to be able to access internal services without needing to port forward for each one. Like accessing loki or traefik dashboard. The setup is running standard K3S on a bare metal server. Most of the answers and guides focus on cloud based load balancers, but obviously that option isn't available to me.</p>
<p>There seem to be a number of ways to tackle this problem, but what is the absolute simplest? A second ingress controller that binds to the VPN interface? A load balancer (which one?)</p>
<p>I would really appreciate some guidance!</p>
| stewbert | <p>The simplest way to expose a <code>k3s</code> service on your host is just to create a <code>LoadBalancer</code> service. You don't actually need to install a load balancer of any sort; this will expose your service ports on your host.</p>
<p>For example, start a pod:</p>
<pre><code>k3s kubectl run --image docker.io/alpinelinux/darkhttpd:latest --port 8080 webtest
</code></pre>
<p>Create a service:</p>
<pre><code>k3s kubectl expose pod webtest --target-port 8080 --name webtest --type=LoadBalancer
</code></pre>
<p>That gets us a manifest that looks like:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
run: webtest
name: webtest
namespace: default
spec:
allocateLoadBalancerNodePorts: true
clusterIP: 10.43.234.34
clusterIPs:
- 10.43.234.34
externalTrafficPolicy: Cluster
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- nodePort: 31647
port: 8080
protocol: TCP
targetPort: 8080
selector:
run: webtest
sessionAffinity: None
type: LoadBalancer
status:
loadBalancer:
ingress:
- ip: 192.168.122.136
</code></pre>
<p>And now on my <code>k3s</code> host, I can:</p>
<pre><code>$ curl localhost:8080
<html>
<head>
<title>/</title>
<meta name="viewport" content="width=device-width, initial-scale=1">
</head>
...
</code></pre>
<p>Or from somewhere else on my network:</p>
<pre><code>$ curl 192.168.122.136:8080
<html>
<head>
<title>/</title>
<meta name="viewport" content="width=device-width, initial-scale=1">
</head>
...
</code></pre>
| larsks |
<p>I have the following CronJob to run a backup of my database, and I'd like the backup files to be appended with the date:</p>
<pre class="lang-yaml prettyprint-override"><code>{{- if .Values.postgresqlBackup.enabled }}
apiVersion: batch/v1
kind: CronJob
metadata:
name: postgres-backup
spec:
schedule: {{ .Values.postgresqlBackup.cron | quote }}
jobTemplate:
spec:
template:
spec:
containers:
- name: postgres
image: postgres:latest
imagePullPolicy: IfNotPresent
command:
- pg_dump
- --username=postgres
- --no-password
- --format=custom
- --file=/backups/dragalia-api-$(date +"%Y-%m-%d_%H-%M-%S").bak
- --host={{ include "dragalia-api.fullname" . }}-postgresql
- --verbose
volumeMounts:
- name: data
mountPath: /backups
env:
- name: PGPASSWORD
valueFrom:
secretKeyRef:
name: {{ include "dragalia-api.fullname" . }}-postgresql
key: postgres-password
optional: false
restartPolicy: Never
volumes:
- name: data
persistentVolumeClaim:
claimName: {{ include "dragalia-api.fullname" . }}-db-backup
{{- end }}
</code></pre>
<p>The job executes successfully but I am left with files like:</p>
<pre><code>docker@minikube:/dragalia-api/db-backup$ ls
'dragalia-api-$(date +"%Y-%m-%d_%H-%M-%S").bak'
</code></pre>
<p>The entire filename is quoted and the string is not evaluated. How can I make it so that the string is evaluated by the shell?</p>
<p>Things I've tried:</p>
<ul>
<li>using backticks: <code>--file=/backups/dragalia-api-1`date +"%Y-%m-%d_%H-%M-%S".bak` </code>: still rendered literally</li>
<li>defining a DATE env var and putting ${DATE} in the string: rendered literally</li>
<li>escaping the % signs e.g. <code>\%Y</code>: rendered literally</li>
<li>passing a multi-line string to <code>sh -c</code>: this caused the job to fail on being unable to connect to the db, so I guess <code>--host</code> was not passed properly</li>
</ul>
<p>The only other thing I can think of is passing in a shell script, but I'd rather not create another resource if possible.</p>
<p>Equivalently, since the date information is stored by the filesystem, if there's some other way to pass a unique string into the filename that would work. Anything so that it keeps rolling backups instead of just the one.</p>
| lordnoob | <p>If you want to use shell substitution, then you need to execute your command <strong>with a shell</strong>. For example:</p>
<pre><code>containers:
- name: postgres
image: postgres:latest
imagePullPolicy: IfNotPresent
command:
- /bin/sh
- -c
- >
pg_dump
--username=postgres
--no-password
--format=custom
--file=/backups/dragalia-api-$(date +"%Y-%m-%d_%H-%M-%S").bak
--host={{ include "dragalia-api.fullname" . }}-postgresql
--verbose
</code></pre>
<p>Also, unrelated to your question, you should pin your <code>postgres</code> image to a specific version (<code>postgres:14</code>) or you'll be in for a rude surprise when <code>:latest</code> is unexpectedly a new major version.</p>
| larsks |
<p>I want to run two different commands in two different locations inside the Kubernetes pod. How can I do it? My approach is below:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: wso2is-deployment
namespace: tech-ns
labels:
app: wso2is
spec:
replicas: 1
selector:
matchLabels:
app: wso2is
template:
metadata:
labels:
app: wso2is
spec:
containers:
- image: vialogic/wsois:cluster1.0
name: wso2is
imagePullPolicy: Always
command: ["/home/wso2carbon/wso2is-5.11.0/repository/resources/security/; /opt/java/openjdk/lib/security/", "-c"]
args: ["keytool -exportcert -alias wso2carbon -keystore wso2carbon.jks -rfc -file wso2carbon.pem; keytool -import -alias wso2Carbon -file wso2carbon.pem -keystore cacerts"]
resources:
requests:
memory: "2Gi"
cpu: "2000m"
limits:
memory: "4Gi"
cpu: "4000m"
volumeMounts:
...
</code></pre>
<p>Let me explain what I want to achieve in the above manifest. I have two commands: I want to execute the first one, <code>keytool -exportcert -alias wso2carbon -keystore wso2carbon.jks -rfc -file wso2carbon.pem</code>, in <code>/home/wso2carbon/wso2is-5.11.0/repository/resources/security/</code>, and the second one, <code>keytool -import -alias wso2Carbon -file wso2carbon.pem -keystore cacerts</code>, in <code>/opt/java/openjdk/lib/security/</code>.</p>
<p>When I apply the manifest, it fails to create the container. When I describe the pod, it says:</p>
<pre><code>Message: failed to create shim: OCI runtime create failed: runc create failed: unable to start container process: exec: "/home/wso2carbon/wso2is-5.11.0/repository/resources/security/; /opt/java/openjdk/lib/security/": stat /home/wso2carbon/wso2is-5.11.0/repository/resources/security/; /opt/java/openjdk/lib/security/: no such file or directory: unknown
</code></pre>
| Feezan Khattak | <p>You want to pass your multiple commands as a shell script embedded in a call to <code>sh -c</code>, like this:</p>
<pre><code>command:
- /bin/sh
- -c
- |
cd /home/wso2carbon/wso2is-5.11.0/repository/resources/security/
keytool -exportcert -alias wso2carbon -keystore wso2carbon.jks -rfc -file wso2carbon.pem
cd /opt/java/openjdk/lib/security/
keytool -import -alias wso2Carbon -file wso2carbon.pem -keystore cacerts
</code></pre>
<p>(Note that we're not using <code>args</code>.)</p>
<p>This will work, but it doesn't entirely make sense; the pod will exit as soon as these commands have completed. For a <code>Deployment</code>, you would typically expect to see your containers executing some sort of long-running command.</p>
| larsks |
<p>I am trying to obtain the list of events from a minikube cluster usigh the Python Kubernetes api using the following code:</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes import config, client
config.load_kube_config()
api = client.EventsV1beta1Api()
print(api.list_event_for_all_namespaces())
</code></pre>
<p>I am getting the following error:</p>
<pre><code>C:\Users\lameg\kubesense>python test.py
Traceback (most recent call last):
File "C:\Users\lameg\kubesense\test.py", line 6, in <module>
print(api.list_event_for_all_namespaces())
File "C:\Users\lameg\miniforge3\lib\site-packages\kubernetes\client\api\events_v1beta1_api.py", line 651, in list_event_for_all_namespaces
return self.list_event_for_all_namespaces_with_http_info(**kwargs) # noqa: E501
File "C:\Users\lameg\miniforge3\lib\site-packages\kubernetes\client\api\events_v1beta1_api.py", line 758, in list_event_for_all_namespaces_with_http_info
return self.api_client.call_api(
File "C:\Users\lameg\miniforge3\lib\site-packages\kubernetes\client\api_client.py", line 348, in call_api
return self.__call_api(resource_path, method,
File "C:\Users\lameg\miniforge3\lib\site-packages\kubernetes\client\api_client.py", line 192, in __call_api
return_data = self.deserialize(response_data, response_type)
File "C:\Users\lameg\miniforge3\lib\site-packages\kubernetes\client\api_client.py", line 264, in deserialize
return self.__deserialize(data, response_type)
File "C:\Users\lameg\miniforge3\lib\site-packages\kubernetes\client\api_client.py", line 303, in __deserialize
return self.__deserialize_model(data, klass)
File "C:\Users\lameg\miniforge3\lib\site-packages\kubernetes\client\api_client.py", line 639, in __deserialize_model
kwargs[attr] = self.__deserialize(value, attr_type)
File "C:\Users\lameg\miniforge3\lib\site-packages\kubernetes\client\api_client.py", line 280, in __deserialize
return [self.__deserialize(sub_data, sub_kls)
File "C:\Users\lameg\miniforge3\lib\site-packages\kubernetes\client\api_client.py", line 280, in <listcomp>
return [self.__deserialize(sub_data, sub_kls)
File "C:\Users\lameg\miniforge3\lib\site-packages\kubernetes\client\api_client.py", line 303, in __deserialize
return self.__deserialize_model(data, klass)
File "C:\Users\lameg\miniforge3\lib\site-packages\kubernetes\client\api_client.py", line 641, in __deserialize_model
instance = klass(**kwargs)
File "C:\Users\lameg\miniforge3\lib\site-packages\kubernetes\client\models\v1beta1_event.py", line 112, in __init__
self.event_time = event_time
File "C:\Users\lameg\miniforge3\lib\site-packages\kubernetes\client\models\v1beta1_event.py", line 291, in event_time
raise ValueError("Invalid value for `event_time`, must not be `None`") # noqa: E501
ValueError: Invalid value for `event_time`, must not be `None`
</code></pre>
<p>Any ideas ?</p>
| JoΓ£o Pinto | <p>That looks like either a bug in the Python client, or a bug in the OpenAPI specification used to generate the client: clearly, <code>null</code> is a value for <code>eventTime</code> that is supported by the API.</p>
<p>One workaround is to monkey-patch the <code>kubernetes.client</code> module so that it accepts <code>null</code> values. Something like this:</p>
<pre><code>from kubernetes import config, client
config.load_kube_config()
api = client.EventsV1beta1Api()
# This is descriptor, see https://docs.python.org/3/howto/descriptor.html
class FakeEventTime:
def __get__(self, obj, objtype=None):
return obj._event_time
def __set__(self, obj, value):
obj._event_time = value
# Monkey-patch the `event_time` attribute of ` the V1beta1Event class.
client.V1beta1Event.event_time = FakeEventTime()
# Now this works.
events = api.list_event_for_all_namespaces()
</code></pre>
<p>The above code runs successfully against my OpenShift instance, whereas previously it would fail as you describe in your question.</p>
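<p>An alternative that avoids touching the model classes is to ask the client not to deserialize the response at all and to parse the JSON yourself. A sketch, using the <code>_preload_content=False</code> option that the generated client accepts on its API calls:</p>
<pre><code>import json

from kubernetes import config, client

config.load_kube_config()
api = client.EventsV1beta1Api()

# With _preload_content=False the client returns the raw HTTP response
# instead of model objects, so the broken event_time validation never runs.
response = api.list_event_for_all_namespaces(_preload_content=False)
events = json.loads(response.data)

for item in events["items"]:
    print(item["metadata"].get("namespace"), item["metadata"]["name"])
</code></pre>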
| larsks |
<p>When using Istio with Kubernetes, a number of different manifests require the same environment-specific values. For example, the host address is required by both the Gateway (under <code>spec/servers/hosts</code>) and VirtualService (under <code>spec/hosts)</code>. The typical approach to changing the address for different environments is to apply Kustomize patches. Is it possible to use a single patch to transform/insert the value into each manifest that needs it, or somehow maintain just one copy of the address that gets inserted where needed? The alternative is having to maintain multiple patch files with the same host address, and I would like to avoid duplication.</p>
<pre><code>---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: foo
spec:
selector:
istio: bar
servers:
- port:
number: 443
name: https
protocol: HTTPS
tls:
mode: SIMPLE
credentialName: secret
hosts:
- test.acme.com
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: bazz
spec:
hosts:
- test.acme.com
gateways:
- foo
http:
- match:
- uri:
prefix: /api/v1
route:
- destination:
host: blarg
port:
number: 80
</code></pre>
| Boon | <p>This isn't going to be possible with just Kustomize other than by using multiple patches. Because you're looking to change the value in objects of different types, this can't be done with a single patch. So you could do this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- manifests.yaml
patches:
- target:
kind: Gateway
name: foo
patch: |
- op: replace
path: /spec/servers/0/hosts/0
value: foo.acme.com
- target:
kind: VirtualService
name: bazz
patch: |
- op: replace
path: /spec/hosts/0
value: foo.acme.com
</code></pre>
<p>If you find you need to do this frequently (maybe you have a bunch of similar services), you could move the manifests into a helm chart and then <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/chart.md" rel="nofollow noreferrer">inflate that with kustomize</a>.</p>
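<p>For illustration, a sketch of that approach using Kustomize's <code>helmCharts</code> field; the chart name, repo, and values key below are hypothetical, and <code>kustomize build</code> must be run with <code>--enable-helm</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
  - name: istio-routing          # hypothetical chart templating the Gateway/VirtualService
    repo: https://example.com/charts
    version: 0.1.0
    releaseName: foo
    valuesInline:
      host: foo.acme.com         # the single place the host is defined
</code></pre>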
| larsks |
<p>I have a working mongoDB deployment on minikube and I have managed to create a database , collection as well as a user (same as the user referenced in yaml) to do backups on that database.</p>
<p>In the yaml file for my backup cron job I need to specify a <strong>MONGODB_URI</strong> parameter and quite frankly I am at a loss as to the exact convention for getting this (where exactly do you get the value).</p>
<p>As a check I ran <code>kubectl exec -it <pod_name></code> so that I could <em>verify that I am using the correct URI beforehand</em>. At the prompt that follows I tried the following:</p>
<p>1.</p>
<pre><code>mongosh mongodb://aaa:abc123@mongodb-service.default.svc.cluster.local:27017/plaformdb/?directConnection=true
</code></pre>
<p>Not working I get error :</p>
<pre><code>Current Mongosh Log ID: 62938b50880f139dad4b19c4
Connecting to: mongodb://mongodb-service.default.svc.cluster.local:27017/platformdb/?directConnection=true&appName=mongosh+1.4.2
MongoServerSelectionError: Server selection timed out after 30000 ms
</code></pre>
<ol start="2">
<li></li>
</ol>
<pre><code>mongosh mongodb://aaa:abc123@mongodb-service.svc.cluster.local:27017/platformdb?directConnection=true
</code></pre>
<p>Not working also I get error:</p>
<pre><code>MongoNetworkError: getaddrinfo ENOTFOUND mongodb-service.svc.cluster.local
</code></pre>
<ol start="3">
<li></li>
</ol>
<pre><code>mongosh mongodb://aaa:abc123@mongodb-deployment-757ffdfdr5-thuiy.mongodb-service.default.svc.cluster.local:27017/platformdb
</code></pre>
<p>Not working I get an error :</p>
<pre><code>Current Mongosh Log ID: 62938c93829ft678h88990
Connecting to: mongodb://mongodb-deployment-757ccdd6y8-thhhh.mongodb-service.default.svc.cluster.local:27017/platformdb?directConnection=true&appName=mongosh+1.4.2
MongoNetworkError: getaddrinfo ENOTFOUND mongodb-deployment-757ffdd5f5-tpzll.mongodb-service.default.svc.cluster.local
</code></pre>
<p>This however is the recommended way according to the :<a href="https://www.mongodb.com/docs/kubernetes-operator/master/tutorial/connect-from-inside-k8s/" rel="nofollow noreferrer">docs</a></p>
<p><strong>Expected:</strong></p>
<p>I should be able to log in to the database once I run that command.</p>
<p>This is how my deployment is defined :</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mongodb-deployment
labels:
app: mongodb
spec:
replicas: 1
selector:
matchLabels:
app: mongodb
template:
metadata:
labels:
app: mongodb
spec:
containers:
- name: mongodb
image: mongo
imagePullPolicy: "IfNotPresent"
ports:
- containerPort: 27017
env:
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
secretKeyRef:
name: mongodb-secret-amended
key: mongo-root-username
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mongodb-secret-amended
key: mongo-root-password
volumeMounts:
- mountPath: /data/db
name: mongodb-vol
volumes:
- name: mongodb-vol
persistentVolumeClaim:
claimName: mongodb-claim
---
apiVersion: v1
kind: Service
metadata:
name: mongodb-service
spec:
selector:
app: mongodb
ports:
- protocol: TCP
port: 27017
targetPort: 27017
</code></pre>
<p>And I need to specify MONGODB_URI in this cron job :</p>
<pre><code>---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: mongodump-backup
spec:
schedule: "0 */6 * * *" #Cron job every 6 hours
startingDeadlineSeconds: 60
concurrencyPolicy: Forbid
successfulJobsHistoryLimit: 3
failedJobsHistoryLimit: 2
jobTemplate:
spec:
template:
spec:
containers:
- name: mongodump-backup
image: golide/backupsdemo
imagePullPolicy: "IfNotPresent"
env:
- name: DB_NAME
value: "microfunctions"
- name: MONGODB_URI
value: mongodb://aaa:abc123@host-mongodb:27017/dbname
volumeMounts:
- mountPath: "/mongodump"
name: mongodump-volume
command: ['sh', '-c',"./dump.sh"]
restartPolicy: OnFailure
volumes:
- name: mongodump-volume
persistentVolumeClaim:
claimName: mongodb-backup
</code></pre>
<p><strong>UPDATE</strong>
I have tried the suggested solutions on my localhost minikube but I am still getting errors :</p>
<pre><code>mongo mongodb://aaa:abc123@mongodb-service:27017/platformdb?
authSource=admi
n
MongoDB shell version v5.0.8
connecting to: mongodb://mongodb-service:27017/platformdb?authSource=admin&compressors=disabled&gssapiServiceName=mongodb
Error: couldn't connect to server mongodb-service:27017, connection attempt failed: SocketException: Error connecting to mongodb-service:27017 (10.102.216.34:27017) :: caused by :: Connection timed out :
connect@src/mongo/shell/mongo.js:372:17
@(connect):2:6
exception: connect failed
exiting with code 1
</code></pre>
<p>This is giving same error even when I remove the port and use <em><strong>mongodb://aaa:abc123@mongodb-service/platformdb?authSource=admin</strong></em> . I have also tried putting quotes "" around the URL but getting same error.</p>
<p>As a check I tried replicating the exact same scenario on another mongodb deployment with same structure (it also has a headless service). This deploy is on a remote k8s cluster however.
This is what I found out :</p>
<ol>
<li><p>I cannot connect using <em><strong>a user other than the root user</strong></em>. I created a custom user to do the backups:</p>
<pre><code> db.createUser( {user: "aaa", pwd: "abc123", roles: ["userAdminAnyDatabase", "dbAdminAnyDatabase", "readWriteAnyDatabase","backup"], mechanisms:["SCRAM-SHA-1"]})
</code></pre>
</li>
</ol>
<p><em><strong>NB: I have the same user created also on minikube context</strong></em>.
For this custom user I am getting an Authentication failed error every time I try to connect:</p>
<pre><code>mongo mongodb://aaa:abc123@mongodb-headless-service:27017/TestDb?
authSource=admin
MongoDB shell version v4.4.7
connecting to: mongodb://mongodb-headless-service:27017/TestDb?authSource=admin&compressors=disabled&gssapiServiceName=mongodb
Error: Authentication failed. :
connect@src/mongo/shell/mongo.js:374:17
@(connect):2:6
exception: connect failed
exiting with code 1
</code></pre>
<ol start="2">
<li>I can connect using the root user but the connection attempt is intermittent. I have to exit out of the pod sometimes and re-run the command in order to connect.
<em><strong>This seems to be a bug unless I'm missing something else obvious.</strong></em></li>
</ol>
<p>The screen below shows a successful connection; then on the subsequent attempt the exact same connection fails:
<a href="https://i.stack.imgur.com/NiAFb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NiAFb.png" alt="BUG" /></a>
On the 1st attempt I managed to login and run a show collections command but once I logout and try to connect I get Authentication Failed. The feature seems unstable at best.</p>
| Golide | <p>Given the structure of your <code>Service</code>, you'll need to use the hostname <code>mongodb-service</code> (or <code>mongodb-service.<namesapce>.svc.cluster.local</code>, if you like fully qualified names). The connection URI -- as far as I can tell from <a href="https://www.mongodb.com/docs/manual/reference/connection-string/" rel="nofollow noreferrer">the documentation</a> -- would be:</p>
<pre><code>mongodb://<username>:<password>@mongodb-service/dbname?authSource=admin
</code></pre>
<p>You can also connect successfully like this:</p>
<pre><code>mongodb://<username>:<password>@mongodb-service/
</code></pre>
<p>Because:</p>
<blockquote>
<p>If [the username and password are] specified, the client will attempt to authenticate the user to the authSource. If authSource is unspecified, the client will attempt to authenticate the user to the defaultauthdb. And if the defaultauthdb is unspecified, to the admin database.</p>
</blockquote>
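<p>Plugged into the CronJob from your question, the environment entry would then look something like this (in practice you would probably pull the credentials from a <code>Secret</code> rather than hard-coding them):</p>
<pre><code>env:
  - name: DB_NAME
    value: "platformdb"
  - name: MONGODB_URI
    value: "mongodb://aaa:abc123@mongodb-service:27017/platformdb?authSource=admin"
</code></pre>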
<hr />
<p>I tested this using a slightly modified version of your <code>Deployment</code> (mostly, I dropped the <code>volumeMounts</code>, because I don't need persistent storage for testing, and I used <code>envFrom</code> because I find that easier in general).</p>
<p>I deployed this using kustomize (<code>kubectl apply -k .</code>) with the following <code>kustomization.yaml</code>:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: mongo
commonLabels:
app: mongodb
resources:
- deployment.yaml
- service.yaml
secretGenerator:
- name: mongo-credentials
envs:
- mongo.env
</code></pre>
<p>This <code>deployment.yaml</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mongodb-deployment
spec:
replicas: 1
template:
spec:
containers:
- name: mongodb
image: docker.io/mongo:latest
imagePullPolicy: IfNotPresent
ports:
- containerPort: 27017
envFrom:
- secretRef:
name: mongo-credentials
</code></pre>
<p>This <code>service.yaml</code>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mongodb-service
spec:
ports:
- protocol: TCP
port: 27017
targetPort: 27017
</code></pre>
<p>And this <code>mongo.env</code>:</p>
<pre><code>MONGO_INITDB_ROOT_USERNAME=admin
MONGO_INITDB_ROOT_PASSWORD=secret
</code></pre>
<p>Once everything was up and running, I started a client pod:</p>
<pre><code>kubectl run --image docker.io/mongo:latest mongoc -- sleep inf
</code></pre>
<p>And I was able to start a shell in that pod and connect to the database:</p>
<pre><code>$ kubectl exec -it mongoc -- bash
Current Mongosh Log ID: 6293a4bc534ff40ec737c383
Connecting to: mongodb://<credentials>@mongodb-service.mongo.svc.cluster.local/?directConnection=true&appName=mongosh+1.4.2
Using MongoDB: 5.0.8
Using Mongosh: 1.4.2
[...]
test>
</code></pre>
| larsks |
<p>I have <code>sizeLimit</code> property under <code>emptyDir</code> set to 2Gi in my template base file. I want to remove the <code>sizelimit</code> and just have <code>emptyDir: {}</code>. I've been unable to achieve this using Kustomization overlays. I will detail my folder structure and kustomization yamls below.</p>
<p><strong>Folder Structure:</strong></p>
<pre><code>application
βββ majorbase
β βββ kustomization.yaml
β βββ resources
β βββ web-template.yaml
βββ minorbase
β βββ kustomization.yaml
β βββ resources
βββ myoverlays
βββ kustomization.yaml
βββ resources
βββ my-new-web.yaml
</code></pre>
<p>The folder <code>myoverlays</code> contains the following contents in it's kustomization.yaml file</p>
<pre><code>bases:
- ../minorbase
patchesStrategicMerge:
- resources/my-new-web.yaml
</code></pre>
<p>The folder <code>minorbase</code> contains the following contents in it's kustomization.yaml file</p>
<pre><code>bases:
- ../majorbase
</code></pre>
<p>The folder <code>majorbase</code> contains the following contents in it's kustomization.yaml file</p>
<pre><code>resources:
- resources/web-template.yaml
</code></pre>
<p>The section I want to edit looks like this in the majorbase/template.</p>
<pre><code>volumes:
- name: test-vol
emptyDir:
sizeLimit: "2Gi"
</code></pre>
<p>The above configuration needs to be updated using overlays as below.</p>
<pre><code>volumes:
- name: test-vol
emptyDir: {}
</code></pre>
<p>This is where my problem lies. Kustomization just picks the 2Gi value mentioned in the base whenever I remove the <code>sizelimit</code> in my overlays. When I mention different value to <code>sizeLimit</code> such as "1Gi" in my overlays file, kustomization is picking up the change. What is the cause of this behaviour? Is it possible to achieve what I'm trying to do here?</p>
| vishal | <p>NB: This answer assumes a recent version of Kustomize (I'm running 4.5.2 locally). Your examples are using deprecated syntax (the <code>bases</code> section was deprecated <a href="https://github.com/kubernetes-sigs/kustomize/blob/661743c7e5bd8c3d9d6866b6bc0a6f0e0b0512eb/site/content/en/blog/releases/v2.1.0.md#resources-expanded-bases-deprecated" rel="nofollow noreferrer">in version 2.1.0</a>, for example).</p>
<hr />
<p>Your problem is that you're using a <code>strategicMerge</code> patch, and you're merging an empty map (<code>{}</code>) with <code>{"sizeLimit": "2Gi"}</code>. If you merge an empty map with anything, it's a no-op: you end up with the "anything".</p>
<p>To explicitly delete an element, you have a few choices.</p>
<p>You can use the <code>$patch: replace</code> directive (you can find an example of that <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/inlinePatch.md" rel="nofollow noreferrer">here</a>) to have Kustomize <em>replace</em> the <code>emptyDir</code> element, rather than merging the contents. That would look like:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: example
spec:
volumes:
- name: test-vol
emptyDir:
$patch: replace
</code></pre>
<p>The corresponding <code>kustomization.yaml</code> might look something like:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patches:
- path: resources/my-new-web.yaml
</code></pre>
<hr />
<p>Alternately, you can use a JSONPatch patch, which is good for explicitly deleting fields:</p>
<pre><code>- path: /spec/volumes/0/emptyDir/sizeLimit
op: remove
</code></pre>
<p>Where <code>kustomization.yaml</code> would look like:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patches:
- target:
kind: Pod
name: example
path: resources/my-new-web.yaml
</code></pre>
<hr />
<p>You can find a complete runnable demonstration of this <a href="https://github.com/larsks/so-example-72086055" rel="nofollow noreferrer">here</a>.</p>
| larsks |
<p>I have a Kubernetes cluster with an install of <a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack" rel="nofollow noreferrer">kube-prometheus-stack</a> (Prometheus 2.27.1, kube-state-metrics v2.0.0)</p>
<p>I would like to have a query to return how much time each pod was running, over last 24 hours</p>
<ul>
<li>If a pod is still running, the time from its creation to now</li>
<li>If a post has terminated, the time from creation to completion</li>
</ul>
<p>Importantly, I need exactly the time the pod existed, as opposed to CPU usage.</p>
<p>I can do something like this with:</p>
<pre><code>kube_pod_completion_time - kube_pod_created
</code></pre>
<p>but it returns nothing for pods that are still running. And, since Prometheus does not return metrics that are more than 5 min old, it will not report anything for pods that were terminated and deleted.</p>
<p>How would I query Prometheus without these issues?</p>
| Vladimir Prus | <p>One working solution is this:</p>
<pre><code>sum by(namespace, pod) (
(last_over_time(kube_pod_completion_time[1d])
- last_over_time(kube_pod_created[1d]))
or
(time() - kube_pod_created)
)
</code></pre>
<p>The first part inside <code>sum</code> handles the case of pods that have terminated. We pick the last values of <code>kube_pod_completion_time</code> and <code>kube_pod_created</code> and compute the difference.</p>
<p>The second part handles the pods that are still running. In that case, there is a fresh value of the <code>kube_pod_created</code> metric, and we can subtract it from the current time.</p>
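<p>If you need this value regularly (for dashboards, say), it may be worth capturing it as a recording rule. A sketch of a Prometheus rule file, with a rule name of my own choosing:</p>
<pre><code>groups:
  - name: pod-runtime
    rules:
      - record: namespace_pod:runtime_seconds:last1d
        expr: |
          sum by(namespace, pod) (
            (last_over_time(kube_pod_completion_time[1d])
             - last_over_time(kube_pod_created[1d]))
            or
            (time() - kube_pod_created)
          )
</code></pre>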
| Vladimir Prus |
<p>I have a problem. In my kubernetes cluster I am running a GitLab image for my own project. This image requires a .crt and .key as certificates for HTTPS usage. I have setup an Ingress resource with a letsencrypt-issuer, which successfully obtains the certificates. But to use those they need to be named as <code>my.dns.com.crt</code> and <code>my.dns.com.key</code>. So I manually ran the following 3 commands:</p>
<pre><code>kubectl get secret project-gitlab-tls -n project-utility \
-o jsonpath='{.data.tls\.crt}' | base64 --decode > /mnt/data/project/gitlab/certs/tls.crt
kubectl get secret project-gitlab-tls -n project-utility \
-o jsonpath='{.data.tls\.key}' | base64 --decode > /mnt/data/project/gitlab/certs/tls.key
kubectl create secret generic gitlab-registry-certs \
--from-file=gitlab.project.com.crt=/mnt/data/project/gitlab/certs/tls.crt \
--from-file=gitlab.project.com.key=/mnt/data/project/gitlab/certs/tls.key \
--namespace project-utility
</code></pre>
<p>The first 2 commands print the decoded crt/key content in a file, so that the third command can use those files to create a custom mapping to the specific DNS names. Then in the GitLab deployment I mount this <code>gitlab-registry-certs</code> like this:</p>
<pre><code>volumeMounts:
- mountPath: /etc/gitlab/ssl
name: registry-certs
volumes:
- name: registry-certs
secret:
secretName: gitlab-registry-certs
</code></pre>
<p>This all works, but I want this process to be automated, because I am using ArgoCD as the deployment tool. I thought about a Job, but a Job runs an Ubuntu image which is not allowed to make changes to the cluster, so I would need to call a bash script on the external host. How can I achieve this? I can only find things about Jobs which run an image and not how to execute host commands. If there is an easier method to use the certificates that I am not seeing, please let me know, because I feel weird about this way of using the certificates, but GitLab requires the naming convention of <code><DNS>.crt</code> and <code><DNS>.key</code>, so that's why I am doing the remapping.</p>
<p>So the question is how to automate this remapping process so that on cluster generation a job will be executed after obtaining the certificates but before the deployment gets created?</p>
| A. Vreeswijk | <p>Why are you bothering with this complicated process of creating a new secret? Just rename them in your <code>volumeMounts</code> section by using a <code>subPath</code>:</p>
<pre><code>containers:
- ...
volumeMounts:
- name: registry-certs
mountPath: /etc/gitlab/ssl/my.dns.com.crt
subPath: tls.crt
- name: registry-certs
mountPath: /etc/gitlab/ssl/my.dns.com.key
subPath: tls.key
volumes:
- name: registry-certs
secret:
secretName: project-gitlab-tls
</code></pre>
<p>More info <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="nofollow noreferrer">in the documentation</a>.</p>
| larsks |
<p>I would like to deploy an ssh bastion jumper as a deployment in a Kubernetes cluster. This should receive its sshd_config as well as the authorized_keys via a ConfigMap or Secret. These can of course change over time, so that a reload of the sshd service becomes necessary.</p>
<p>How can I automate this process? Existing ssh connections should not be killed when updating the config or authorized_keys file.</p>
<p>My dockerfile is:</p>
<pre><code>FROM docker.io/alpine:latest
RUN apk add --no-cache openssh-server
EXPOSE 22/tcp
CMD ["/usr/sbin/sshd", "-D", "-e"]
</code></pre>
<p>My deployment looks like:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: sshd-server
namespace: sshd
spec:
replicas: 2
template:
metadata:
labels:
app: sshd-server
spec:
containers:
- name: my-sshd-server
image: my-sshd-server-image:latest
imagePullPolicy: Always
ports:
- containerPort: 22
volumeMounts:
- mountPath: /etc/ssh/sshd_config
name: sshd_config
- mountPath: /user/.ssh/authorized_keys
name: authorized_keys
...
</code></pre>
| user5580578 | <p>If you mount a <code>ConfigMap</code> as a directory, the directory contents will update when you update the <code>ConfigMap</code> (possibly after a short delay).</p>
<p>That means if you were just concerned about your <code>authorized_keys</code> file, you could do something like this:</p>
<p>Create the following <code>ConfigMap</code>:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: ssh-config
data:
authorized_keys: |
ssh-rsa ...
ssh-rsa ...
sshd_config: |
StrictModes no
AuthorizedKeysFile /config/authorized_keys
</code></pre>
<p>And deploy your ssh pod using something like:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: sshtest
spec:
replicas: 1
template:
spec:
containers:
- image: quay.io/larsks/alpine-sshd:5
imagePullPolicy: Always
name: sshtest
ports:
- containerPort: 22
name: ssh
volumeMounts:
- name: ssh-config
mountPath: /config
- name: ssh-config
mountPath: /etc/ssh/sshd_config
subPath: sshd_config
- name: ssh-data
mountPath: /etc/ssh
volumes:
- name: ssh-config
configMap:
name: ssh-config
defaultMode: 0440
- name: ssh-data
emptyDir: {}
</code></pre>
<p>Where <code>quay.io/larsks/alpine-sshd:5</code> is simply alpine + sshd + an
<code>ENTRYPOINT</code> that runs <code>ssh-keygen -A</code>. You should build your own
rather than use some random person's image :).</p>
<p>This will work on straight Kubernetes but will <em>not</em> run on OpenShift
without additional work.</p>
<p>With this configuration (and an appropriate <code>Service</code>) you can ssh
into the container <em>as root</em> using the private key that corresponds to
one of the public keys contained in the <code>authorized_keys</code> part of the
<code>ssh-config</code> <code>ConfigMap</code>.</p>
<p>When you update the <code>ConfigMap</code>, the container will eventually see the
updated values, no restarts required.</p>
<hr />
<p>If you really want to respond to changes in <code>sshd_config</code>, that
becomes a little more complicated. <code>sshd</code> itself doesn't have any
built-in facility for responding to changes in the configuration file,
so you'll need to add a sidecar container that watches for config file
updates and then sends the appropriate signal (<code>SIGHUP</code>) to <code>sshd</code>.
Something like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: sshtest
spec:
replicas: 1
template:
spec:
shareProcessNamespace: true
containers:
- image: docker.io/alpine:latest
name: reloader
volumeMounts:
- name: ssh-config
mountPath: /config
- name: ssh-data
mountPath: /etc/ssh
command:
- /bin/sh
- -c
- |
while true; do
if [ -f /etc/ssh/sshd_config ] && [ -f /etc/ssh/sshd.pid ]; then
if ! diff -q /config/sshd_config /etc/ssh/sshd_config; then
cp /config/sshd_config /etc/ssh/sshd_config
kill -HUP $(cat /etc/ssh/sshd.pid)
fi
fi
sleep 10
done
- image: quay.io/larsks/alpine-sshd:6
imagePullPolicy: Always
name: sshd
ports:
- containerPort: 22
name: ssh
volumeMounts:
- name: ssh-config
mountPath: /config
- name: ssh-data
mountPath: /etc/ssh
volumes:
- name: ssh-config
configMap:
name: ssh-config
defaultMode: 0600
- name: ssh-data
emptyDir: {}
</code></pre>
<p>This requires a slightly modified container image that includes the
following <code>ENTRYPOINT</code> script:</p>
<pre><code>#!/bin/sh
if [ -f /config/sshd_config ]; then
cp /config/sshd_config /etc/ssh/sshd_config
fi
ssh-keygen -A
exec "$@"
</code></pre>
<p>With this configuration, the <code>reloader</code> container watches for changes in the configuration file supplied by the <code>ConfigMap</code>. When it detects a change, it copies the updated file to the correct location and then sends a <code>SIGHUP</code> to <code>sshd</code>, which reloads its configuration.</p>
<p>This does not interrupt existing ssh connections.</p>
| larsks |
<p>So I have a values.yaml file with a string variable representing a database connection string, with no quotes, looking like this (don't worry, not the real password):</p>
<pre><code>ActionLogsConnectionString: Database=ActionLogs;Server=SQL_DEV;User Id=sa;Password=Y19|yx\dySh53&h
</code></pre>
<p>My goal is to print it inside a ConfigMap resource so that it can then be injected in my pod as a .json configuration file for a dotnet app. I also want to append the application name in the connection string:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "asp.fullname" . }}
labels:
{{- include "asp.labels" . | nindent 4 }}
data:
appsettings.k8s.json: |-
{
"ConnectionStrings": {
"ActionLogsConnectionString": "{{ .Values.ActionLogsConnectionString }};Application Name=Asp;"
}
}
</code></pre>
<p>This produce this result:</p>
<pre><code>"ActionLogsConnectionString": "Database=ActionLogs;Server=SQL_DEV;User Id=sa;Password=Y19|yx\dySh53&h;Application Name=Asp;"
</code></pre>
<p>Looks great! And at this point I don't have a quote problem.</p>
<p>Now the problem: the backslash isn't escaped for the JSON file format. Good thing, Helm provides a toJson function. Unfortunately, it also transforms the "&" into a unicode escape. I then found toRawJson and it gives the expected results.</p>
<p>My problem is that, when using either toJson or toRawJson, it adds extra quotes and break my result:</p>
<p>so this yalm file:</p>
<pre><code>"ActionLogsConnectionString": "{{ .Values.ActionLogsConnectionString | toRawJson }};Application Name=Asp;"
</code></pre>
<p>results in this json file (note the extra quotes):</p>
<pre><code>"ActionLogsConnectionString": ""Database=ActionLogs;Server=SQL_DEV;User Id=sa;Password=Y19|yx\\dySh53&h";Application Name=Asp;"
</code></pre>
<p>I see there's a function called | quote, but this only adds some. Is there a way to use toRawJson without adding any?</p>
| Dunge | <p>Using <code>toJson</code> or <code>toRawJson</code> is the wrong solution here, because the JSON representation of a string by definition includes the double quotes. <code>"foo"</code> is a JSON string, <code>foo</code> isn't valid JSON.</p>
<p>But you're only working with a scalar value, so there's not much point in marshaling it to JSON in the first place. I think the following gets you what you want:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "asp.fullname" . }}
labels:
{{- include "asp.labels" . | nindent 4 }}
data:
appsettings.k8s.json: |-
{
"ConnectionStrings": {
"ActionLogsConnectionString": {{ printf "%s;Application Name=asp" .Values.ActionLogsConnectionString | quote }}
}
}
</code></pre>
<p>Here, we're using the <code>printf</code> to produce the desired string (and then passing it to the <code>quote</code> function for proper quoting).</p>
<p>This produces:</p>
<pre><code>---
# Source: example/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: my-example-fullname
labels:
# This is a test
data:
appsettings.k8s.json: |-
{
"ConnectionStrings": {
"ActionLogsConnectionString": "Database=ActionLogs;Server=SQL_DEV;User Id=sa;Password=Y19|yx\\dySh53&h;Application Name=asp"
}
}
</code></pre>
| larsks |
<p>I have a Kubernetes v1.9.3 (no OpenShift) cluster I'd like to manage with ManageIQ (gaprindashvili-3 running as a Docker container).</p>
<p>I prepared the k8s cluster to interact with ManageIQ following <a href="http://manageiq.org/docs/guides/providers/kubernetes" rel="nofollow noreferrer">these instructions</a>. Notice that I performed the steps listed in the last section only (<strong>Prepare cluster for use with ManageIQ</strong>), as the previous ones were for setting up a k8s cluster and I already had a running one.</p>
<p>I successfully added the k8s container provider to ManageIQ, but the dashboard reports nothing: 0 nodes, 0 pods, 0 services, etc..., while I do have nodes, services and running pods on the cluster. I looked at the content of <code>/var/log/evm.log</code> of ManageIQ and found this error:</p>
<pre><code>[----] E, [2018-06-21T10:06:40.397410 #13333:6bc9e80] ERROR -- : [KubeException]: events is forbidden: User "system:serviceaccount:management-infra:management-admin" cannot list events at the cluster scope: clusterrole.rbac.authorization.k8s.io "cluster-reader" not found Method:[block in method_missing]
</code></pre>
<p>So the ClusterRole <code>cluster-reader</code> was not defined in the cluster. I double checked with <code>kubectl get clusterrole cluster-reader</code> and it confirmed that <code>cluster-reader</code> was missing. </p>
<p>As a solution, I tried to create <code>cluster-reader</code> manually. I could not find any reference of it in the k8s doc, while it is mentioned in the OpenShift docs. So I looked at how <code>cluster-reader</code> was defined in OpenShift v3.9. Its definition changes across different OpenShift versions, I picked 3.9 as it is based on k8s v1.9 which is the one I'm using. So here's what I found in the OpenShift 3.9 doc:</p>
<pre><code>Name: cluster-reader
Labels: <none>
Annotations: authorization.openshift.io/system-only=true
rbac.authorization.kubernetes.io/autoupdate=true
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
[*] [] [get]
apiservices.apiregistration.k8s.io [] [] [get list watch]
apiservices.apiregistration.k8s.io/status [] [] [get list watch]
appliedclusterresourcequotas [] [] [get list watch]
</code></pre>
<p>I wrote the following yaml definition to create an equivalent ClusterRole in my cluster:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cluster-reader
rules:
- apiGroups: ["apiregistration"]
resources: ["apiservices.apiregistration.k8s.io", "apiservices.apiregistration.k8s.io/status"]
verbs: ["get", "list", "watch"]
- nonResourceURLs: ["*"]
verbs: ["get"]
</code></pre>
<p>I didn't include <code>appliedclusterresourcequotas</code> among the monitored resources because it's my understanding that is an OpenShift-only resource (but I may be mistaken).</p>
<p>I deleted the old k8s container provider on ManageIQ and created a new one after having created <code>cluster-reader</code>, but nothing changed, the dashboard still displays nothing (0 nodes, 0 pods, etc...). I looked at the content of <code>/var/log/evm.log</code> in ManageIQ and this time these errors were reported:</p>
<pre><code>[----] E, [2018-06-22T11:15:39.041903 #2942:7e5e1e0] ERROR -- : MIQ(ManageIQ::Providers::Kubernetes::ContainerManager::EventCatcher::Runner#start_event_monitor) EMS [kubernetes-01] as [] Event Monitor Thread aborted because [events is forbidden: User "system:serviceaccount:management-infra:management-admin" cannot list events at the cluster scope]
[----] E, [2018-06-22T11:15:39.042455 #2942:7e5e1e0] ERROR -- : [KubeException]: events is forbidden: User "system:serviceaccount:management-infra:management-admin" cannot list events at the cluster scope Method:[block in method_missing]
</code></pre>
<p>So what am I doing wrong? How can I fix this problem?
If it can be of any use, <a href="https://github.com/matte21/cloud-infrastructure-project-work/blob/master/k8s-cluster-miq-setup/k8-miq-objects.yaml" rel="nofollow noreferrer">here</a> you can find the whole .yaml file I'm using to set up the k8s cluster to interact with ManageIQ (all the required namespaces, service accounts, cluster role bindings are present as well).</p>
| Matteo | <p>For the <code>ClusterRole</code> to take effect it must be bound to the group <code>management-infra</code> or user <code>management-admin</code>.</p>
<p>Example of creating group binding:</p>
<pre><code>kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: read-cluster-state
subjects:
- kind: Group
name: management-infra
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: cluster-reader
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>After applying this file changes will take place immediately. No need to restart cluster.</p>
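<p>The same binding can also be created imperatively, which is handy for a quick test:</p>
<pre><code>kubectl create clusterrolebinding read-cluster-state \
  --clusterrole=cluster-reader \
  --group=management-infra
</code></pre>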
<p>See more information <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding" rel="nofollow noreferrer">here</a>.</p>
| lexsys |
<p>I am trying to patch a secret using kubectl</p>
<pre><code>kubectl patch secret operator-secrets --namespace kube-system --context=cluster1 --patch "'{\"data\": {\"FOOBAR\": \"$FOOBAR\"}}'"
</code></pre>
<p>But I receive the error</p>
<blockquote>
<p>Error from server (BadRequest): json: cannot unmarshal string into Go value of type map[string]interface {}</p>
</blockquote>
<p>If I run the command using echo, it seems to be a valid JSON</p>
<pre><code>$ echo "'{\"data\": {\"FOOBAR\": \"$FOOBAR\"}}'"
'{"data": {"FOOBAR": "value that I want"}}'
</code></pre>
<p>What can be?</p>
| Rodrigo | <blockquote>
<p>If I run the command using echo, it seems to be a valid JSON</p>
</blockquote>
<p>In fact, it does not. Look carefully at the first character of the output:</p>
<pre><code>'{"data": {"FOOBAR": "value that I want"}}'
</code></pre>
<p>Your "JSON" string starts with a single quote, which is an invalid character. To get valid JSON, you would need to rewrite your command to look like this:</p>
<pre><code>echo "{\"data\": {\"FOOBAR\": \"$FOOBAR\"}}"
</code></pre>
<p>And we can confirm that's valid JSON using something like the <code>jq</code>
command:</p>
<pre><code>$ echo "{\"data\": {\"FOOBAR\": \"$FOOBAR\"}}" | jq .
{
"data": {
"FOOBAR": "value that i want"
}
}
</code></pre>
<p>Making your patch command look like:</p>
<pre><code>kubectl patch secret operator-secrets \
--namespace kube-system \
--context=cluster1 \
--patch "{\"data\": {\"FOOBAR\": \"$FOOBAR\"}}"
</code></pre>
<p>But while that patch is now valid JSON, it's still going to fail with
a new error:</p>
<pre><code>The request is invalid: patch: Invalid value: "map[data:map[FOOBAR:value that i want]]": error decoding from json: illegal base64 data at input byte 5
</code></pre>
<p>The value of items in the <code>data</code> map must be base64 encoded values.
You can either base64 encode the value yourself:</p>
<pre><code>kubectl patch secret operator-secrets \
--namespace kube-system \
--context=cluster1 \
--patch "{\"data\": {\"FOOBAR\": \"$(base64 <<<"$FOOBAR")\"}}"
</code></pre>
<p>Or use <code>stringData</code> instead:</p>
<pre><code>kubectl patch secret operator-secrets \
--namespace kube-system \
--context=cluster1 \
--patch "{\"stringData\": {\"FOOBAR\": \"$FOOBAR\"}}"
</code></pre>
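<p>Afterwards you can verify that the stored value decodes back to what you expect:</p>
<pre><code>kubectl get secret operator-secrets \
  --namespace kube-system \
  --context=cluster1 \
  -o jsonpath='{.data.FOOBAR}' | base64 --decode
</code></pre>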
| larsks |
<p>All namespaces in my cluster are supposed to trust the same root CA. I have a mono repo with all my Kustomize files, and I'm trying to avoid having to add the root CA certificate everywhere.</p>
<p>My idea was to go for something like that in my kustomization files:</p>
<pre><code># [project_name]/[namespace_name]/bases/project/kustomization.yml
configMapGenerator:
- name: trusted-root-ca
files:
- ../../../../root-ca/root-ca.pem
</code></pre>
<p>So at least if I want to update the root CA, I do it in one place. This results in:</p>
<pre><code>file sources: [../../../../root-ca/root-ca.pem]: security; file 'root-ca/root-ca.pem' is not in or below [project_name]/[namespace_name]/bases/project/
</code></pre>
<p>So I guess this is not the way to go. (Reading from <a href="https://stackoverflow.com/questions/65150509/common-config-across-multiple-environments-and-applications-with-kustomize">Common config across multiple environments and applications with Kustomize</a> I can see why it's behaving like that, and disabling that behavior seems to be a bad idea.) I'm looking for a better way to do this.</p>
| Yohan Courbe | <p>This seems like a good place to use a <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/components.md" rel="nofollow noreferrer">component</a>. For example, if I have my files organized like this:</p>
<pre><code>.
βββ components
β βββ trusted-root-ca
β βββ kustomization.yaml
β βββ root-ca.pem
βββ projects
βββ kustomization.yaml
βββ project1
β βββ kustomization.yaml
β βββ namespace.yaml
βββ project2
βββ kustomization.yaml
βββ namespace.yaml
</code></pre>
<p>And in <code>components/trusted-root-ca/kustomization.yaml</code> I have:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
configMapGenerator:
- name: trusted-root-ca
files:
- root-ca.pem
generatorOptions:
disableNameSuffixHash: true
</code></pre>
<p>Then in <code>projects/project1/kustomization.yaml</code> I can write:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: project1
components:
- ../../components/trusted-root-ca
resources:
- namespace.yaml
</code></pre>
<p>And similarly in <code>projects/project2/kustomization.yaml</code>:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: project2
components:
- ../../components/trusted-root-ca
resources:
- namespace.yaml
</code></pre>
<p>And in <code>projects/kustomization.yaml</code> I have:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- project1
- project2
</code></pre>
<p>Then if, from the top directory, I run <code>kustomize build projects</code>, the output will look like:</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: project1
spec: {}
---
apiVersion: v1
kind: Namespace
metadata:
name: project2
spec: {}
---
apiVersion: v1
data:
root-ca.pem: |
...cert data here...
kind: ConfigMap
metadata:
name: trusted-root-ca
namespace: project1
---
apiVersion: v1
data:
root-ca.pem: |
...cert data here...
kind: ConfigMap
metadata:
name: trusted-root-ca
namespace: project2
</code></pre>
| larsks |
<p>I'm new to k8s and I'm trying to build a distributed system. The idea is that a stateful pod will be spawned for each user.</p>
<p>Main services are two Python applications <code>MothershipService</code> and <code>Ship</code>. MothershipService's purpose is to keep track of ship-per-user, do health checks, etc. <code>Ship</code> is running some (untrusted) user code.</p>
<pre><code>MothershipService Ship-user1
| | ---------- | |---vol1
|..............| -----. |--------|
\
\ Ship-user2
'- | |---vol2
|--------|
</code></pre>
<p>I can manage fine to get up the ship service</p>
<pre><code>> kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/ship-0 1/1 Running 0 7d 10.244.0.91 minikube <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/ship ClusterIP None <none> 8000/TCP 7d app=ship
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7d <none>
NAME READY AGE CONTAINERS IMAGES
statefulset.apps/ship 1/1 7d ship ship
</code></pre>
<p>My question is how do I go about testing this via <code>curl</code> or a browser? These are all backend services so <code>NodePort</code> seems not the right approach since none of this should be accessible to the public. Eventually I will build a test-suite for all this and deploy on GKE.</p>
<p>ship.yml (pseudo-spec)</p>
<pre><code>kind: Service
metadata:
name: ship
spec:
ports:
- port: 8000
name: ship
clusterIP: None # headless service
..
---
kind: StatefulSet
metadata:
name: ship
spec:
serviceName: "ship"
replicas: 1
template:
spec:
containers:
- name: ship
image: ship
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8000
name: ship
..
</code></pre>
| Pithikos | <p>One possibility is to use the <code>kubectl port-forward</code> command to expose the pod port locally on your system. For example, if I'm use this deployment to run a simple web server listening on port 8000:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: example
name: example
spec:
selector:
matchLabels:
app: example
template:
metadata:
labels:
app: example
spec:
containers:
- args:
- --port
- "8000"
image: docker.io/alpinelinux/darkhttpd
name: web
ports:
- containerPort: 8000
name: http
</code></pre>
<p>I can expose that on my local system by running:</p>
<pre><code>kubectl port-forward deploy/example 8000:8000
</code></pre>
<p>As long as that <code>port-forward</code> command is running, I can point my browser (or <code>curl</code>) at <code>http://localhost:8000</code> to access the service.</p>
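<p>The same works against your StatefulSet; since <code>ship-0</code> is a single pod you can forward straight to it (assuming the container really listens on 8000, as in your spec):</p>
<pre><code>kubectl port-forward pod/ship-0 8000:8000
curl http://localhost:8000
</code></pre>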
<hr />
<p>Alternately, I can use <code>kubectl exec</code> to run commands (like <code>curl</code> or <code>wget</code>) inside the pod:</p>
<pre><code>kubectl exec -it web -- wget -O- http://127.0.0.1:8000
</code></pre>
| larsks |
<p>I have deployed a java application (backend) in openshift, and i have spun up 3 pods backend-1-abc, backend-1-xyz and backend-1-def. </p>
<p>How can I get the list of all the pod names for this service "backend"? Is it possible to obtain it programmatically, or is there any endpoint exposed in OpenShift to obtain this?</p>
| Pramod S | <p>Are you saying you have actually created three separate Pod definitions with those names? Are you not using a DeploymentConfig or StatefulSet?</p>
<p>If you were using StatefulSet the names would be predictable.</p>
<p>Either way, the Pods would usually be set up with labels and could use a command like:</p>
<pre><code>oc get pods --selector app=myappname
</code></pre>
<p>Perhaps have a read of:</p>
<ul>
<li><a href="https://www.openshift.com/deploying-to-openshift/" rel="nofollow noreferrer">https://www.openshift.com/deploying-to-openshift/</a></li>
</ul>
<p>It touches on labelling and querying based on labels.</p>
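<p>If you just need the names for scripting, the output-format flags help, for example:</p>
<pre><code># one name per line, prefixed with the resource type (pod/...)
oc get pods --selector app=myappname -o name

# space-separated bare names via jsonpath
oc get pods --selector app=myappname -o jsonpath='{.items[*].metadata.name}'
</code></pre>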
<p>Please provide more details about how you are creating the deployment if want more details/options.</p>
| Graham Dumpleton |
<p>Inventory file (inventory/k8s.yaml):</p>
<pre><code>plugin: kubernetes.core.k8s
connections:
- kubeconfig: ~/.kube/config
context: 'cluster-2'
</code></pre>
<p>Task file (roles/common/tasks/main.yaml):</p>
<pre><code># Method 1: Using `kubernetes.core` plugin to list the pod names:
- name: Get a list of all pods from any namespace
kubernetes.core.k8s_info:
kind: Pod
register: pod_list
- name: Print pod names
debug:
msg: "pod_list: {{ pod_list | json_query('resources[*].metadata.name') }} "
# Method 2: Using `shell` command to list the pod names:
- name: Get node names
shell: kubectl get pods
register: pod_list2
- name: Print pod names
debug:
msg: "{{ pod_list2.stdout }}"
</code></pre>
<p>Ansible config (ansible.cfg):</p>
<pre><code>[inventory]
enable_plugins = host_list, auto, yaml, ini, kubernetes.core.k8s
</code></pre>
<p>Main file (main.yaml):</p>
<pre><code>---
- hosts: localhost
gather_facts: false
collections:
- azure.azcollection
- kubernetes.core
roles:
- "common"
</code></pre>
<p>Running command to execute task: <code>ansible-playbook main.yaml -i cluster-2/k8s.yaml -e role=common -e cluster_name=cluster-2</code></p>
<p>Question:
I am running the above configs to get the pods from the remote cluster mentioned in the inventory file. But the problem is, I am still getting the pod names from the local cluster and not cluster-2, in both Method 1 and Method 2.</p>
<p>k8s plugin should get the list of pods from cluster-2 as described in the inventory file. How can I connect to remote kubernetes cluster?</p>
<p>I also checked output with <code>-vvvv</code>:</p>
<pre><code>ansible-playbook [core 2.14.0]
config file = /Users/test/u/apps/ansible.cfg
configured module search path = ['/Users/test/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/test/Library/Python/3.9/lib/python/site-packages/ansible
ansible collection location = /Users/test/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/test/Library/Python/3.9/bin/ansible-playbook
python version = 3.9.12 (main, Mar 26 2022, 15:52:10) [Clang 13.0.0 (clang-1300.0.29.30)] (/usr/local/opt/python@3.9/bin/python3.9)
jinja version = 3.1.2
libyaml = True
Using /Users/test/u/apps/ansible.cfg as config file
setting up inventory plugins
Loading collection kubernetes.core from /Users/test/.ansible/collections/ansible_collections/kubernetes/core
</code></pre>
| RNK | <p>You're trying to use both the kubernetes inventory plugin <strong>and</strong> the <code>k8s_info</code> module, and because of that you're getting conflicting results. The two don't have anything to do with each other.</p>
<h2>The inventory module</h2>
<p>The kubernetes inventory module is -- I think -- a weird beast; it produces an ansible inventory in which the pods in your cluster are presented as Ansible hosts. To see a list of all the pod names in your cluster, you could write a playbook like this:</p>
<pre><code>- hosts: all
gather_facts: false
tasks:
- name: Print pod names
debug:
msg: "{{ inventory_hostname }}"
</code></pre>
<p>This will respect the context you've configured in your kubernetes inventory plugin configuration. For example, if I have in <code>inventory/k8s.yaml</code> the following:</p>
<pre><code>plugin: kubernetes.core.k8s
connections:
- kubeconfig: ./kubeconfig
context: 'kind-cluster2'
</code></pre>
<p>Then the above playbook will list the pod names from <code>kind-cluster2</code>, regardless of the <code>current-context</code> setting in my <code>kubeconfig</code> file. In my test environment, this produces:</p>
<pre><code>PLAY [all] *********************************************************************
TASK [Print pod names] *********************************************************
ok: [kubernetes] => {
"msg": "kubernetes"
}
ok: [coredns-565d847f94-2shl6_coredns] => {
"msg": "coredns-565d847f94-2shl6_coredns"
}
ok: [coredns-565d847f94-md57c_coredns] => {
"msg": "coredns-565d847f94-md57c_coredns"
}
ok: [kube-dns] => {
"msg": "kube-dns"
}
ok: [etcd-cluster2-control-plane_etcd] => {
"msg": "etcd-cluster2-control-plane_etcd"
}
ok: [kube-apiserver-cluster2-control-plane_kube-apiserver] => {
"msg": "kube-apiserver-cluster2-control-plane_kube-apiserver"
}
ok: [kube-controller-manager-cluster2-control-plane_kube-controller-manager] => {
"msg": "kube-controller-manager-cluster2-control-plane_kube-controller-manager"
}
ok: [kube-scheduler-cluster2-control-plane_kube-scheduler] => {
"msg": "kube-scheduler-cluster2-control-plane_kube-scheduler"
}
ok: [kindnet-nc27b_kindnet-cni] => {
"msg": "kindnet-nc27b_kindnet-cni"
}
ok: [kube-proxy-9chgt_kube-proxy] => {
"msg": "kube-proxy-9chgt_kube-proxy"
}
ok: [local-path-provisioner-684f458cdd-925v5_local-path-provisioner] => {
"msg": "local-path-provisioner-684f458cdd-925v5_local-path-provisioner"
}
PLAY RECAP *********************************************************************
coredns-565d847f94-2shl6_coredns : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
coredns-565d847f94-md57c_coredns : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
etcd-cluster2-control-plane_etcd : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
kindnet-nc27b_kindnet-cni : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
kube-apiserver-cluster2-control-plane_kube-apiserver : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
kube-controller-manager-cluster2-control-plane_kube-controller-manager : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
kube-dns : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
kube-proxy-9chgt_kube-proxy : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
kube-scheduler-cluster2-control-plane_kube-scheduler : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
kubernetes : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
local-path-provisioner-684f458cdd-925v5_local-path-provisioner : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
</code></pre>
<p>The key point here is that your inventory will consist of a list of pods. I've never found this particularly useful.</p>
<h2>The <code>k8s_info</code> module</h2>
<p>The <code>k8s_info</code> queries a kubernetes cluster for a list of objects. It doesn't care about your inventory configuration -- it will run on whichever target host you've defined for your play (probably <code>localhost</code>) and perform the rough equivalent of <code>kubectl get <whatever></code>. If you want to use an explicit context, you need to set that as part of your module parameters. For example, to see a list of pods in <code>kind-cluster2</code>, I could use the following playbook:</p>
<pre><code>- hosts: localhost
gather_facts: false
tasks:
- kubernetes.core.k8s_info:
kind: pod
kubeconfig: ./kubeconfig
context: kind-cluster2
register: pods
- debug:
msg: "{{ pods.resources | json_query('[].metadata.name') }}"
</code></pre>
<p>Which in my test environment produces as output:</p>
<pre><code>PLAY [localhost] ***************************************************************
TASK [kubernetes.core.k8s_info] ************************************************
ok: [localhost]
TASK [debug] *******************************************************************
ok: [localhost] => {
"msg": [
"coredns-565d847f94-2shl6",
"coredns-565d847f94-md57c",
"etcd-cluster2-control-plane",
"kindnet-nc27b",
"kube-apiserver-cluster2-control-plane",
"kube-controller-manager-cluster2-control-plane",
"kube-proxy-9chgt",
"kube-scheduler-cluster2-control-plane",
"local-path-provisioner-684f458cdd-925v5"
]
}
PLAY RECAP *********************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
</code></pre>
<hr />
<p>In conclusion: you probably want to use <code>k8s_info</code> rather than the inventory plugin, and you'll need to configure the module properly by setting the <code>context</code> (and possibly the <code>kubeconfig</code>) parameters when you call the module.</p>
<hr />
<blockquote>
<p>Is there any way I can define context and kubeconfig outside of the tasks (globally) if I am using k8s_info module?</p>
</blockquote>
<p>According to <a href="https://docs.ansible.com/ansible/latest/collections/kubernetes/core/k8s_info_module.html" rel="nofollow noreferrer">the documentation</a>, you could set the <code>K8S_AUTH_KUBECONFIG</code> and <code>K8S_AUTH_CONTEXT</code> environment variables if you want to globally configure the settings for the <code>k8s_info</code> module. You could also write your task like this:</p>
<pre><code> - kubernetes.core.k8s_info:
kind: pod
kubeconfig: "{{ k8s_kubeconfig }}"
context: "{{ k8s_context }}"
register: pods
</code></pre>
<p>And then define the <code>k8s_kubeconfig</code> and <code>k8s_context</code> variables somewhere else in your Ansible configuration (e.g., as group vars). This makes it easy to retarget things to a different cluster with only a single change.</p>
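<p>For example, a minimal <code>group_vars/all.yaml</code> sketch pointing at the cluster from your inventory file:</p>
<pre><code># group_vars/all.yaml
k8s_kubeconfig: ~/.kube/config
k8s_context: cluster-2
</code></pre>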
| larsks |
<p>I'm using <code>Kubernetes v1.23.16 cluster (One master and three workers)</code> bare metal based.</p>
<p>I have created a couple of services, each in its own namespace, as follows.</p>
<pre><code>$ kubectl get services --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11d
app1 app1-service ClusterIP 10.103.151.235 <none> 80/TCP 19h
app2 app2-service ClusterIP 10.105.88.151 <none> 80/TCP 11d
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 11d
</code></pre>
<p>Now I have the below <code>nginx Ingress resource yml</code> to access the services from outside. For example, I would like to access them as given below.</p>
<ol>
<li><a href="http://web.example.com/app1" rel="nofollow noreferrer">http://web.example.com/app1</a></li>
<li><a href="http://web.example.com/app2" rel="nofollow noreferrer">http://web.example.com/app2</a></li>
</ol>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
name: app-ingress
spec:
ingressClassName: nginx
rules:
- host: web.example.com
http:
paths:
- path: /app1
pathType: Prefix
backend:
service:
name: app1-service
port:
number: 80
- path: /app2
pathType: Prefix
backend:
service:
name: app2-service
port:
number: 80
</code></pre>
<p>When I apply the <code>nginx resource yml</code> file it gives a <code>service not found</code> error.</p>
<pre><code>$ kubectl describe ingress app-ingress
Name: app-ingress
Labels: <none>
Namespace: default
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
web.example.com
/app1 app1-service:80 (<error: endpoints "app1-service" not found>)
/app2 app2-service:80 (<error: endpoints "app2-service" not found>)
Annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2
</code></pre>
<p>My application services are running in separate namespaces, while my <code>nginx resource yml</code> is in the <code>default</code> namespace. So how do I configure the nginx resource file to access both of my services?</p>
| user4948798 | <p>An Ingress resource can only refer to Services in the same namespace as the Ingress. To manage two paths on the same hostname that lead to backends in different namespaces, you will need two separate Ingress resources.</p>
<p>In the <code>app1</code> namespace:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: app-ingress
namespace: app1
spec:
ingressClassName: nginx
rules:
- host: web.example.com
http:
paths:
- path: /app1
pathType: Prefix
backend:
service:
name: app1-service
port:
number: 80
</code></pre>
<p>And in the <code>app2</code> namespace:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: app-ingress
namespace: app2
spec:
ingressClassName: nginx
rules:
- host: web.example.com
http:
paths:
- path: /app2
pathType: Prefix
backend:
service:
name: app2-service
port:
number: 80
</code></pre>
<p>With these resources in place, requests for <a href="http://web.example.com/app1" rel="nofollow noreferrer">http://web.example.com/app1</a> will go to app1-service, and requests for <a href="http://web.example.com/app2" rel="nofollow noreferrer">http://web.example.com/app2</a> will go to app2-service.</p>
<p>NB: I've removed your <code>nginx.ingress.kubernetes.io/rewrite-target</code> annotation because you're not doing any regex-based path rewriting in this example.</p>
<hr />
<p>I've put a deployable example online <a href="https://github.com/larsks/so-example-75417732-one-host-two-apps" rel="nofollow noreferrer">here</a>.</p>
| larsks |
<p>With kubectl, I know i can run below command if I want to see specific resources YAML file</p>
<pre><code>kubectl -n <some namespace> get <some resource> <some resource name> -o yaml
</code></pre>
<p>How would I get this same data using Python's kubernetes-client?
Everything I've found so far only talks about creating a resource from a given yaml file.</p>
<p>In looking at <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/README.md" rel="nofollow noreferrer">docs</a>, I noticed that each resource type generally has a <strong>get_api_resources()</strong> function which returns a <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1APIResourceList.md" rel="nofollow noreferrer">V1ApiResourceList</a>, where each item is a <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1APIResource.md" rel="nofollow noreferrer">V1ApiResource</a>. I was hoping there would be a way to get the resource's yaml-output by using a V1ApiResource object, but it doesn't appear that's the way to go about it.</p>
<p>Do you all have any suggestions ? Is this possible with kubernetes-client API ?</p>
| jlrivera81 | <p>If you take a look at the methods available on an object, e.g.:</p>
<pre><code>>>> import kubernetes.config
>>> client = kubernetes.config.new_client_from_config()
>>> core = kubernetes.client.CoreV1Api(client)
>>> res = core.read_namespace('kube-system')
>>> dir(res)
['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_api_version', '_kind', '_metadata', '_spec', '_status', 'api_version', 'attribute_map', 'discriminator', 'kind', 'local_vars_configuration', 'metadata', 'openapi_types', 'spec', 'status', 'to_dict', 'to_str']
</code></pre>
<p>...you'll see there is a <code>to_dict</code> method. That returns the object as
a dictionary, which you can then serialize to YAML or JSON or
whatever:</p>
<pre><code>>>> import yaml
>>> print(yaml.safe_dump(res.to_dict()))
api_version: v1
kind: Namespace
metadata:
[...]
</code></pre>
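<p>As a concrete sketch of the use case in the question (roughly the equivalent of <code>kubectl -n web get deployment myapp -o yaml</code>; the namespace and deployment name here are placeholders), the same pattern works with any of the typed APIs, e.g. <code>AppsV1Api</code>:</p>
<pre><code>import yaml
import kubernetes.client
import kubernetes.config

api_client = kubernetes.config.new_client_from_config()
apps = kubernetes.client.AppsV1Api(api_client)

# Fetch the Deployment object and serialize it to YAML
dep = apps.read_namespaced_deployment(name="myapp", namespace="web")
print(yaml.safe_dump(dep.to_dict()))
</code></pre>
<p>Note that <code>to_dict()</code> uses the Python attribute names (e.g. <code>api_version</code>), so the output is not byte-for-byte identical to what <code>kubectl -o yaml</code> prints, but it contains the same data.</p>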
| larsks |
<p>I want to use the REST API to update a deployment.
I tested it with Postman, but I always get a 415 back.</p>
<hr>
<p>the info is as follows:</p>
<p><strong>type:</strong>
PATCH</p>
<p><strong>url:</strong> <a href="https://k8sClusterUrl:6443/apis/extensions/v1beta1/namespaces/ns/deployments/peer0" rel="nofollow noreferrer">https://k8sClusterUrl:6443/apis/extensions/v1beta1/namespaces/ns/deployments/peer0</a></p>
<p><strong>header:</strong> </p>
<pre><code>Authorization: bearer token
Content-Type:application/json
</code></pre>
<p><strong>body:</strong></p>
<pre><code>{
"kind": "Deployment",
"spec":
{
"template":
{
"spec":
{
"containers":[
{
"$setElementOrder/volumeMounts":[{"mountPath":"/host/var/run/"},{"mountPath":"/mnt"}],
"name":"peer0",
"image":"hyperledger/fabric-peer:x86_64-1.1.0"}
]
}
}
}
}
</code></pre>
<p><strong>response:</strong></p>
<pre><code>{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "the server responded with the status code 415 but did not return more information",
"details": {},
"code": 415
}
</code></pre>
<hr>
<p>I have multiple containers in this pod, and I only want to apply the patch to the specific container <code>peer0</code>.<br>
Does the <code>$setElementOrder</code> directive make any difference here?</p>
| ling | <p><code>415</code> is invalid media type.</p>
<p>In this case, you need to set the <code>Content-Type</code> header to one of the patch media types the API server accepts: <code>application/json-patch+json</code>, <code>application/merge-patch+json</code> or <code>application/strategic-merge-patch+json</code>. Since your body uses strategic-merge-patch directives such as <code>$setElementOrder</code>, use <code>application/strategic-merge-patch+json</code> (you can see this in the documentation <a href="https://kubernetes.io/docs/reference/federation/extensions/v1beta1/operations/" rel="nofollow noreferrer">here</a>) </p>
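<p>For example (a sketch only; the URL mirrors the one in the question, <code>$TOKEN</code> is your bearer token, and <code>patch.json</code> holds the patch body shown above):</p>
<pre><code>curl -k -X PATCH \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/strategic-merge-patch+json" \
  -d @patch.json \
  https://k8sClusterUrl:6443/apis/extensions/v1beta1/namespaces/ns/deployments/peer0
</code></pre>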
| Jeff Foster |
<p>I have an <code>ingress.yaml</code> with two paths; each to one of my microfrontends. However I'm really struggling to get the rewrite-target to work. mf1 loads correctly, but mf2 doesn't. I've done some research and know I need to use <a href="https://github.com/kubernetes/ingress-nginx/tree/main/docs/examples/rewrite#examples" rel="nofollow noreferrer">Captured Groups</a>, but can't seem to properly implement this. How do I do that?</p>
<p>This is what my ingress looks like:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: http-ingress
annotations:
kubernetes.io/ingress.class: public
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: mf1
port:
number: 80
- path: /mf2
pathType: Prefix
backend:
service:
name: mf2
port:
number: 80
</code></pre>
| Furkan ΓztΓΌrk | <p>You need to use a regular expression capture group in your <code>path</code> expression, and then reference the capture group in your <code>.../rewrite-target</code> annotation.</p>
<p>That might look like this:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: http-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: "/$2"
spec:
rules:
- http:
paths:
- path: /()(.*)
pathType: Prefix
backend:
service:
name: backend1
port:
name: http
- path: /mf2(/|$)(.*)
pathType: Prefix
backend:
service:
name: backend2
port:
name: http
</code></pre>
<p>We need to ensure that for both rules, capture group <code>$2</code> contains the desired path. For the first rule (<code>path: /</code>), we have an empty group <code>$1</code> (because it's not necessary here), with the entire path captured in <code>$2</code>.</p>
<p>For the second rule, we match either <code>/mf2</code> followed by either a <code>/path...</code> or as the end of the url (this ensures we don't erroneously match <code>/mf2something</code>). Group <code>$1</code> will contain the <code>/</code> (or nothing), and the path goes into <code>$2</code>.</p>
<p>In both cases, the rewritten path (<code>/$2</code>) will have what we want.</p>
| larsks |
<p>I want to create a script that automatically runs <code>port-forwarding</code> for a specific pod (<code>app3</code>; I have multiple apps in this namespace and need to target just <code>app3</code>) whenever I run the script.</p>
<p>e.g.</p>
<p><code>kubectl port-forward pods/app3-86499b66d-dfwf7 8000:8000 -n web</code></p>
<p>I've started with</p>
<p><code>kubectl get pod -n webide-system | grep app3</code></p>
<p>The output is:</p>
<p><code>app3-86499b66d-dfwf7 1/1 Running 0 18h</code></p>
<p>However, I'm not sure how to take the output (the pod name) and run the port forwarding against it.
The part in bold below is constant:</p>
<p><strong>pods/app3</strong>-86499b66d-dfwf7</p>
<p>And this is changing for each deployment</p>
<pre><code>-86499b66d-dfwf7
</code></pre>
<p>Any idea how to make it works with a script?</p>
| Beno Odr | <pre><code>POD_NAME=`kubectl get pod -n webide-system | grep app3 | sed 's/ .*//'`
kubectl port-forward pods/$POD_NAME 8000:8000 -n web
</code></pre>
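<p>A slightly more robust variant (just a sketch; it assumes your pods carry an <code>app=app3</code> label and that you use one consistent namespace) avoids parsing the table output by asking kubectl for the resource name directly:</p>
<pre><code>POD_NAME=$(kubectl get pod -n web -l app=app3 -o name | head -n 1)
kubectl port-forward -n web "$POD_NAME" 8000:8000
</code></pre>
<p>With <code>-o name</code> the output is already in the <code>pod/app3-xxxxx-yyyyy</code> form that <code>kubectl port-forward</code> accepts.</p>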
| Beta |
<p>On a Kubernetes cluster, I have multiple <code>Deployment</code> resources. For security, I am using a sidecar proxy pattern where the <code>Service</code> will proxy traffic to the sidecar, which will ensure authentication before passing on to the deployed application.</p>
<p>I am trying to set up Kustomize to do this. Since the sidecar definition is likely environment specific, I don't want to include the sidecar in my base manifests, but would like it to be an overlay. Since I have multiple deployments that will need to attach that sidecar, it seemed appropriate to have the sidecar specification be a common shared component. This seemed like appropriate use of the Kustomize <code>Component</code> resource, but perhaps I'm wrong.</p>
<p>I have something similar to the following:</p>
<pre><code>.
βββ base
β βββ app1
β β βββ deployment.yaml
β β βββ kustomization.yaml
β βββ app2
β β βββ deployment.yaml
β β βββ kustomization.yaml
β βββ app3
β βββ deployment.yaml
β βββ kustomization.yaml
βββ components
β βββ sidecar
β βββ deployment-sidecar.yaml
β βββ kustomization.yaml
βββ overlays
βββ dev
βββ kustomization.yaml
</code></pre>
<p>I'd like the sidecar component to be applied to the 3 app deployments, but I can't seem to find a way to do this. Am I misusing components here?</p>
<p>My <code>components/sidecar/kustomization.yaml</code> file looks like:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patches:
- path: deployment-sidecar.yaml
target:
labelSelector: xxx
</code></pre>
<p>This works, however it specifies the target of the patch in the component, whereas I would like to leave the component more generic and instead specify the target in <code>overlays/dev</code>.</p>
<p>Is there a better way to be handling this? In summary, I want the overlay to be able to define when the sidecar should be added, and to which specific deployments to add it to.</p>
| Mike | <blockquote>
<p>In summary, I want the overlay to be able to define when the sidecar should be added, and to which specific deployments to add it to.</p>
</blockquote>
<p>My first thought was that you could have a label that means "apply the sidecar patch", and use that in the Component:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patches:
- path: deployment-sidecar.yaml
target:
labelSelector: "inject-sidecar=true"
</code></pre>
<p>And then in your overlay(s), use a patch to apply that label to specific deployments:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
components:
- ../../sidecar
patches:
- target:
kind: Deployment
labelSelector: "app=app1"
patch: |
- op: add
path: /metadata/labels/inject-sidecar
value: "true"
</code></pre>
<p>Unfortunately, this won't work because patches are applied <strong>after</strong> processing all resources and components.</p>
<p>We can still do this, but it requires an intermediate stage. We can get that by creating another component inside the <code>dev</code> overlay that is responsible for applying the labels. In <code>overlays/dev/apply-labels/kustomization.yaml</code> you have a <code>kustomization.yaml</code> that contains the logic for applying the <code>inject-sidecar</code> label to specific Deployments (using a label selector, name patterns, or other criteria):</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patches:
- target:
kind: Deployment
labelSelector: "app=app1"
patch: |
- op: add
path: /metadata/labels/inject-sidecar
value: "true"
</code></pre>
<p>And then in <code>overlays/dev/kustomization.yaml</code> you have:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
components:
- apply-labels
- ../../components/sidecar
</code></pre>
<p>This gets you what you want:</p>
<ul>
<li>The sidecar patch is specified in a single place</li>
<li>Your overlay determines to which deployments you apply the sidecar patch</li>
</ul>
<p>There's a level of complexity here that is only necessary if:</p>
<ul>
<li>You have multiple overlays</li>
<li>You want to selectively apply the sidecar only to some deployments</li>
<li>You want the overlay to control to which deployments the patch is applied</li>
</ul>
<p>If any of those things aren't true you can simplify the configuration.</p>
| larsks |
<p>In the base Ingress file I have added the following annotation, <code>nginx.ingress.kubernetes.io/auth-snippet</code>, and it needs to be removed in one of the environments.</p>
<p>Base Ingress:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress
annotations:
nginx.ingress.kubernetes.io/auth-snippet: test
</code></pre>
<p>I created a ingress-patch.yml in overlays and added the below</p>
<pre><code>- op: remove
path: /metadata/annotations/nginx.ingress.kubernetes.io/auth-snippet
</code></pre>
<p>But it gives the below error when executing Kustomize Build</p>
<pre><code>Error: remove operation does not apply: doc is missing path: "/metadata/annotations/nginx.ingress.kubernetes.io/auth-snippet": missing value
</code></pre>
| Container-Man | <p>The path <code>/metadata/annotations/nginx.ingress.kubernetes.io/auth-snippet</code> doesn't work because <code>/</code> is the character that JSONPath uses to separate elements in the document; there's no way for a JSONPath parser to know that the <code>/</code> in <code>nginx.ingress.kubernetes.io/auth-snippet</code> means something different from the <code>/</code> in <code>/metadata/annotations</code>.</p>
<p>The <a href="https://www.rfc-editor.org/rfc/rfc6901" rel="nofollow noreferrer">JSON Pointer RFC</a> (which is the syntax used to specify the <code>path</code> component of a patch) tells us that we need to escape <code>/</code> characters using <code>~1</code>. If we have the following in <code>ingress.yaml</code>:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress
annotations:
example-annotation: foo
nginx.ingress.kubernetes.io/auth-snippet: test
</code></pre>
<p>And write our <code>kustomization.yaml</code> like this:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ingress.yaml
patches:
- target:
kind: Ingress
name: ingress
patch: |
- op: remove
path: /metadata/annotations/nginx.ingress.kubernetes.io~1auth-snippet
</code></pre>
<p>Then the output of <code>kustomize build</code> is:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
example-annotation: foo
name: ingress
</code></pre>
| larsks |
<p>I am testing moleculer microservices framework to setting up an infraestructure. I will to use typescript (<a href="https://github.com/moleculerjs/moleculer-template-project-typescript" rel="nofollow noreferrer">https://github.com/moleculerjs/moleculer-template-project-typescript</a>). My idea is according the documentation is:</p>
<ul>
<li>create one project with the API Gateway => make a Docker image => make a k8s deployment with N replicas</li>
<li>create one project for microservice 1 => dockerize => make a k8s deployment with N replicas</li>
<li>create one project for microservice 2 => dockerize => make a k8s deployment with N replicas
...</li>
<li>create one project for microservice N => dockerize => make a k8s deployment with N replicas</li>
</ul>
<p>I will use redis as transporter. I want to use redis also in development.</p>
<p>I have this doubt because you can create all the microservices in the same project, but that way you are essentially developing a monolithic application (and in only one thread). I think you need to separate each microservice into an independent (TypeScript) project so that you can later build Docker images from them and create pods in k8s during the deployment phase.</p>
 | dlopezgonzalez | <p>You can separate every microservice into a separate project, but with Moleculer you don't need to. You can put all services into one project. Development will be easy and fast, and at deploy time you can control which services will be loaded. So you can build one Docker image and control the loaded services with environment variables.</p>
<p>E.g. here you can see the <code>SERVICES</code> env var in docker-compose.yml:
<a href="https://moleculer.services/docs/0.14/deploying.html#Docker-Compose" rel="nofollow noreferrer">https://moleculer.services/docs/0.14/deploying.html#Docker-Compose</a></p>
<pre class="lang-yaml prettyprint-override"><code>version: "3.2"
services:
api:
build:
context: .
image: moleculer-demo
container_name: moleculer-demo-api
env_file: docker-compose.env
environment:
SERVICES: api # Runner will start only the 'api' service in this container
PORT: 3000 # Port of API gateway
greeter:
build:
context: .
image: moleculer-demo
container_name: moleculer-demo-greeter
env_file: docker-compose.env
environment:
SERVICES: greeter # Runner will start only the 'greeter' service in this container
</code></pre>
| Icebob |
<p>I have an environment made of pods that address their target environment based on an environment variable called <code>CONF_ENV</code> that could be <code>test</code>, <code>stage</code> or <code>prod</code>.</p>
<p>The application running inside the Pod has the same source code across environments, the configuration file is picked according to the <code>CONF_ENV</code> environment variable.</p>
<p>I'v encapsulated this <code>CONF_ENV</code> in <code>*.properties</code> files just because I may have to add more environment variables later, but I make sure that each property file contains the expected <code>CONF_ENV</code> e.g.:</p>
<ul>
<li><code>test.properites</code> has <code>CONF_ENV=test</code>,</li>
<li><code>prod.properties</code> has <code>CONF_ENV=prod</code>, and so on...</li>
</ul>
<p>I struggle to make this work with Kustomize overlays, because I want to define a <code>ConfigMap</code> as a shared resource across all the pods within the same overlay e.g. <code>test</code> (each pod in their own directory, along other stuff when needed).</p>
<p>So the idea is:</p>
<ul>
<li><code>base/</code> (shared) with the definition of the <code>Namespace</code>, the <code>ConfigMap</code> (and potentially other shared resources</li>
<li><code>base/pod1/</code> with the definition of pod1 picking from the shared <code>ConfigMap</code> (this defaults to <code>test</code>, but in principle it could be different)</li>
</ul>
<p>Then the overlays:</p>
<ul>
<li><code>overlay/test</code> that patches the base with <code>CONF_ENV=test</code> (e.g. for <code>overlay/test/pod1/</code> and so on)</li>
<li><code>overlay/prod/</code> that patches the base with <code>CONF_ENV=prod</code> (e.g. for <code>overlay/prod/pod1/</code> and so on)</li>
</ul>
<p>Each directory with their own <code>kustomize.yaml</code>.</p>
<p>The above doesn't work: when I go into e.g. <code>overlay/test/pod1/</code> and invoke the command <code>kubectl kustomize .</code> to check the output YAML, I get all sorts of errors depending on how I defined the lists for the YAML keys <code>bases:</code> or <code>resources:</code>.</p>
<p>I am trying to share the <code>ConfigMap</code> across the entire <code>CONF_ENV</code> environment in an attempt to <strong>minimize the boilerplate YAML</strong> by leveraging the patching-pattern with Kustomize.</p>
<p>The Kubernetes / Kustomize YAML directory structure works like this:</p>
<pre class="lang-sh prettyprint-override"><code>βββ base
β βββ configuration.yaml # I am trying to share this!
β βββ kustomization.yaml
β βββ my_namespace.yaml # I am trying to share this!
β βββ my-scheduleset-etl-misc
β β βββ kustomization.yaml
β β βββ my_scheduleset_etl_misc.yaml
β βββ my-scheduleset-etl-reporting
β β βββ kustomization.yaml
β β βββ my_scheduleset_etl_reporting.yaml
β βββ test.properties # I am trying to share this!
βββ overlay
βββ test
βββ kustomization.yaml # here I want tell "go and pick up the shared resources in the base dir"
βββ my-scheduleset-etl-misc
β βββ kustomization.yaml
β βββ test.properties # I've tried to share this one level above, but also to add this inside the "leaf" level for a given pod
βββ my-scheduleset-etl-reporting
βββ kustomization.yaml
</code></pre>
<p>The command <code>kubectl</code> with Kustomize:</p>
<ul>
<li>sometimes complains that the shared namespace does not exist:</li>
</ul>
<pre><code>error: merging from generator &{0xc001d99530 { map[] map[]} {{ my-schedule-set-props merge {[CONF_ENV=test] [] [] } <nil>}}}:
id resid.ResId{Gvk:resid.Gvk{Group:"", Version:"v1", Kind:"ConfigMap", isClusterScoped:false}, Name:"my-schedule-set-props", Namespace:""}
does not exist; cannot merge or replace
</code></pre>
<ul>
<li>sometimes doesn't allow to have shared resources inside an overlay:</li>
</ul>
<pre><code>error: loading KV pairs: env source files: [../test.properties]:
security; file '/my/path/to/yaml/overlay/test/test.properties'
is not in or below '/my/path/to/yaml/overlay/test/my-scheduleset-etl-misc'
</code></pre>
<ul>
<li>sometimes doesn't allow cycles when I am trying to have multiple bases - the shared resources and the original pod definition:</li>
</ul>
<pre><code>error: accumulating resources: accumulation err='accumulating resources from '../':
'/my/path/to/yaml/overlay/test' must resolve to a file':
cycle detected: candidate root '/my/path/to/yaml/overlay/test'
contains visited root '/my/path/to/yaml/overlay/test/my-scheduleset-etl-misc'
</code></pre>
<p>The overlay <code>kustomization.yaml</code> files inside the pod dirs have:</p>
<pre class="lang-yaml prettyprint-override"><code>bases:
- ../ # tried with/without this to share the ConfigMap
- ../../../base/my-scheduleset-etl-misc/
</code></pre>
<p>The <code>kustomization.yaml</code> at the root of the overlay has:</p>
<pre class="lang-yaml prettyprint-override"><code>bases:
- ../../base
</code></pre>
<p>The <code>kustomization.yaml</code> at the base dir contains this configuration for the ConfigMap:</p>
<pre class="lang-yaml prettyprint-override"><code># https://gist.github.com/hermanbanken/3d0f232ffd86236c9f1f198c9452aad9
configMapGenerator:
- name: my-schedule-set-props
namespace: my-ss-schedules
envs:
- test.properties
vars:
- name: CONF_ENV
objref:
kind: ConfigMap
name: my-schedule-set-props
apiVersion: v1
fieldref:
fieldpath: data.CONF_ENV
configurations:
- configuration.yaml
</code></pre>
<p>With <code>configuration.yaml</code> containing:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
varReference:
- path: spec/confEnv/value
kind: Pod
</code></pre>
<p>How do I do this?</p>
<p>How do I make sure that I minimise the amount of YAML by sharing all the <code>ConfigMap</code> stuff and the Pods definitions as much as I can?</p>
| TPPZ | <p>If I understand your goal correctly, I think you may be grossly over-complicating things. I <em>think</em> you want a common properties file defined in your base, but you want to override specific properties in your overlays. Here's one way of doing that.</p>
<p>In base, I have:</p>
<pre><code>$ cd base
$ tree
.
βββ example.properties
βββ kustomization.yaml
βββ pod1
βββ kustomization.yaml
βββ pod.yaml
</code></pre>
<p>Where <code>example.properties</code> contains:</p>
<pre><code>SOME_OTHER_VAR=somevalue
CONF_ENV=test
</code></pre>
<p>And <code>kustomization.yaml</code> contains:</p>
<pre><code>resources:
- pod1
configMapGenerator:
- name: example-props
envs:
- example.properties
</code></pre>
<p>I have two overlays defined, <code>test</code> and <code>prod</code>:</p>
<pre><code>$ cd ../overlays
$ tree
.
βββ prod
β βββ example.properties
β βββ kustomization.yaml
βββ test
βββ kustomization.yaml
</code></pre>
<p><code>test/kustomization.yaml</code> looks like this:</p>
<pre><code>resources:
- ../../base
</code></pre>
<p>It's just importing the <code>base</code> without any changes, since the value of <code>CONF_ENV</code> from the <code>base</code> directory is <code>test</code>.</p>
<p><code>prod/kustomization.yaml</code> looks like this:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
configMapGenerator:
- name: example-props
behavior: merge
envs:
- example.properties
</code></pre>
<p>And <code>prod/example.properties</code> looks like:</p>
<pre><code>CONF_ENV=prod
</code></pre>
<p>If I run <code>kustomize build overlays/test</code>, I get as output:</p>
<pre><code>apiVersion: v1
data:
CONF_ENV: test
SOME_OTHER_VAR: somevalue
kind: ConfigMap
metadata:
name: example-props-7245222b9b
---
apiVersion: v1
kind: Pod
metadata:
name: example
spec:
containers:
- command:
- sleep
- 1800
envFrom:
- configMapRef:
name: example-props-7245222b9b
image: docker.io/alpine
name: alpine
</code></pre>
<p>If I run <code>kustomize build overlays/prod</code>, I get:</p>
<pre><code>apiVersion: v1
data:
CONF_ENV: prod
SOME_OTHER_VAR: somevalue
kind: ConfigMap
metadata:
name: example-props-h4b5tc869g
---
apiVersion: v1
kind: Pod
metadata:
name: example
spec:
containers:
- command:
- sleep
- 1800
envFrom:
- configMapRef:
name: example-props-h4b5tc869g
image: docker.io/alpine
name: alpine
</code></pre>
<p>That is, everything looks as you would expect given the configuration in <code>base</code>, but we have provided a new value for <code>CONF_ENV</code>.</p>
<p>You can find all these files <a href="https://github.com/larsks/so-example-71008589" rel="noreferrer">here</a>.</p>
| larsks |
<p>I am trying to deploy a K8s application using Kustomize. Up to now I have done simple implementations where we have a few of the K8s files such as <code>ingress.yaml</code> with something like the following:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingressname
namespace: namespace
labels:
app: appname
spec:
tls:
- hosts:
- $(variable1)
secretName: $(variable2)-tls
</code></pre>
<p>Under my <code>overlays</code> directory for each environment, I then have another <code>kustomize.yaml</code> which gives the values in a configmap:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- path
configMapGenerator:
- behavior: merge
literals:
- variable1=foo
- variable2=bar
name: configmapname
images:
- name: imagename
newName: registryurl
</code></pre>
<p>This works well, but now I need to do something more complicated. Say for example I have multiple ingress. Instead of creating multiple base ingress yaml files, how can I have one base yaml file that creates every ingress based on the values in my overlay file? Is that possible?</p>
| nray | <p>Kustomize isn't a templating system and doesn't do variable substitution. It <em>can</em> perform a variety of YAML patching tricks, so one option you have is to start with a base manifest like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingressname
spec:
tls:
- hosts: []
secretName:
</code></pre>
<p>And then patch it in your <code>kustomization.yaml</code> files:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
patches:
- target:
kind: Ingress
name: ingressname
patch: |
- op: replace
path: /spec/tls
value:
- hosts:
- host1.example.com
secretName: host1-tls
</code></pre>
<p>What I've shown here works well if you have an application consisting of a single Ingress and you want to produce multiple variants (maybe one per cluster, or per namespace, or something). That is, you have:</p>
<ul>
<li>A Deployment</li>
<li>A Service</li>
<li>An Ingress</li>
<li>(etc.)</li>
</ul>
<p>Then you would have one directory for each variant of the app, giving you a layout something like:</p>
<pre><code>.
βββ base
β βββ deployment.yaml
β βββ ingress.yaml
β βββ kustomization.yaml
β βββ service.yaml
βββ overlays
βββ variant1
β βββ kustomization.yaml
βββ variant2
βββ kustomization.yaml
</code></pre>
<p>If your application has <em>multiple</em> Ingress resources, and you want to apply the same patch to all of them, <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/patchMultipleObjects.md" rel="nofollow noreferrer">Kustomize can do that</a>. If you were to modify the patch in your <code>kustomization.yaml</code> so that it looks like this instead:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
patches:
- target:
kind: Ingress
name: ".*"
patch: |
- op: replace
path: /spec/tls
value:
- hosts:
- host1.example.com
secretName: host1-tls
</code></pre>
<p>This would apply the same patch to all matching Ingress resources (which is "all of them", in this case, because we used <code>.*</code> as our match expression).</p>
| larsks |
<p>My Elasticsearch 7.8.0 is running in OCI OKE (Kubernetes running in Oracle Cloud). I want to set up Elasticsearch snapshot backups to OCI Object Storage using the OCI Amazon S3 Compatibility API. I added the repository-s3 plugin and configured ACCESS_KEY and SECRET_KEY in the pods. While creating the repository, I am getting "s_s_l_peer_unverified_exception":</p>
<pre><code>PUT /_snapshot/s3-repository
{
"type": "s3",
"settings": {
"client": "default",
"region": "OCI_REGION",
"endpoint": "OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com",
"bucket": "es-backup"
}
}
</code></pre>
<p>Response:</p>
<pre><code>{
"error" : {
"root_cause" : [
{
"type" : "repository_verification_exception",
"reason" : "[s3-repository] path is not accessible on master node"
}
],
"type" : "repository_verification_exception",
"reason" : "[s3-repository] path is not accessible on master node",
"caused_by" : {
"type" : "i_o_exception",
"reason" : "Unable to upload object [tests-0J3NChNRT9WIQJknHAssKg/master.dat] using a single upload",
"caused_by" : {
"type" : "sdk_client_exception",
"reason" : "Unable to execute HTTP request: Certificate for <es-backup.OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com> doesn't match any of the subject alternative names: [swiftobjectstorage.us-ashburn-1.oraclecloud.com]",
"caused_by" : {
"type" : "s_s_l_peer_unverified_exception",
"reason" : "Certificate for <es-backup.OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com> doesn't match any of the subject alternative names: [swiftobjectstorage.us-ashburn-1.oraclecloud.com]"
}
}
}
},
"status" : 500
}
</code></pre>
| Binoy Thomas | <p>Well you shouldn't be talking to es-backup.OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com where your bucket name is part of the domain. You can try it in your browser and you'll get a similar security warning about certs.</p>
<p>If you look at <a href="https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Tasks/s3compatibleapi.htm#usingAPI" rel="nofollow noreferrer">https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Tasks/s3compatibleapi.htm#usingAPI</a> you'll see a mention of:</p>
<blockquote>
<p>The application must use path -based access. Virtual host-style access (accessing a bucket as bucketname.namespace.compat.objectstorage.region.oraclecloud.com) is not supported.</p>
</blockquote>
<p>AWS is migrating from path based to sub-domain based URLs for S3 (<a href="https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/" rel="nofollow noreferrer">https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/</a>) so the ES S3 plugin is probably defaulting to doing things the new AWS way.</p>
<p>Does it make a difference if you use an https:// URL for the endpoint value? Looking at my 6.8 config I have something like:</p>
<pre><code>{
"s3-repository": {
"type": "s3",
"settings": {
"bucket": "es-backup",
"client": "default",
"endpoint": "https://{namespace}.compat.objectstorage.us-ashburn-1.oraclecloud.com/",
"region": "us-ashburn-1"
}
}
}
</code></pre>
<p>My guess is that providing a full URL for the endpoint implicitly sets the <strong>protocol</strong> and <strong>path_style_access</strong>, or that 6.8 didn't require you to set <strong>path_style_access</strong> to <strong>true</strong> while 7.8 might. Either way, try a full URL or setting <strong>path_style_access</strong> to <strong>true</strong>. Relevant docs at <a href="https://www.elastic.co/guide/en/elasticsearch/plugins/master/repository-s3-client.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/elasticsearch/plugins/master/repository-s3-client.html</a></p>
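<p>For reference, a sketch of what those client settings look like in <code>elasticsearch.yml</code> (assuming the client name <code>default</code> used in the repository settings above, and that your Elasticsearch version still supports the <code>path_style_access</code> setting):</p>
<pre><code>s3.client.default.protocol: https
s3.client.default.endpoint: "OCI_TENANCY.compat.objectstorage.OCI_REGION.oraclecloud.com"
s3.client.default.path_style_access: true
</code></pre>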
| Chase |
<p>Is it possible to provide environment variables that will be set in all pods, instead of configuring them in each pod's spec?</p>
<p>If not natively possible in Kubernetes, what would be an efficient method to accomplish it? We have Helm, but that still requires a lot of duplication.</p>
<p>This old answer suggested "PodPreset" which is no longer part of Kubernetes: <a href="https://stackoverflow.com/q/54550933/591975">Kubernetes - Shared environment variables for all Pods</a></p>
| sean | <p>You could do this using a <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook" rel="nofollow noreferrer">mutating admission webhook</a> to inject the environment variable into the pod manifest.</p>
<p>There are more details on implementing webhooks <a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#request" rel="nofollow noreferrer">here</a>.</p>
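<p>As a rough sketch of what such a webhook returns (the variable name and value are just examples, and the patch assumes the first container already has an <code>env</code> list; a real webhook would need to handle the case where it doesn't), the webhook's <code>AdmissionReview</code> response carries a base64-encoded JSON Patch such as:</p>
<pre><code>[
  {
    "op": "add",
    "path": "/spec/containers/0/env/-",
    "value": {"name": "SHARED_VAR", "value": "shared-value"}
  }
]
</code></pre>
<p>and the response object itself looks roughly like:</p>
<pre><code>{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "response": {
    "uid": "<uid copied from the request>",
    "allowed": true,
    "patchType": "JSONPatch",
    "patch": "<base64 of the JSON Patch above>"
  }
}
</code></pre>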
| larsks |
<p>I refactored my k8s objects to use Kustomization, Components, replacements, patches and got to a good DRY state so that I don't repeat much between 2 apps and between those across dev and test environments. While doing so I am referring to objects outside of the folder (but same repository)</p>
<pre><code>components:
- ../../common-nonprod
- ../../common-nonprod-ui
</code></pre>
<p>My question is: will ArgoCD be able to detect a change in the app if any of the files inside the common folders that this application refers to as <code>components</code> change?</p>
<p>In other words, does ArgoCD perform <code>kustomize build</code> to detect what changed?</p>
| bhantol | <p>Yes. ArgoCD runs <code>kustomize build</code> to realize your manifests before trying to apply them to the cluster. ArgoCD doesn't care which files have changed; it simply cares that the manifests produced by <code>kustomize build</code> differ (or not) from what is currently deployed in the cluster.</p>
| larsks |
<p>I created a Kubernetes cluster through Kops. The configuration and the ssh keys were in a machine that I don't have access to anymore. Is it possible to ssh to the nodes through kops even if I have lost the key? I see there is a command - </p>
<blockquote>
<p>kops get secrets</p>
</blockquote>
<p>This gives me all the secrets. Can I use this to get ssh access to the nodes, and how do I do it?</p>
<p>I see the cluster state is stored in S3. Does it store the secret key as well?</p>
| Anshul Tripathi | <p>You can't recover the private key, but you should be able install a new public key following this procedure:</p>
<pre><code>kops delete secret --name <clustername> sshpublickey admin
kops create secret --name <clustername> sshpublickey admin -i ~/.ssh/newkey.pub
kops update cluster --yes                                # reconfigure the auto-scaling groups
kops rolling-update cluster --name <clustername> --yes   # immediately roll all the machines so they have the new key (optional)
</code></pre>
<p>Taken from this document:</p>
<p><a href="https://github.com/kubernetes/kops/blob/master/docs/security.md#ssh-access" rel="noreferrer">https://github.com/kubernetes/kops/blob/master/docs/security.md#ssh-access</a></p>
| Ben W. |
<p>I want to use the ClusterRole <strong>edit</strong> for some users of my Kubernetes cluster (<a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles</a>).</p>
<p>However, it is unfortunate that the user can then access and modify Resource Quotas and Limit Ranges.</p>
<p>My question is now: How can I grant Users via a RoleBinding access to a namespace, such that the Role is essentially the CluserRole <strong>edit</strong>, but without having any access to Resource Quotas and Limit Ranges?</p>
| tobias | <p>The <code>edit</code> role gives only <em>read</em> access to <code>resourcequotas</code> and <code>limitranges</code>:</p>
<pre><code>- apiGroups:
- ""
resources:
- bindings
- events
- limitranges
- namespaces/status
- pods/log
- pods/status
- replicationcontrollers/status
- resourcequotas
- resourcequotas/status
verbs:
- get
- list
- watch
</code></pre>
<p>If you want a role that doesn't include read access to these resources, just make a copy of the <code>edit</code> role with those resources excluded.</p>
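<p>A sketch of one way to do that (the role name <code>edit-no-quotas</code> is just an example): dump the built-in role, rename it, strip the aggregation labels, drop the unwanted resources, and apply it. Then bind your users to the new role instead of <code>edit</code>:</p>
<pre><code>kubectl get clusterrole edit -o yaml > edit-no-quotas.yaml
# In the file: change metadata.name to edit-no-quotas, remove the
# aggregation labels/annotations, and delete limitranges/resourcequotas
# from the rule shown above.
kubectl apply -f edit-no-quotas.yaml
</code></pre>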
| larsks |
<p>I am learning about k8s and I am trying to make a deployment of MongoDB.
These are my YAMLs:</p>
<p>Deployment</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mongodb-deployment
labels:
app: mongodb
spec:
replicas: 2
selector:
matchLabels:
app: mongodb
template:
metadata:
labels:
app: mongodb
spec:
containers:
- name: mongodb
image: mongo
ports:
- containerPort: 27017
env:
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
secretKeyRef:
name: mongodb-secret
key: mongo-root-username
- name: MONGO_INITIDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mongodb-secret
key: mongo-root-password
</code></pre>
<p>Secrets</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
metadata:
name: mongodb-secret
type: 0paque
data:
mongo-root-username: dXNlcm5hbWU=
mongo-root-password: cGFzc3dvcmQ=
</code></pre>
<p>As you can see the pod is in the CrashLoopBackOff state</p>
<pre><code>kubectl get all
NAME READY STATUS RESTARTS AGE
pod/mongodb-deployment-6ddd5fb89-h9rjz 0/1 ImagePullBackOff 0 7m25s
pod/mongodb-deployment-6ddd5fb89-wvz6p 0/1 ImagePullBackOff 0 7m25s
pod/mongodb-deployment-f7df49f67-2gp4x 0/1 CrashLoopBackOff 5 (43s ago) 3m54s
pod/nginx-deployment-78cc6468fb-22wz5 1/1 Running 0 49m
pod/nginx-deployment-78cc6468fb-6hxq8 1/1 Running 0 49m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 68m
service/nginx-service ClusterIP 10.110.136.45 <none> 80/TCP 34m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/mongodb-deployment 0/2 1 0 7m25s
deployment.apps/nginx-deployment 2/2 2 2 49m
NAME DESIRED CURRENT READY AGE
replicaset.apps/mongodb-deployment-6ddd5fb89 2 2 0 7m25s
replicaset.apps/mongodb-deployment-f7df49f67 1 1 0 3m54s
replicaset.apps/nginx-deployment-78cc6468fb 2 2 2 49m
</code></pre>
<p>And if I describe the pod, it only shows that it backed off. I'm not sure what is going on in there; surely it is a config issue, but then again, I am new to k8s.</p>
<pre><code>kubectl describe pod/mongodb-deployment-f7df49f67-2gp4x
Name: mongodb-deployment-f7df49f67-2gp4x
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Thu, 06 Oct 2022 13:10:09 -0500
Labels: app=mongodb
pod-template-hash=f7df49f67
Annotations: <none>
Status: Running
IP: 172.17.0.7
IPs:
IP: 172.17.0.7
Controlled By: ReplicaSet/mongodb-deployment-f7df49f67
Containers:
mongodb:
Container ID: docker://73f6707f1fc2b5ae690cf3518e35ab05d258a3f209b106c014310b0e38c05f00
Image: mongo
Image ID: docker-pullable://mongo@sha256:2ca8fb22c9522b49fd1f5490dee3e7026a4331b9f904d5acf10a9638c1d1539d
Port: 27017/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 06 Oct 2022 13:11:47 -0500
Finished: Thu, 06 Oct 2022 13:11:47 -0500
Ready: False
Restart Count: 4
Environment:
MONGO_INITDB_ROOT_USERNAME: <set to the key 'mongo-root-username' in secret 'mongodb-secret'> Optional: false
MONGO_INITIDB_ROOT_PASSWORD: <set to the key 'mongo-root-password' in secret 'mongodb-secret'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gpmw4 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m17s default-scheduler Successfully assigned default/mongodb-deployment-f7df49f67-2gp4x to minikube
Normal Pulled 2m13s kubelet Successfully pulled image "mongo" in 2.4540099s
Normal Pulled 2m11s kubelet Successfully pulled image "mongo" in 2.0397836s
Normal Pulled 113s kubelet Successfully pulled image "mongo" in 2.5676941s
Normal Created 84s (x4 over 2m13s) kubelet Created container mongodb
Normal Started 84s (x4 over 2m13s) kubelet Started container mongodb
Normal Pulled 84s kubelet Successfully pulled image "mongo" in 2.2913276s
Warning BackOff 53s (x7 over 2m10s) kubelet Back-off restarting failed container
Normal Pulling 42s (x5 over 2m16s) kubelet Pulling image "mongo"
Normal Pulled 39s kubelet Successfully pulled image "mongo" in 2.3732048s
</code></pre>
| sebsmgzz | <p>If you look at the logs for the failing pod, you will see:</p>
<pre><code>$ k logs deploy/mongodb-deployment
error: missing 'MONGO_INITDB_ROOT_USERNAME' or 'MONGO_INITDB_ROOT_PASSWORD'
both must be specified for a user to be created
</code></pre>
<p>This gives us an idea where we should look for the source of the problem. Taking a closer look at your manifests, we see that you have a typo in your Deployment.</p>
<p>You have:</p>
<pre><code>- name: MONGO_INITIDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mongodb-secret
key: mongo-root-password
</code></pre>
<p>There is an erroneous <code>I</code> in the variable name. You want:</p>
<pre><code>- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mongodb-secret
key: mongo-root-password
</code></pre>
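<p>After correcting the variable name, re-apply the manifest and watch the rollout (a quick sanity check, assuming the file is named <code>deployment.yaml</code>):</p>
<pre><code>kubectl apply -f deployment.yaml
kubectl rollout status deployment/mongodb-deployment
kubectl logs deploy/mongodb-deployment
</code></pre>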
| larsks |
<p>I was looking for a step-by-step tutorial on how to run my Spring Boot, MySQL-backed app on AWS EKS (Elastic Container Service for Kubernetes) using an existing SSL wildcard certificate, and wasn't able to find a complete solution. </p>
<p>The app is a standard self-contained Spring Boot application backed by a MySQL database, running on port 8080. I need to run it with high availability and high redundancy, including the MySQL db, which needs to handle a large number of writes as well as reads. </p>
<p>I decided to go with an EKS-hosted cluster, pushing a custom Docker image to AWS's own private ECR Docker repo, running against an EKS-hosted MySQL cluster, and using an AWS-issued SSL certificate to communicate over HTTPS. Below is my solution, but I'll be very curious to see how it can be done differently.</p>
 | Bostone | <p>This is a step-by-step tutorial. Please don't proceed forward until the previous step is complete. </p>
<p><strong>CREATE EKS CLUSTER</strong></p>
<p>Follow <a href="https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html" rel="nofollow noreferrer">the standard tutorial</a> to create the EKS cluster. Don't do step 4. When you're done you should have a working EKS cluster and you must be able to use the <code>kubectl</code> utility to communicate with the cluster. When executed from the command line you should see the working nodes and other cluster elements using the
<code>kubectl get all --all-namespaces</code> command</p>
<p><strong>INSTALL MYSQL CLUSTER</strong></p>
<p>I used <code>helm</code> to install MySQL cluster following steps from <a href="https://github.com/helm/helm#install" rel="nofollow noreferrer">this tutorial</a>. Here are the steps</p>
<p><strong>Install helm</strong></p>
<p>Since I'm using Macbook Pro with <code>homebrew</code> I used <code>brew install kubernetes-helm</code> command</p>
<p><strong>Deploy MySQL cluster</strong></p>
<p>Note that in <em>MySQL cluster</em> and <em>Kubernetes (EKS) cluster</em>, the word "cluster" refers to 2 different things. Basically you are installing a cluster into a cluster, just like a Russian Matryoshka doll, so your MySQL cluster ends up running on EKS cluster nodes.</p>
<p>I used a 2nd part of <a href="https://www.presslabs.com/code/kubernetes-mysql-operator-aws-kops/" rel="nofollow noreferrer">this tutorial</a> (ignore kops part) to prepare the <code>helm</code> chart and install MySQL cluster. Quoting helm configuration:</p>
<pre><code>$ kubectl create serviceaccount -n kube-system tiller
serviceaccount "tiller" created
$ kubectl create clusterrolebinding tiller-crule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io "tiller-crule" created
$ helm init --service-account tiller --wait
$HELM_HOME has been configured at /home/presslabs/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
$ helm repo add presslabs https://presslabs.github.io/charts
"presslabs" has been added to your repositories
$ helm install presslabs/mysql-operator --name mysql-operator
NAME: mysql-operator
LAST DEPLOYED: Tue Aug 14 15:50:42 2018
NAMESPACE: default
STATUS: DEPLOYED
</code></pre>
<p>I ran all commands exactly as quoted above.</p>
<p>Before creating a cluster, you need a secret that contains the ROOT_PASSWORD key.</p>
<p>Create a file named <code>example-cluster-secret.yaml</code> and copy into it the following YAML code</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: my-secret
type: Opaque
data:
# root password is required to be specified
ROOT_PASSWORD: Zm9vYmFy
</code></pre>
<p>But what is that <code>ROOT_PASSWORD</code>? Turns out this is the base64-encoded password that you're planning to use with your MySQL root user. Say you want <code>root/foobar</code> (please don't actually use <code>foobar</code>). The easiest way to encode the password is to use one of the websites such as <a href="https://www.base64encode.org/" rel="nofollow noreferrer">https://www.base64encode.org/</a> which encodes <code>foobar</code> into <code>Zm9vYmFy</code></p>
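<p>A quicker (and safer) alternative to pasting the password into a website is to encode it locally on your machine:</p>
<pre><code>$ echo -n 'foobar' | base64
Zm9vYmFy
</code></pre>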
<p>When ready execute <code>kubectl apply -f example-cluster-secret.yaml</code> which will create a new secret</p>
<p>Then you need to create a file named <code>example-cluster.yaml</code> and copy into it the following YAML code:</p>
<pre><code>apiVersion: mysql.presslabs.org/v1alpha1
kind: MysqlCluster
metadata:
name: my-cluster
spec:
replicas: 2
secretName: my-secret
</code></pre>
<p>Note how the <code>secretName</code> matches the secret name you just created. You can change it to something more meaningful as long as it matches in both files. Now run <code>kubectl apply -f example-cluster.yaml</code> to finally create a MySQL cluster. Test it with</p>
<pre><code>$ kubectl get mysql
NAME AGE
my-cluster 1m
</code></pre>
<p>Note that I did not configure a backup as described in the rest of the article. You don't need to do it for the database to operate. But how to access your db? At this point the mysql service is there but it doesn't have external IP. In my case I don't even want that as long as my app that will run on the same EKS cluster can access it. </p>
<p>However you can use <code>kubectl</code> port forwarding to access the db from your dev box that runs <code>kubectl</code>. Type in this command: <code>kubectl port-forward services/my-cluster-mysql 8806:3306</code>. Now you can access your db from <code>127.0.0.1:8806</code> using user <code>root</code> and the non-encoded password (<code>foobar</code>). Type this into separate command prompt: <code>mysql -u root -h 127.0.0.1 -P 8806 -p</code>. With this you can also use MySQL Workbench to manage your database just don't forget to run <code>port-forward</code>. And of course you can change 8806 to other port of your choosing</p>
<p><strong>PACKAGE YOUR APP AS A DOCKER IMAGE AND DEPLOY</strong></p>
<p>To deploy your Spring boot app into EKS cluster you need to package it into a Docker image and deploy it into the Docker repo. Let's start with a Docker image. There are plenty tutorials on this <a href="https://spring.io/guides/gs/spring-boot-docker/" rel="nofollow noreferrer">like this one</a> but the steps are simple:</p>
<p>Put your generated, self-contained, spring boot jar file into a directory and create a text file with this exact name: <code>Dockerfile</code> in the same directory and add the following content to it:</p>
<pre><code>FROM openjdk:8-jdk-alpine
MAINTAINER me@mydomain.com
LABEL name="My Awesome Docker Image"
# Add spring boot jar
VOLUME /tmp
ADD myapp-0.1.8.jar app.jar
EXPOSE 8080
# Database settings (maybe different in your app)
ENV RDS_USERNAME="my_user"
ENV RDS_PASSWORD="foobar"
# Other options
ENV JAVA_OPTS="-Dverknow.pypath=/"
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar" ]
</code></pre>
<p>Now simply run a Docker command from the same folder to create an image. Of course that requires Docker client installed on your dev box.</p>
<p><code>$ docker build -t myapp:0.1.8 --force-rm=true --no-cache=true .</code></p>
<p>If all goes well you should see your image listed with <code>docker ps</code> command</p>
<p><strong>Deploy to the private ECR repo</strong></p>
<p>Deploying your new image to ECR repo is easy and ECR works with EKS right out of the box. Log into AWS console and navigate to <a href="https://us-west-2.console.aws.amazon.com/ecr/get-started?region=us-west-2" rel="nofollow noreferrer">the ECR section</a>. I found it confusing that apparently you need to have one repository per image but when you click "Create repository" button put your image name (e.g. <code>myapp</code>) into the text field. Now you need to copy the ugly URL for your image and go back to the command prompt</p>
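<p>Before tagging and pushing you also need to authenticate your local Docker client against ECR. With AWS CLI v2 that looks like this (the account URL is the same fake example value used below; with the older CLI v1 you would use <code>aws ecr get-login</code> instead):</p>
<pre><code>$ aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 901237695701.dkr.ecr.us-west-2.amazonaws.com
</code></pre>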
<p>Tag and push your image. I'm using a fake URL as example: <code>901237695701.dkr.ecr.us-west-2.amazonaws.com</code> you need to copy your own from the previous step</p>
<pre><code>$ docker tag myapp:0.1.8 901237695701.dkr.ecr.us-west-2.amazonaws.com/myapp:latest
$ docker push 901237695701.dkr.ecr.us-west-2.amazonaws.com/myapp:latest
</code></pre>
<p>At this point the image should show up at ECR repository you created</p>
<p><strong>Deploy your app to EKS cluster</strong></p>
<p>Now you need to create a Kubernetes deployment for your app's Docker image. Create a <code>myapp-deployment.yaml</code> file with the following content</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: myapp-deployment
spec:
selector:
matchLabels:
app: myapp
replicas: 2
template:
metadata:
labels:
app: myapp
spec:
containers:
- image: 901237695701.dkr.ecr.us-west-2.amazonaws.com/myapp:latest
name: myapp
ports:
- containerPort: 8080
name: server
env:
# optional
- name: RDS_HOSTNAME
value: "10.100.98.196"
- name: RDS_PORT
value: "3306"
- name: RDS_DB_NAME
value: "mydb"
restartPolicy: Always
status: {}
</code></pre>
<p>Note how I'm using a full URL for the <code>image</code> parameter. I'm also using a private CLUSTER-IP of mysql cluster that you can get with <code>kubectl get svc my-cluster-mysql</code> command. This will differ for your app including any env names but you do have to provide this info to your app somehow. Then in your app you can set something like this in the <code>application.properties</code> file:</p>
<pre><code>spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.datasource.url=jdbc:mysql://${RDS_HOSTNAME}:${RDS_PORT}/${RDS_DB_NAME}?autoReconnect=true&amp;zeroDateTimeBehavior=convertToNull
spring.datasource.username=${RDS_USERNAME}
spring.datasource.password=${RDS_PASSWORD}
</code></pre>
<p>Once you save the <code>myapp-deployment.yaml</code> you need to run this command</p>
<p><code>kubectl apply -f myapp-deployment.yaml</code></p>
<p>Which will deploy your app into EKS cluster. This will create 2 pods in the cluster that you can see with <code>kubectl get pods</code> command</p>
<p>And rather than try to access one of the pods directly we can create a service to front the app pods. Create a <code>myapp-service.yaml</code> with this content:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: myapp-service
spec:
ports:
- port: 443
targetPort: 8080
protocol: TCP
name: http
selector:
app: myapp
type: LoadBalancer
</code></pre>
<p>That's where the magic happens! Just by setting the port to 443 and type to <code>LoadBalancer</code> the system will create a Classic Load Balancer to front your app.</p>
<p>BTW if you don't need to run your app over HTTPS you can set port to 80 and you will be pretty much done!</p>
<p>After you run <code>kubectl apply -f myapp-service.yaml</code> the service in the cluster will be created and if you go to to the Load Balancers section in the EC2 section of AWS console you will see that a new balancer is created for you. You can also run <code>kubectl get svc myapp-service</code> command which will give you EXTERNAL-IP value, something like <code>bl3a3e072346011e98cac0a1468f945b-8158249.us-west-2.elb.amazonaws.com</code>. Copy that because we need to use it next.</p>
<p>It is worth to mention that if you are using port 80 then simply pasting that URL into the browser should display your app</p>
<p><strong>Access your app over HTTPS</strong></p>
<p>The following section assumes that you have AWS-issued SSL certificate. If you don't then go to AWS console "Certificate Manager" and create a wildcard certificate for your domain</p>
<p>Before your load balancer can work you need to access <code>AWS console -> EC2 -> Load Balancers -> My new balancer -> Listeners</code> and click on "Change" link in <code>SSL Certificate</code> column. Then in the pop up select the AWS-issued SSL certificate and save.</p>
<p>Go to Route-53 section in AWS console and select a hosted zone for your domain, say <code>myapp.com.</code>. Then click "Create Record Set" and create a <code>CNAME - Canonical name</code> record with <code>Name</code> set to whatever alias you want, say <code>cluster.myapp.com</code> and <code>Value</code> set to the EXTERNAL-IP from above. After you "Save Record Set" go to your browser and type in <a href="https://cluster.myapp.com" rel="nofollow noreferrer">https://cluster.myapp.com</a>. You should see your app running</p>
| Bostone |
<p>When a pod is terminating, how do I get the correct <strong>Terminating</strong> status using the Kubernetes <strong>REST</strong> API? I am not able to figure it out.
But <strong>kubectl</strong> always reports the correct status, and it also uses the REST API to do that.</p>
<p>What magic am I missing in the REST API? Does it call two different APIs and accumulate the status?</p>
| SuVeRa | <p>You are not the <a href="https://github.com/kubernetes/kubernetes/issues/22839" rel="nofollow noreferrer">first person to ask this question</a>. The answer appears to be that <code>kubectl</code> inspects <code>metadata.deletionTimestamp</code>; if this exists (and, presumably, has a non-null value) then the pod is in <code>Terminating</code> state.</p>
<p>For example, for a running pod:</p>
<pre><code>$ curl -s localhost:8001/api/v1/namespaces/mynamespace/pods/example | jq .metadata.deletionTimestamp
<empty output>
</code></pre>
<p>And then immediately after I <code>kubectl delete</code> the pod:</p>
<pre><code>$ curl -s localhost:8001/api/v1/namespaces/mynamespace/pods/example | jq .metadata.deletionTimestamp
"2022-01-15T15:30:01Z"
</code></pre>
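<p>If you'd rather not go through the API proxy, the same check with <code>kubectl</code> directly (using the example pod from above) is:</p>
<pre><code>kubectl -n mynamespace get pod example -o jsonpath='{.metadata.deletionTimestamp}'
</code></pre>
<p>An empty result means the pod is not being deleted; a timestamp means it is terminating.</p>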
| larsks |