Question | QuestionAuthor | Answer | AnswerAuthor |
---|---|---|---|
<p>Small question regarding Kubernetes, and the kubectl command please.</p>
<p>Currently, I perform three separate actions:</p>
<p>action 1:
<code>kubectl -n=mynamespace apply -f /path/to/manifest.yml</code></p>
<p>Based on the successful creation, when I see the <code>created</code> I perform action 2, in order to retrieve the pod</p>
<p><code>kubectl -n=mynamespace get all</code></p>
<p>This step is purely manual, time-consuming and error-prone.
I have to scan the terminal output by eye, select the correct pod name, and manually copy-paste the latest pod created in step 1.</p>
<p>I will be looking with my eye for something like this, and manually copy paste:
<code>pod/my-pod-6bd84ccb9f-6kjwj</code></p>
<p>Once I get the pod, I go with action 3, which is to <strong>exec</strong> inside the pod, with the copy paste from step 2:</p>
<p><code>kubectl -n=mynamespace exec -it pod/my-pod-6bd84ccb9f-6kjwj -- bash</code></p>
<p>I feel like this is not the correct solution, and because of my ignorance, I am doing unnecessary steps.</p>
<p>May I ask if there is a smarter way to simply just create the pod, then to be able to directly exec inside this pod?</p>
<p>Some kind of command which will allow this.</p>
<p><code>kubectl -n=mynamespace exec -it {into the pod I just created from kubectl apply -f file.yml} -- bash</code></p>
<p>Thank you.</p>
| PatPanda | <p>Not really. There is <code>kubectl run -i</code> but I don't think that's what you're looking for? Really the answer is "don't". <code>kubectl exec</code> is intended only for very rare debugging use and if you're doing it enough to be annoyed, something is probably very very wrong with your workflow. Why do you think you need it?</p>
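<p>That said, if you do still want to script it, <code>kubectl exec</code> can resolve a workload name to one of its pods, so the manual copy/paste of the generated pod name isn't needed. A minimal sketch, assuming the manifest creates a Deployment named <code>my-pod</code>:</p>
<pre><code>kubectl -n mynamespace apply -f /path/to/manifest.yml
kubectl -n mynamespace rollout status deployment/my-pod
kubectl -n mynamespace exec -it deployment/my-pod -- bash
</code></pre>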
| coderanger |
<p>I am trying to understand the retry behavior for the liveness probe; it's not clear from the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes" rel="nofollow noreferrer">documentation</a>.</p>
<p>Let me illustrate with an example. Consider the following spec for a liveness probe:</p>
<pre><code>periodSeconds: 60
timeoutSeconds: 15
failureThreshold: 3
</code></pre>
<p>Let's assume the service is down.</p>
<p>Which behavior is expected?</p>
<pre><code>the probe kicks off at 0s
sees a failure at 15s (due to timeoutSeconds 15)
retry1 at ~15s, fail at ~30s, and retry2 at ~30s, fail at ~45s (retry immediately after failure)
ultimately restart the pod at ~45s (due to failureThreshold 3)
</code></pre>
<p>or</p>
<pre><code>the probe kicks off at 0s
sees a failure at 15s (due to timeoutSeconds 15)
retry1 at ~60s, fail at ~75s, and retry2 at ~120s, fail at ~135s (due to periodSeconds 60; doesn't retry immediately after a failure)
ultimately restart the pod at ~180s (due to failureThreshold 3)
</code></pre>
| Aditya Dara | <p><code>periodSeconds</code> is how often it checks. If you mean retrying after crossing the failure threshold, it never will, because the container is fully restarted from scratch.</p>
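<p>To make the timing concrete, here is the second interpretation written out against the probe settings (a sketch; exact timestamps depend on when the kubelet runs the first check, and the endpoint/port are assumptions):</p>
<pre><code>livenessProbe:
  httpGet:
    path: /healthz       # hypothetical endpoint
    port: 8080
  periodSeconds: 60      # one check every 60s
  timeoutSeconds: 15     # a check counts as failed if no answer within 15s
  failureThreshold: 3    # 3 consecutive failures -> container restart
# check 1 at ~0s   -> fails at ~15s
# check 2 at ~60s  -> fails at ~75s
# check 3 at ~120s -> fails at ~135s -> the container is restarted
</code></pre>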
| coderanger |
<p>I have a deployment with two containers, let's say A and B.</p>
<p><code>Container A</code> serves HTTP requests; it is the important container in the deployment and has liveness and readiness probes properly set.</p>
<p><code>Container B</code> serves as a proxy to a certain third-party service through an SSH tunnel. It has a very specific use case, but for whatever reason the third-party service can be unreachable, which puts this container in a crash loop.</p>
<p>The question is: how can I make this pod serve requests even if <code>Container B</code> is in a crash loop?</p>
<p>TL;DR: how do I make a deployment serve requests if a specific container is in a crash loop?</p>
| Amine Hakkou | <p>You kind of can't. You could remove the readiness probe from B but any kind of crash is assumed to be bad. Probably your best bet is to change B so that it doesn't actually crash out to where the kubelet can see it. Like put a little <code>while true; do theoriginalcommand; sleep 1; done</code> bash loop on it or something so the kubelet isn't aware of it crashing. Or just make it not crash ...</p>
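<p>A minimal sketch of that wrapper approach, assuming container B's original entrypoint is a hypothetical <code>/ssh-tunnel.sh</code>:</p>
<pre><code>- name: container-b
  image: my-proxy-image            # assumption: B's existing image
  command: ["/bin/sh", "-c"]
  args:
    - |
      # keep retrying forever so the kubelet never sees the process exit
      while true; do
        /ssh-tunnel.sh || echo "tunnel exited, retrying in 1s"
        sleep 1
      done
</code></pre>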
| coderanger |
<p>Quick question regarding Kubernetes job status.</p>
<p>Let's assume I submit my resource to 10 pods and want to check whether my Job completed successfully.</p>
<p>What are the best options available through kubectl commands?</p>
<p>I think of <code>kubectl get jobs</code>, but the problem there is you only have two codes, 0 and 1: 1 for completion, 0 for failed or running, so we cannot really depend on this.</p>
<p>The other option is <code>kubectl describe</code> to check the pod status, i.e. out of 10 pods how many completed/failed.</p>
<p>Any other effective way of monitoring the pods? Please let me know.</p>
| Ram | <p>Anything that can talk to the Kubernetes API can query for the Job object and look at the <code>JobStatus</code> field, which has info on which pods are running, completed, failed, or unavailable. <code>kubectl</code> is probably the easiest, as you mentioned, but you could write something more specialized using any client library if you wanted/needed to.</p>
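<p>With <code>kubectl</code> specifically, a couple of hedged examples for reading that status (assuming the Job is named <code>my-job</code>):</p>
<pre><code># block until the Job reports the Complete condition (or time out)
kubectl wait --for=condition=complete job/my-job --timeout=600s

# or read the JobStatus counters directly
kubectl get job my-job -o jsonpath='{.status.succeeded} succeeded, {.status.failed} failed'
</code></pre>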
| coderanger |
<p>I'm trying to support two systems both of which pull images from private repos. One is from Kubernetes - which needs imagePullSecrets via a mounted secret and the other just needs standard docker login.</p>
<p>Based on this - <a href="https://stackoverflow.com/questions/65356293/pulling-images-from-private-repository-in-kubernetes-without-using-imagepullsecr">Pulling images from private repository in kubernetes without using imagePullSecrets</a> - it does not appear there's a way to use injected values to pull an image inside of Kubernetes - am I wrong?</p>
<p>What I'd really like is for both Kubernetes/Kubeflow and the other system to just get an array of values (server, login, password, email) and be able to pull a private image.</p>
| aronchick | <p>You can handle both by doing the login at the lower level of dockerd or containerd on the host itself. Otherwise not really, other than mounting the image pull secret into the container if it will respect a dockerconfig.</p>
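<p>For the Kubernetes side, the usual way to consume that same (server, login, password, email) tuple is a <code>docker-registry</code> Secret referenced from the pod spec (values below are placeholders):</p>
<pre><code>kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=myuser \
  --docker-password=mypassword \
  --docker-email=me@example.com
</code></pre>
<p>and then in the pod spec:</p>
<pre><code>spec:
  imagePullSecrets:
    - name: regcred
</code></pre>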
| coderanger |
<p>I have an application in Node.js that exposes 2 ports: 80 for the web and 5000 for a notification service with websockets.
I want to deploy it to Azure Kubernetes Service and I followed the tutorial <a href="https://learn.microsoft.com/en-us/azure/aks/ingress-tls" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/ingress-tls</a>.
Everything works fine but the websockets don't.</p>
<p>This is the yaml of the ingress controller:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dih-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/use-regex: "true"
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
    - hosts:
        - www.mydomain.com
      secretName: tls-secret
  rules:
    - host: www.mydomain.com
      http:
        paths:
          - path: /(.*)
            pathType: Prefix
            backend:
              service:
                name: dihkub-9865
                port:
                  number: 80
</code></pre>
<p>And this is the port configuration in the service yaml:</p>
<pre><code>spec:
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: websocket
      protocol: TCP
      port: 5000
      targetPort: 5000
  selector:
    app: dihkub-9865
  clusterIP: 10.0.147.128
  type: ClusterIP
</code></pre>
<p>I am new to this and sorry for my bad English, thanks.</p>
<p>Edit:
This is the new yaml file of the ingress controller</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dih-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/use-regex: "true"
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
    - hosts:
        - www.mydomain.com
      secretName: tls-secret
  rules:
    - host: www.mydomain.com
      http:
        paths:
          - path: /socket.io/(.*)
            backend:
              service:
                name: dihkub-9865
                port:
                  number: 5000
          - path: /(.*)
            pathType: Prefix
            backend:
              service:
                name: dihkub-9865
                port:
                  number: 80
</code></pre>
<p>Requests to the /socket.io/ URL return a 502 error. For now I also exposed the service as a LoadBalancer, so I have 2 public IPs: the ingress controller handles requests to port 80 and the service handles the websocket requests directly. This is not right, but for now it works :(</p>
<p>Also, the certificate the websockets use is not valid, since they don't go through the domain configured in the ingress controller, and having 2 public IPs is a bit more expensive.</p>
| cmb1197 | <p>You probably need something like:</p>
<pre><code> - path: /websockets
pathType: Prefix
backend:
service:
name: dihkub-9865
port:
number: 5000
</code></pre>
<p>Or whatever path you want to use for the websockets server.</p>
| coderanger |
<p>I have a Kubernetes HA environment with three masters. As a test, I shut down two masters (killed the apiserver/kcm/scheduler processes), and the single remaining master still worked fine: I could use kubectl to create a deployment successfully, and some pods were scheduled to different nodes and started. So can anyone explain why an odd number of masters is advised? Thanks.</p>
| yangyang | <p>Because if you have an even number of servers, it's a lot easier to end up in a situation where the network breaks and you have exactly 50% on each side. With an odd number, you can't (easily) have a situation where more than one partition in the network thinks it has majority control.</p>
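<p>The underlying arithmetic (for etcd, quorum = floor(n/2) + 1):</p>
<pre><code>members  quorum  failures tolerated
   3        2            1
   4        3            1   # extra member, no extra fault tolerance
   5        3            2
</code></pre>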
| coderanger |
<p>I have a deployment of a simple Django app in minikube. It has two containers, one for the Django app and one for a Postgres DB. It works with docker-compose, but I couldn't make it work in the minikube k8s cluster. When I opened a terminal in the container and pinged the service, it wasn't successful. I couldn't find what causes this communication error.</p>
<p>It's a basic app that just has a login page for signing in or signing up. After signing in you can create some notes. After I deployed to minikube I was able to access 127.0.0.1:8000, but when I entered the information to sign up it gave the error below. Apparently, it couldn't store data in the DB.</p>
<p>DATABASES part in settings.py in django project:</p>
<pre><code>DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'postgres',
        'USER': 'postgres',
        'PASSWORD': 'postgres',
        'HOST': 'db',
        'PORT': '5432',
    }
}
</code></pre>
<p>Dockerfile for building the image for the app:</p>
<pre><code>FROM python:2.7
WORKDIR /notejam
COPY ./ ./
RUN pip install -r requirements.txt
EXPOSE 8000
CMD python manage.py runserver 0.0.0.0:8000
</code></pre>
<p>Deployment file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: notejam-deployment
  labels:
    app: notejam-app
spec:
  selector:
    matchLabels:
      app: notejam-app
  template:
    metadata:
      labels:
        app: notejam-app
    spec:
      volumes:
        - name: postgres-pvc
          persistentVolumeClaim:
            claimName: postgres-pvc
      containers:
        - name: notejam
          image: notejam_k8s
          imagePullPolicy: Never
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 8000
        - name: postgres-db
          image: postgres
          imagePullPolicy: Never
          ports:
            - containerPort: 5432
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          env:
            - name: POSTGRES_USERNAME
              valueFrom:
                configMapKeyRef:
                  name: postgres-configmap
                  key: postgres-username
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: postgres-password
            - name: POSTGRES_DB
              valueFrom:
                configMapKeyRef:
                  name: postgres-configmap
                  key: postgres-db
          volumeMounts:
            - mountPath: /notejam-db
              name: postgres-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  selector:
    app: notejam-app
  ports:
    - port: 5432
      targetPort: 5432
---
apiVersion: v1
kind: Service
metadata:
  name: notejam-external-service
spec:
  selector:
    app: notejam-app
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8000
      targetPort: 8000
</code></pre>
<p>Error:</p>
<pre><code>OperationalError at /signup/
could not connect to server: Connection timed out
Is the server running on host "db" (10.96.150.207) and accepting
TCP/IP connections on port 5432?
Request Method: POST
Request URL: http://127.0.0.1:8000/signup/
Django Version: 1.6.5
Exception Type: OperationalError
Exception Value:
could not connect to server: Connection timed out
Is the server running on host "db" (10.96.150.207) and accepting
TCP/IP connections on port 5432?
Exception Location: /usr/local/lib/python2.7/site-packages/psycopg2/__init__.py in connect, line 127
Python Executable: /usr/local/bin/python
Python Version: 2.7.18
Python Path:
['/notejam',
'/usr/local/lib/python27.zip',
'/usr/local/lib/python2.7',
'/usr/local/lib/python2.7/plat-linux2',
'/usr/local/lib/python2.7/lib-tk',
'/usr/local/lib/python2.7/lib-old',
'/usr/local/lib/python2.7/lib-dynload',
'/usr/local/lib/python2.7/site-packages']
Server time: Thu, 26 Aug 2021 00:40:58 +0300
</code></pre>
| tkarahan | <p>You're trying to run postgres as a secondary container in the same pod. It should be its own deployment and service. Multi-container pods are not like docker compose :)</p>
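<p>A minimal sketch of that split, reusing the names from the question (env vars and the PVC mount are omitted for brevity; the separate labels make the existing <code>db</code> Service select only the database pods):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  selector:
    matchLabels:
      app: notejam-db
  template:
    metadata:
      labels:
        app: notejam-db
    spec:
      containers:
        - name: postgres-db
          image: postgres
          ports:
            - containerPort: 5432
---
apiVersion: v1
kind: Service
metadata:
  name: db               # matches HOST: 'db' in settings.py
spec:
  selector:
    app: notejam-db      # now selects only the postgres pods
  ports:
    - port: 5432
      targetPort: 5432
</code></pre>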
| coderanger |
<p>Is it possible to have docker desktop issue a service token for the host machine so that one can use kubernetes auth method for code running inside an IDE to authenticate with apps running inside a local kubernetes? The instance of kubernetes I am using is by docker desktop for Mac.</p>
<p>Use case: I have deployed Vault locally to my Docker Desktop Kubernetes and have configured it to authenticate via Kubernetes service account tokens. This works fine for apps I deploy to the same Kubernetes cluster because they automatically get a service account token in their container, which they send to Vault for authentication. However, this becomes challenging while I am developing code in my IDE: I am running my app from inside my IDE, and as far as the Kubernetes cluster is concerned, it doesn't exist. As a result I can't authenticate to Vault since I don't have a service account token.</p>
<p>I understand this might not be a usual use case and may ultimately not be supported. I have tried to spin up a Linux container for the sole purpose of sharing its service account with my local machine, to no avail.</p>
| Farzad | <p>Sure, you can make a ServiceAccount and just manually copy the JWT out of the Secret it creates for you. That JWT can be used to access the API from anywhere, no special magic.</p>
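<p>A hedged sketch of that, for clusters where ServiceAccount token Secrets are still auto-created (the default before Kubernetes 1.24; names are placeholders):</p>
<pre><code>kubectl create serviceaccount local-dev

# find the Secret holding the token and decode it
SECRET=$(kubectl get serviceaccount local-dev -o jsonpath='{.secrets[0].name}')
kubectl get secret "$SECRET" -o jsonpath='{.data.token}' | base64 --decode
</code></pre>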
| coderanger |
<p>I have a ConfigMap like the one below.</p>
<pre><code>apiVersion: v1
data:
  server.properties: |+
    server.hostname=test.com
kind: ConfigMap
metadata:
  name: my-config
</code></pre>
<p>And I tried to read this config inside a container.</p>
<pre><code>containers:
  - name: testserver
    env:
      - name: server.hostname
        valueFrom:
          configMapKeyRef:
            name: my-config
            key: server.properties.server.hostname
</code></pre>
<p>However, these configs are not being passed to the container properly. Do I need to make any changes to my configs?</p>
| prime | <p>What you have in there isn't the right key. ConfigMaps are strictly 1 level of k/v pairs. The <code>|+</code> syntax is YAML for a multiline string but the fact the data inside that is also YAML is not something the system knows. As far as Kubernetes is concerned you have one key there, <code>server.properties</code>, with a string value that is opaque.</p>
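<p>If you only need that one value, a sketch of a flat ConfigMap whose key can be referenced directly (keeping the names from the question, but using an underscore-style env var name):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  server.hostname: test.com
</code></pre>
<p>and in the container:</p>
<pre><code>env:
  - name: SERVER_HOSTNAME
    valueFrom:
      configMapKeyRef:
        name: my-config
        key: server.hostname
</code></pre>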
| coderanger |
<p>I am debugging certain behavior from my application pods that I am launching on a K8s cluster. In order to do that I am increasing logging by increasing the verbosity of the deployment, adding the <code>--v=N</code> flag to the <code>kubectl create deployment</code> command.</p>
<p>My question is: how can I configure increased verbosity globally so all pods start reporting increased verbosity, including pods in the kube-system namespace?</p>
<p>I would prefer if it can be done without restarting the k8s cluster, but if there is no other way I can restart.</p>
<p>thanks
Ankit </p>
| ankit patel | <p>For your applications, there is nothing global as that is not something that has global meaning. You would have to add the appropriate config file settings, env vars, or cli options for whatever you are using.</p>
<p>For kubernetes itself, you can turn up the logging on the kubelet command line, but the defaults are already pretty verbose so I’m not sure you really want to do that unless you’re developing changes for kubernetes.</p>
| coderanger |
<p>For Traefik 1.7 as an ingress controller, how to match HTTP Method to route to a specific service?</p>
<p>For example, I want to pass a request to the service, only if HTTP method is GET and it matches with provided path.</p>
<p>I am looking documentation at: <a href="https://doc.traefik.io/traefik/v1.7/configuration/backends/kubernetes/" rel="nofollow noreferrer">https://doc.traefik.io/traefik/v1.7/configuration/backends/kubernetes/</a></p>
<p>But cannot find any relevant annotation. Is there any possible workaround?</p>
<p>[I found a similar question: <a href="https://stackoverflow.com/questions/58012063/restrict-allowed-methods-on-traefik-routes">Restrict allowed methods on Traefik routes</a> but answered to handle CORS]</p>
| Vishal | <p>Nope, that was not a feature of 1.7.</p>
| coderanger |
<p>I'm experimenting with SMTP (mailoney) and SSH honeypots in a Kubernetes cluster to be exposed to the big bad WWW. I can't seem to figure out how to get it working, since I've only recently started to understand Kubernetes.</p>
<p>I've got some config now, for example my mailoney.yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: mailoney
spec:
  selector:
    matchLabels:
      app: mailoney
  template:
    metadata:
      labels:
        app: mailoney
    spec:
      containers:
        - name: mailoney
          image: dtagdevsec/mailoney:2006
          ports:
            - containerPort: 25
</code></pre>
<p>and the service config:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: ingress-mailoney
  labels:
    name: mailoney
spec:
  type: LoadBalancer
  ports:
    - name: smtp
      port: 25
      targetPort: 25
      protocol: TCP
  selector:
    name: mailoney
</code></pre>
<p>But when the load balancer is configured, it exposes the services on ports >30000, which I know is default behaviour for Kubernetes. How exactly do I configure the load balancer to allow connections on ports 25 and 22 respectively and actually let connections through to the honeypots?</p>
<p>Am I overlooking something really obvious?</p>
<p>Any help is appreciated. </p>
| chr0nk | <p>You are probably seeing the node port in the <code>kubectl get service</code> output? That's a red herring, the final LB port will still be 25 as requested. You can confirm this in your cloud provider's systems to be sure. The node port is an intermediary relay between the cloud LB and the internal network.</p>
| coderanger |
<p>I have a mongodb service up and running. I port-forward to access it locally and, in the meantime, I try to check the connection with a Go app. But I get the error below.</p>
<pre><code>panic: error parsing uri: lookup _mongodb._tcp.localhost on 8.8.8.8:53: no such host
</code></pre>
<p>Port-forward:</p>
<pre><code>kubectl port-forward service/mongodb-svc 27017:27017
</code></pre>
<p>Go app:</p>
<pre><code>package main

import (
	"context"
	"fmt"
	//"log"
	"time"

	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
	"go.mongodb.org/mongo-driver/mongo/readpref"
)

func main() {
	username := "username"
	address := "localhost"
	password := "password"

	// Replace the uri string with your MongoDB deployment's connection string.
	uri := "mongodb+srv://" + username + ":" + password + "@" + address + "/admin?w=majority"

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	client, err := mongo.Connect(ctx, options.Client().ApplyURI(uri))
	if err != nil {
		panic(err)
	}
	defer func() {
		if err = client.Disconnect(ctx); err != nil {
			panic(err)
		}
	}()

	// Ping the primary
	if err := client.Ping(ctx, readpref.Primary()); err != nil {
		panic(err)
	}
	fmt.Println("Successfully connected and pinged.")
}
</code></pre>
| cosmos-1905-14 | <p>Your client is trying to do a DNS service lookup because you specified the <code>+srv</code> connection type in your URI. Stop doing that and use the correct connection string instead. We do support that in-cluster but not via port forward. I suspect you're trying to mix and match tutorials for both in-cluster and out of cluster. You can't do that.</p>
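<p>Concretely, for the port-forward case that means dropping <code>+srv</code> and giving the port explicitly, e.g. (a sketch against the code above):</p>
<pre><code>uri := "mongodb://" + username + ":" + password + "@" + address + ":27017/admin?w=majority"
</code></pre>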
| coderanger |
<p>Recently I've been trying to create an ingress gateway per API I create. I need to know whether there is any limit on that, or can we create as many as we want?</p>
| uvindu sri | <p>The exact answer depends on which Ingress Controller you are using, but if there are limits for any of them, expect them to be in the billions (like the kind of limit where 2^32 or 2^64 are involved because there's an integer index on something somewhere).</p>
| coderanger |
<p>I've got a question regarding namespaces and am seeking your expertise to clear up my doubts.
What I understood about namespaces is that they are there to introduce logical boundaries among teams and projects.
I also read somewhere that namespaces can be used to introduce/define different environments within the same cluster,
e.g. Test, UAT and PRODUCTION.</p>
<p>However, if an organization is developing a solution that consists of X number of microservices and has dedicated teams to look after those services,
should we still use namespaces to separate them, or would they all be deployed in one single namespace reflecting the solution?</p>
<p>E.g. if we are developing an e-commerce application:
Inventory, ShoppingCart, Payment, Orders etc. would be the microservices that I can think of. Should we deploy them under a namespace of sky-commerce, for instance, or do they need dedicated namespaces?</p>
<p>My other question is: if we deploy services in different namespaces, is it possible for us to access them through an API gateway/Ingress controller?</p>
<p>For instance, I have a front-end SPA application and it has its BFF (Backend For Frontend). Can the BFF access the other services through the API gateway/Ingress controller?</p>
<p>Please help me to clear these doubts.</p>
<p>Thanks in advance for your prompt reply in this regard.</p>
<p>RSF</p>
| RSF | <p>Namespaces are cheap, use lots of them. Only ever put two things in the same namespace if they are 100% a single unit (two daemons that are always updated at the same time and are functionally a single deployment) or if you must because a related object is used (such as a Service being in the same ns as Pods it references).</p>
| coderanger |
<p>As per: <a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-helm/" rel="nofollow noreferrer">https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-helm/</a></p>
<p>I'm trying to install ingress-nginx with custom ports, but it does not expose those ports when I pass in the <code>controller.customPorts</code> parameter. I think I'm not passing it in the right format. The documentation says</p>
<pre><code>A list of custom ports to expose on the NGINX ingress controller pod. Follows the conventional Kubernetes yaml syntax for container ports.
</code></pre>
<p>Can anyone explain to me what that format should be?</p>
| NimaKapoor | <p>Assuming they mean what shows up in Pod definitions:</p>
<pre class="lang-yaml prettyprint-override"><code>- port: 1234
name: alan
</code></pre>
| coderanger |
<p>We are moving towards microservices and using K8s for cluster orchestration. We are building infra using Dynatrace and a Prometheus server for metrics collection, but they are not yet in good shape.
Our Java application on one of the pods is not working. I want to see the application logs.</p>
<p>How do I access these logs?</p>
| dotnetavalanche | <p>Assuming the application logs to stdout/err, <code>kubectl logs -n namespacename podname</code>.</p>
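<p>A few common variations (namespace, pod, and container names are placeholders):</p>
<pre><code># follow the log stream
kubectl logs -n namespacename podname -f

# pick a specific container in a multi-container pod
kubectl logs -n namespacename podname -c containername

# logs from the previous (crashed) instance of the container
kubectl logs -n namespacename podname --previous
</code></pre>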
| coderanger |
<p>Why does kubernetes (using minikube) use dns for nginx's <code>proxy_pass</code> but not rewrite?</p>
<p>Kubernetes replaces <code>auth-proxy-service.default</code> with an IP address</p>
<pre><code>location /widget-server/ {
    proxy_pass http://auth-proxy-service.default:5902/;
}
</code></pre>
<p>Kubernetes does not replace <code>auth-proxy-service.default</code> with an IP address, and the URL in the browser actually shows <code>http://auth-proxy-service.default:5902/foo</code></p>
<pre><code>location /form-builder-auth {
    rewrite ^(.*)$ http://auth-proxy-service.default:5902/foo redirect;
}
</code></pre>
| dman | <p>Because that's what a redirect means? Reverse proxies are transparent, redirects are user-facing.</p>
| coderanger |
<p>I want to use the Kubernetes pod name to be used as an identifier in my container as an argument. </p>
<p>I have deployed my echo containers on my Kubernetes cluster using the following config:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo1
spec:
  selector:
    matchLabels:
      app: echo1
  replicas: 2
  template:
    metadata:
      labels:
        app: echo1
    spec:
      containers:
        - name: echo1
          image: hashicorp/http-echo
          args:
            - "-text=echo1"
          ports:
            - containerPort: 5678
</code></pre>
<p>When I do "kubectl get pods":</p>
<pre><code>NAME                    READY   STATUS    RESTARTS   AGE
echo1-76c689b7d-48h4v   1/1     Running   0          19h
echo1-76c689b7d-4gq2v   1/1     Running   0          19h
</code></pre>
<p>I want to echo the pod name by passing the pod name in my config above:</p>
<pre><code>args:
  - "-text=echo1"
</code></pre>
<p>How do I access my pod name to be used in my args?</p>
| Katlock | <p>So a few things. First you would use the fieldRef syntax for an environment variable as shown in <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/</a>. Then you would use the env var in your argument (<code>"-text=$(PODNAME)"</code>). However this will give you the actual pod name, like <code>echo1-76c689b7d-48h4v</code>. If what you want is the deployment name or the value of the <code>app</code> label, the latter is easier: instead of <code>metadata.name</code> as the field path, use something like <code>metadata.labels['app']</code> (requires Kubernetes 1.9+).</p>
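<p>A sketch of that container spec with the fieldRef env var wired into the argument (the env var name <code>PODNAME</code> is arbitrary):</p>
<pre><code>containers:
  - name: echo1
    image: hashicorp/http-echo
    env:
      - name: PODNAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name   # or metadata.labels['app']
    args:
      - "-text=$(PODNAME)"
    ports:
      - containerPort: 5678
</code></pre>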
| coderanger |
<p>How to list the current deployments running in Kubernetes with custom columns displayed as mentioned below:</p>
<p><strong>DEPLOYMENT CONTAINER_IMAGE READY_REPLICAS NAMESPACE</strong></p>
<p>The data should be sorted by the increasing order of the deployment name.</p>
| Yash Bindlish | <p>Look at the <code>-o custom-columns</code> feature. <a href="https://kubernetes.io/docs/reference/kubectl/overview/#custom-columns" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/kubectl/overview/#custom-columns</a> shows the basics. The hard one would be container_image, since a pod can contain more than one, but assuming you just want the first, something like <code>.spec.template.spec.containers[0].image</code>? Give it a shot and see how it goes.</p>
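<p>Putting that together, a hedged one-liner (field paths as assumed above; <code>[*]</code> lists all container images if there is more than one):</p>
<pre><code>kubectl get deployments --all-namespaces --sort-by=.metadata.name \
  -o custom-columns=DEPLOYMENT:.metadata.name,CONTAINER_IMAGE:.spec.template.spec.containers[*].image,READY_REPLICAS:.status.readyReplicas,NAMESPACE:.metadata.namespace
</code></pre>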
| coderanger |
<p>I'm attempting to add some <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/" rel="nofollow noreferrer">recommended labels</a> to several k8s resources, and I can't see a good way to add labels for things that would change frequently, in this case "app.kubernetes.io/instance" and "app.kubernetes.io/version". Instance seems like a label that should change every time a resource is deployed, and version seems like it should change when a new version is released, by git release or similar. I know that I could write a script to generate these values and interpolate them, but that's a lot of overhead for what seems like a common task. I'm stuck using Kustomize, so I can't just use Helm and have whatever variables I want. Is there a more straightforward way to apply labels like these?</p>
| EMC | <p>Kustomize's <code>commonLabels</code> transformer is a common way to handle this, sometimes via a component. It really depends on your overall layout.</p>
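<p>A minimal sketch of a <code>kustomization.yaml</code> using that transformer (the version value is whatever your release process injects, e.g. via <code>kustomize edit set label</code> in CI):</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
commonLabels:
  app.kubernetes.io/name: my-app
  app.kubernetes.io/version: "1.2.3"   # placeholder, set per release
</code></pre>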
| coderanger |
<p>I am new to Kubernetes. Setting up nginx-ingress in a test cluster. One of our senior people rolled by and noticed the following.</p>
<pre><code># kubectl get services
...
ingress-ingress-nginx-controller-admission ClusterIP xx.xxx.xxx.xxx <none> 443/TCP
...
</code></pre>
<p>What's that, he asked. Get rid of it if you don't need it.</p>
<p>Before I rip it out and maybe cripple my test cluster .. what <em>is</em> ingress-nginx-controller-admission and why do I need it?</p>
| Brian Dunbar | <p>It's the service for the validating webhook that ingress-nginx includes. If you remove it, you'll be unable to create or update Ingress objects unless you also remove the webhook configuration.</p>
<p>tl;dr it's important, no touchy</p>
| coderanger |
<p>I am doing my end-of-course work and I would like to know if I can make a regular volume a shared directory over FTP, that is to say, when the disk is mounted Kubernetes takes the directory from the external FTP server.</p>
<p>I know it can be done with NFS but I would like to do it with SFTP.</p>
<p>Thanks in advance.</p>
| Alejandro López | <p>There is code floating around for a FlexVolume plugin which delegates the actual mount to FUSE: <a href="https://github.com/adelton/kubernetes-flexvolume-fuse" rel="nofollow noreferrer">https://github.com/adelton/kubernetes-flexvolume-fuse</a></p>
<p>But I have no idea if that will even compile anymore, and FlexVolume is on its way out in favor of CSI. You could write a CSI plugin on top of the FUSE FS but I don't know of any such thing already existing.</p>
<p>More commonly what you would do is use a RWX shared volume (such as NFS) and mount it to both the SFTP server and whatever your hosting app pod is.</p>
| coderanger |
<p>We are storing secrets in GCP Secret Manager. During an app deployment we use an init container which fetches the secrets and places them in a volume (path). Going forward, the requirement is to load the secrets as env variables in the main container that needs them, instead of reading them from paths. How can this be achieved? Any workaround?</p>
<p>Thank you !</p>
| Sanjay M. P. | <p>You can copy from GSM into a Kubernetes Secret and then use that in a normal <code>envFrom</code> or you can have the init container write a file into a shared emptyDir volume and then change the command on the main container to be something like <code>command: [bash, -c, "source /shared/env && exec original command"]</code>. The latter requires you rewrite the command fully though which is annoying.</p>
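<p>A sketch of the second option, assuming the init container writes <code>KEY=value</code> lines to a hypothetical <code>/shared/env</code> file:</p>
<pre><code>spec:
  volumes:
    - name: shared
      emptyDir: {}
  initContainers:
    - name: fetch-secrets
      image: my-secret-fetcher       # assumption: your existing init image
      volumeMounts:
        - name: shared
          mountPath: /shared
  containers:
    - name: app
      image: my-app                  # assumption
      command: ["bash", "-c", "source /shared/env && exec /original/entrypoint"]
      volumeMounts:
        - name: shared
          mountPath: /shared
</code></pre>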
| coderanger |
<p>How do I pass a script.sh file to a container while creating it with Kubernetes from the following ds-scheduler.yaml file, without changing the image?</p>
<p>When Kubernetes creates the container from the YAML file, it uses a start.sh script residing inside the image. I would like to push my my-start.sh file into the image before the kubelet creates the container from it, so that the kubelet uses my script instead of the one residing inside the image.</p>
<p>Thanks in advance.</p>
<h2 id="ds-scheduler.yaml">ds-scheduler.YAML</h2>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: scheduler-nodes
  labels:
    role: scheduler
spec:
  selector:
    matchLabels:
      role: scheduler
  template:
    metadata:
      labels:
        role: scheduler
    spec:
      #nodeSelector:
      #  role: scheduler
      hostNetwork: true
      hostIPC: true
      containers:
        - name: scheduler-container
          image: hydroproject/cloudburst
          imagePullPolicy: Always
          env:
            - name: ROUTE_ADDR
              value: ROUTE_ADDR_DUMMY
            - name: MGMT_IP
              value: MGMT_IP_DUMMY
            - name: ROLE
              value: scheduler
            - name: REPO_ORG
              value: hydro-project
            - name: REPO_BRANCH
              value: master
            - name: ANNA_REPO_ORG
              value: hydro-project
            - name: ANNA_REPO_BRANCH
              value: master
            - name: POLICY
              value: locality
          # resources:
          #   limits:
          #     ephemeral-storage: "64Mi"
</code></pre>
| Azad Md Abul Kalam | <p>You would generally build a new image with <code>FROM hydroproject/cloudburst</code> and then layer in your customizations.</p>
<p>Another option is to use ConfigMap volume mounts over key files, however this can get tedious and error-prone.</p>
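<p>A sketch of that ConfigMap option, mounting your script over the path the image expects (the path <code>/app/start.sh</code> is an assumption; use whatever path the image's start.sh actually lives at):</p>
<pre><code>kubectl create configmap my-start-script --from-file=start.sh=my-start.sh
</code></pre>
<pre><code># added to the container in the DaemonSet spec
volumeMounts:
  - name: my-start-script
    mountPath: /app/start.sh   # assumed location of the original start.sh
    subPath: start.sh
# added to the pod spec
volumes:
  - name: my-start-script
    configMap:
      name: my-start-script
      defaultMode: 0755
</code></pre>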
| coderanger |
<p>I would like to redirect incoming traffic to myserver.mydomain.com/prometheus to my prometheus pod. Here are my YAML files where I try to achieve this:</p>
<p>Here's the deployment manifest:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-server
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus:latest
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
            - "--web.external-url=http://myserver.mydomain.com/prometheus"
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
      volumes:
        - name: prometheus-config-volume
          configMap:
            defaultMode: 420
            name: prometheus-server-conf
        - name: prometheus-storage-volume
          persistentVolumeClaim:
            claimName: prometheus-local-zfs-pvc
</code></pre>
<p>the service manifest...</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
  namespace: monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '9090'
spec:
  selector:
    app: prometheus-server
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 9090
</code></pre>
<p>...and the ingress manifest...</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prometheus-ingress
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: myserver.mydomain.com
      http:
        paths:
          - path: /prometheus
            backend:
              serviceName: prometheus-service
              servicePort: 80
</code></pre>
<p>However, this returns a 404. What am I doing wrong?</p>
<p>EDIT: I have added the option as suggested by @coderanger. One thing I noticed when applying the new deployment was that there seemed to be a locking issue when the new pod is brought up before the old one is deleted, the error in the logs was <code>err="opening storage failed: lock DB directory: resource temporarily unavailable....</code></p>
| ticktockhouse | <p>You need to pass <code>--web.external-url</code> on the Prometheus command line options since you are moving it to a sub-path.</p>
| coderanger |
<p>I'd like to understand when it's better to favor a Custom Initializer Controller vs a Mutating Webhook.</p>
<p>From what I can gather, webhooks are:</p>
<ol>
<li>More powerful (can trigger on any action).</li>
<li>More performant (only persist to etcd once).</li>
<li>Easier to write (subjective, but production grade controllers aren’t trivial).</li>
<li>Less likely to break during complete redeploy (there seems to be a chicken-and-egg problem requiring the deployment to exist <em>before</em> the initializer is in place, or the initializer will block the deployment).</li>
</ol>
<p>When would I want an initializer instead? …or are they being abandoned in favor of webhooks?</p>
| Tammer Saleh | <p>Always favor webhooks. Initializers are unlikely to ever graduate from alpha, and will probably be removed as the apimachinery team dislikes the approach. They might remain in a few specialized cases like Namespaces, but not in general.</p>
| coderanger |
<p>I want Traefik to auto-discover my Kubernetes services.
The docker-compose.yaml is like this:</p>
<pre><code>version: "3.3"
services:
traefik:
image: "traefik:v2.5"
version: "3.3"
services:
whoami:
image: "traefik/whoami"
container_name: "simple-service"
labels:
- "traefik.enable=true"
- "traefik.http.routers.whoami.rule=Host(`whoami.com`)"
- "traefik.http.routers.whoami.entrypoints=web"
</code></pre>
<p>Running docker-compose up -d works, and whoami.com is discovered.</p>
<p>But when I convert this to Kubernetes YAML, I get an error with the labels.
How can I write the labels in Kubernetes YAML?</p>
<pre><code>labels:
  - "traefik.enable=true"
  - "traefik.http.routers.whoami.rule=Host(`whoami.com`)"
  - "traefik.http.routers.whoami.entrypoints=web"
</code></pre>
| Michael | <p>Check out <a href="https://kompose.io/" rel="nofollow noreferrer">Kompose</a>.</p>
<p>It will help generate some stuff for you. But specifically with Traefik you'll have to move to either the Ingress system or Traefik's custom IngressRoute system. See <a href="https://doc.traefik.io/traefik/routing/providers/kubernetes-ingress/" rel="nofollow noreferrer">this doc</a> for the former and the latter is linked in the sidebar there.</p>
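<p>For reference, a hedged sketch of what those labels roughly become as a Traefik 2.x <code>IngressRoute</code> (assuming a Service named <code>whoami</code> exposing port 80):</p>
<pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: whoami
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`whoami.com`)
      kind: Rule
      services:
        - name: whoami
          port: 80
</code></pre>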
| coderanger |
<p>What is the best way to restart all pods in a cluster? I was thinking of setting up a cronjob within Kubernetes to do this on a regular basis and make sure the cluster stays evenly load-balanced, but what is the best practice for doing this regularly? Also, what is the best way to do this as a one-time task?</p>
| mmiara | <p>This is a bad idea. Check out <a href="https://github.com/kubernetes-sigs/descheduler" rel="noreferrer">https://github.com/kubernetes-sigs/descheduler</a> instead to do it selectively and with actual analysis :)</p>
<p>But that said, <code>kubectl delete pod --all --all-namespaces</code> or similar.</p>
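<p>If the goal is a periodic or one-off rolling restart of workloads rather than deleting pods outright, <code>kubectl rollout restart</code> (kubectl 1.15+) is the gentler option:</p>
<pre><code># restart every deployment in a namespace
kubectl rollout restart deployment -n mynamespace

# or a single workload
kubectl rollout restart deployment/my-deployment -n mynamespace
</code></pre>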
| coderanger |
<p>I have read enough papers on serverless cold start, but have not found a clear explanation on what causes cold start. Could you try to explain it from both commercial and open-source platform's points of view?</p>
<ol>
<li>Commercial platforms such as AWS Lambda or Azure Functions. I know they are more like a black box to us.</li>
<li>There are open-source platforms such as OpenFaaS, Knative, or OpenWhisk. Do those platforms also have a cold start issue?</li>
</ol>
<p>My initial understanding is that cold start latency is the time spent spinning up a container. Once the container is up, it can be reused if it hasn't been killed yet, so there is a warm start. Is this understanding really true? I have tried running a container locally from an image, and no matter how large the image is, the latency is close to none.</p>
<p>Is the image download time also part of cold start? But no matter how many cold starts happened in one node, only one image download is needed, so this seems to make no sense.</p>
<p>Maybe a different question, I also wonder what happened when we instantiate a container from the image? Are the executable and its dependent libraries (e.g., Python library) copied from disk into memory during this stage? What if there are multiple containers based on the same image? I guess there should be multiple copies from disk to memory because each container is an independent process.</p>
| yeehaw | <p>There's a lot of levels of "cold start" that all add latency. The hottest of the hot paths is the container is still running and additional requests can be routed to it. The coldest is a brand new node so it has to pull the image, start the container, register with SD, wait for the serverless plane's routing stuffs to update, probably some more steps if you dig deep enough. Some of those can happen in parallel but most can't. If the pod has been shut down because it wasn't being used, and the next run schedules on the same machine then yes kubelet usually skips pulling image (unless imagePullPolicy Always is forced somewhere) so you get a bit of a faster launch. K8s' scheduler doesn't generally optimize for that though.</p>
| coderanger |
<p><a href="https://www.tutorialworks.com/kubernetes-imagepullbackoff/" rel="nofollow noreferrer">Re: ImagePullBackOff</a> errors: K8s "relies on the fact that images described in a Pod manifest are available across every machine in the cluster" ... and if that assumption is not met you might see ImagePullBackOff.</p>
<p>Just so I am totally clear here (my K8s cluster runs containerd) are these true?</p>
<ul>
<li>Once a new image is created, is the correct procedure to install it <em>to each node in the k8s cluster individually</em>?</li>
<li>Is that something the Terraform K8s provider can do? As far as I understand it, Terraform <strong>apply</strong> typically assumes the image is already known to containerd, because something else placed the images onto each node in the k8s cluster already. Terraform <strong>doesn't make the image nor does</strong> terraform place or stage images in containerd for ultimate K8s deploy. Terraform assumes that's already done.</li>
</ul>
<p>Originally I was thinking one just gives the image to the master K8s node, and then before/during deployment, K8s automagically replicates and installs it everywhere.</p>
| recepient recepient | <p>The normal approach is that your images are coming from a registry that is reachable from all nodes, either a public one like Docker Hub or GitHub Packages or a private one hosted locally (sometimes even inside the cluster).</p>
<p>Technically you <em>could</em> skip that and manually distribute images to nodes some other way (for example there are some preseeding tools that use BitTorrent for it) but those are generally extremely advanced use cases. Use a normal registry :)</p>
| coderanger |
<blockquote>
<p>I run a Pulumi deployment program to create a Kubernetes cluster with a deployment on Google Cloud Platform.
The Docker container in the deployment does not start.
The Docker image is operating system: linux, architecture: arm64.
I get the error message
"standard_init_linux.go:211: exec user process caused "exec format error""
I think the cause of this error message is that the processor I am running on is not ARM. I am now running on e2-standard-2.
The code that creates the cluster looks like this:</p>
</blockquote>
<pre><code>const cluster = new gcp.container.Cluster(name, {
    project: config.cloudProject,
    clusterAutoscaling: {
        enabled: true,
        resourceLimits: [
            { resourceType: 'cpu', minimum: 1, maximum: 20 },
            { resourceType: 'memory', minimum: 1, maximum: 64 },
        ],
    },
    initialNodeCount: 1,
    minMasterVersion: engineVersion,
    nodeVersion: engineVersion,
    nodeConfig: {
        machineType: "e2-standard-2",
        oauthScopes: [
            "https://www.googleapis.com/auth/compute",
            "https://www.googleapis.com/auth/devstorage.read_only",
            "https://www.googleapis.com/auth/logging.write",
            "https://www.googleapis.com/auth/monitoring"
        ],
    },
    location: config.cloudLocation,
});
</code></pre>
<blockquote>
<p>The code that creates a deployment looks like this</p>
</blockquote>
<pre><code>export const deployment = new k8s.apps.v1.Deployment("hello-world-deployment", {
    spec: {
        replicas: 1,
        selector: { matchLabels: { app: "hello-world" } },
        template: {
            metadata: {
                labels: { app: "hello-world" },
                namespace: nameSpaceName,
            },
            spec: {
                containers: [{
                    name: "hello-world-image",
                    image: "docker.io/steinko/gradle-ci-cd",
                    livenessProbe: {
                        httpGet: { path: '/actuator/health/liveness', port: 8080 },
                        initialDelaySeconds: 5,
                        timeoutSeconds: 1,
                        periodSeconds: 10,
                        failureThreshold: 3,
                    },
                }],
            },
        },
    },
});
</code></pre>
<blockquote>
<p>How do I get the docker component up and running?</p>
</blockquote>
| stein korsveien | <p>You are correct, the image you pushed is only for ARM64 while the E2 machine type is x86_64. The two are not interchangable. You'll need to build an x86_64 version of the image either in addition or instead of the current one.</p>
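<p>For example, one way to publish both architectures under the same tag with Docker Buildx (assuming the Dockerfile builds cleanly on both platforms):</p>
<pre><code>docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t docker.io/steinko/gradle-ci-cd \
  --push .
</code></pre>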
| coderanger |
<p>I need to monitor the creation of new namespaces in my k8s infrastructure so that when a new namespace is created, a series of commands is executed, like assigning permissions and creating PVCs. Could someone help me?</p>
| user2530802 | <p>Yes, it is possible. But showing how to do it is way out of scope for a StackOverflow answer. The short answer is you do it just like any other controller but the root object is something from Kubernetes core rather than your own struct(s).</p>
| coderanger |
<p>Before we deployed services to Kubernetes, we used to write logs to files. There have been times that some of that information we were logging was considerable. In those situations, it is definitely worth considering using a logback AsyncAppender, to get a little bit better performance.</p>
<p>If these services are running in a Kubernetes container, where we write logs directly to stdout, and other processes are collecting stdout and sending it to a log aggregator, I would think the performance questions are different.</p>
<p>Is there evidence that in this kind of situation, implementing an AsyncAppender, as opposed to just a synchronous appender, would make an appreciable difference in performance? Setting up testbeds to get reliable measurements of this will be a significant undertaking. I've seen documentation of results testing async appenders while writing to files, but I think that isn't quite the same situation.</p>
<p>Can anyone document real performance differences with AsyncAppenders in this kind of environment?</p>
| David M. Karr | <p>Stdout/err from the container is, usually, connected to a file. It's possibly via some number of pipes rather than a direct thing but somewhere under all of it is probably still a disk that needs to keep up with all the writes. That said, the pipe buffers in your CRI plugin can probably eat most spikes and level them out so "diluted" might be fair, but that depends on the specifics of your CRI and how it's configured.</p>
| coderanger |
<p>I made a deployment and scaled out to 2 replicas. And I made a service to forward it.</p>
<p>I found that kube-proxy uses iptables for forwarding from Service to Pod. But the load balancing strategy of iptables is RANDOM.</p>
<p>How can I force my service to forward requests to 2 pods using round-robin strategy without switching my kube-proxy to <code>userspace</code> or <code>ipvs</code> mode?</p>
| Peng Deng | <p>You cannot, the strategies are only supported in ipvs mode. The option is even called <code>--ipvs-scheduler</code>.</p>
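<p>For reference, if you did switch, the round-robin setting lives in the kube-proxy configuration (a sketch of the relevant <code>KubeProxyConfiguration</code> fields; <code>rr</code> is round-robin):</p>
<pre><code>apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "rr"
</code></pre>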
| coderanger |
<p>I have a use case in which a front-end application sends a file to a back-end service for processing, and the back-end service pod can only process one request at a time. If multiple requests come in, the service should autoscale and send each request to a new pod.
So I am looking for a way to spawn a new pod for each request; after the back-end pod finishes processing, it returns the result to the front-end service and destroys itself,
so that each pod only processes a single request at a time.</p>
<p>I explored HPA autoscaling but did not find any suitable way.
I am open to using any custom metrics server for that, and can even use Jobs if they are able to fulfill the above scenario.</p>
<p>So if someone has knowledge of, or has tackled, the same use case, please help me so that I can also try that solution.
Thanks in advance.</p>
| Prakul Singhal | <p>There's not really anything built-in for this that I can think of. You could create a service account for your app that has permissions to create pods, and then build the spawning behavior into your app code directly. If you can get metrics about which pods are available, you could use HPA with Prometheus to ensure there is always at least one unoccupied backend, but that depends on what kind of metrics your stuff exposes.</p>
| coderanger |
<p>I have a docker image. I want to analyze the docker image history, for this I can use <code>docker image history</code> command in the docker installed environment.</p>
<p>But when I am working in an OpenShift cluster, I may not have access to the <code>docker</code> command there. So I want to get the <code>docker history</code> result for a given image.</p>
<p>So basically I have a docker image and I don't have docker installed there. In this case how can I get the history of that docker image?</p>
<p>Can anyone please help me on this?</p>
| Abdul | <p>You can get the registry info either via curl or <code>skopeo inspect</code>. But the rest of the metadata is stored inside the image itself so you do have to download at least the final layer.</p>
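<p>For example, with skopeo (the image reference is a placeholder; <code>--config</code> prints the image configuration blob, whose <code>history</code> entries are what <code>docker image history</code> is built from):</p>
<pre><code>skopeo inspect docker://docker.io/library/nginx:latest
skopeo inspect --config docker://docker.io/library/nginx:latest
</code></pre>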
| coderanger |
<p><code>kubectl explain serviceaccount.secrets</code> describes ServiceAccount Secrets as the secrets allowed to be used by Pods running using this ServiceAccount, but what effect does adding a Secret name to this list have?</p>
<p>The ServiceAccount token Secret (which is automatically added to this list) gets automatically mounted as a volume into all containers in a Pod running using this ServiceAccount (as long as the ServiceAccount admission controller is enabled), but what happens for other secrets?</p>
| dippynark | <p>It holds the name of all secrets containing tokens for that SA so when the controller goes to rotate things, it knows where to find them.</p>
| coderanger |
<p>I'm developing an app whose logs contain custom fields for metric purposes.
Therefore, we produce the logs in JSON format and send them to an Elasticsearch cluster.
We're currently working on migrating the app from a local Docker node to our organization's Kubernetes cluster.
Our cluster uses Fluentd as a DaemonSet, to output logs from all pods to our Elasticsearch cluster.
The setup is similar to this: <a href="https://medium.com/kubernetes-tutorials/cluster-level-logging-in-kubernetes-with-fluentd-e59aa2b6093a" rel="nofollow noreferrer">https://medium.com/kubernetes-tutorials/cluster-level-logging-in-kubernetes-with-fluentd-e59aa2b6093a</a></p>
<p>I'm trying to figure out what's the best practice to send logs from our app. My two requirements are:</p>
<ol>
<li>That the logs are formatted correctly in JSON format. I don't want them to be nested in the <code>msg</code> field of the persisted document.</li>
<li>That I can run <code>kubectl logs -f <pod></code> and view the logs in readable text format.</li>
</ol>
<p>Currently, if I don't do anything and let the DaemonSet send the logs, <strong>it'll fail both requirements</strong>.</p>
<p>The best solution I thought about is to ask the administrators of our Kubernetes cluster to replace the Fluentd logging with Fluentbit.
Then I can configure my deployment like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: example-app
labels:
app: example-app
annotations:
fluentbit.io/parser-example-app: json
fluentbit.io/exclude-send-logs: "true"
spec:
selector:
matchLabels:
app: example-app
template:
metadata:
labels:
app: example-app
spec:
containers:
- name: example-app
image: myapp:1.0.0
volumeMounts:
- name: app-logs
mountPath: "/var/log/app"
- name: tail-logs
image: busybox
args: [/bin/sh, -c, 'tail -f /var/log/example-app.log']
volumeMounts:
- name: app-logs
mountPath: "/var/log/app"
volumes:
- name: app-logs
emptyDir: {}
</code></pre>
<p>Then the logs are sent to the Elasticsearch in correct JSON format, and I can run <code>kubectl logs -f example-app -c tail-logs</code> to view them in a readable format.</p>
<p>Is this the best practice though? Am I missing a simpler solution?
Is there an alternative supported by Fluentd?</p>
<p>I'll be glad to hear your opinion :)</p>
| Itai Spiegel | <p>There isn't really a good option here that isn't going to chew massive amounts of CPU. The closest things I can suggest other than the solution you mentioned above is inverting it where the main output stream is unformatted and you run Fluent* (usually Bit) are a sidecar on a secondary file stream. That's no better though.</p>
<p>Really most of us just make the output be in JSON format and on the rare occasions we need to manually poke at logs outside of the normal UI (Kibana, Grafana, whatever), we just deal with the annoyance.</p>
<p>You could also theoretically make your "human" format sufficiently machine parsable to allow for querying. The usual choice there is "logfmt", aka <code>key=value</code> pairs. So my log lines on logfmt-y services look like <code>timestamp=2021-05-15T03:48:05.171973Z level=info event="some message" otherkey=1 foo="bar baz"</code>. That's simple enough to read by hand but also can be parsed efficiently.</p>
| coderanger |
<p>Today I want to increase the PostgreSQL max connections, so I added this config to my Kubernetes PostgreSQL StatefulSet:</p>
<pre><code>args:
  - '-c'
  - max_connections=500
</code></pre>
<p>but it throws this error:</p>
<pre><code>postgresql 01:49:57.73
postgresql 01:49:57.74 Welcome to the Bitnami postgresql container
postgresql 01:49:57.74 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-postgresql
postgresql 01:49:57.74 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-postgresql/issues
postgresql 01:49:57.74
/opt/bitnami/scripts/postgresql/entrypoint.sh: line 30: exec: max_connections=500: not found
2021-08-11T09:49:57.764236886+08:00
</code></pre>
<p>this is the config in kubernetes v1.21.3:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: reddwarf-postgresql-postgresql
namespace: reddwarf-storage
uid: 787a18c8-f6fb-4deb-bb07-3c3d123cf6f9
resourceVersion: '1435471'
generation: 9
creationTimestamp: '2021-08-05T05:29:03Z'
labels:
app.kubernetes.io/component: primary
app.kubernetes.io/instance: reddwarf-postgresql
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: postgresql
helm.sh/chart: postgresql-10.9.1
annotations:
kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{"meta.helm.sh/release-name":"reddwarf-postgresql","meta.helm.sh/release-namespace":"reddwarf-storage"},"creationTimestamp":"2021-08-05T05:29:03Z","generation":9,"labels":{"app.kubernetes.io/component":"primary","app.kubernetes.io/instance":"reddwarf-postgresql","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"postgresql","helm.sh/chart":"postgresql-10.9.1"},"managedFields":[{"apiVersion":"apps/v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/name":{},"f:helm.sh/chart":{}}},"f:spec":{"f:podManagementPolicy":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:serviceName":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:app.kubernetes.io/component":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/name":{},"f:helm.sh/chart":{},"f:role":{}},"f:name":{}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:automountServiceAccountToken":{},"f:containers":{"k:{\"name\":\"reddwarf-postgresql\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"BITNAMI_DEBUG\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"PGDATA\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_CLIENT_MIN_MESSAGES\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_ENABLE_LDAP\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_ENABLE_TLS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_LOG_CONNECTIONS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_LOG_DISCONNECTIONS\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_LOG_HOSTNAME\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_PGAUDIT_LOG_CATALOG\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_PORT_NUMBER\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_SHARED_PRELOAD_LIBRARIES\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRESQL_VOLUME_DIR\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"POSTGRES_PASSWORD\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:secretKeyRef":{".":{},"f:key":{},"f:name":{}}}},"k:{\"name\":\"POSTGRES_USER\"}":{".":{},"f:name":{},"f:value":{}}},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":5432,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/bitnami/postgresql\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/dev/shm\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{".":{},"f:fsGroup":{}},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"
name\":\"dshm\"}":{".":{},"f:emptyDir":{".":{},"f:medium":{}},"f:name":{}}}}},"f:updateStrategy":{"f:type":{}},"f:volumeClaimTemplates":{}}},"manager":"Go-http-client","operation":"Update","time":"2021-08-05T05:29:03Z"},{"apiVersion":"apps/v1","fieldsType":"FieldsV1","fieldsV1":{"f:spec":{"f:template":{"f:spec":{"f:containers":{"k:{\"name\":\"reddwarf-postgresql\"}":{"f:image":{}}}}}}},"manager":"kubectl-client-side-apply","operation":"Update","time":"2021-08-10T16:50:45Z"},{"apiVersion":"apps/v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:template":{"f:spec":{"f:containers":{"k:{\"name\":\"reddwarf-postgresql\"}":{"f:args":{}}}}}}},"manager":"kubectl","operation":"Update","time":"2021-08-11T01:46:21Z"},{"apiVersion":"apps/v1","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:collisionCount":{},"f:currentRevision":{},"f:observedGeneration":{},"f:replicas":{},"f:updateRevision":{},"f:updatedReplicas":{}}},"manager":"kube-controller-manager","operation":"Update","time":"2021-08-11T01:46:57Z"}],"name":"reddwarf-postgresql-postgresql","namespace":"reddwarf-storage","selfLink":"/apis/apps/v1/namespaces/reddwarf-storage/statefulsets/reddwarf-postgresql-postgresql","uid":"787a18c8-f6fb-4deb-bb07-3c3d123cf6f9"},"spec":{"podManagementPolicy":"OrderedReady","replicas":1,"revisionHistoryLimit":10,"selector":{"matchLabels":{"app.kubernetes.io/instance":"reddwarf-postgresql","app.kubernetes.io/name":"postgresql","role":"primary"}},"serviceName":"reddwarf-postgresql-headless","template":{"metadata":{"creationTimestamp":null,"labels":{"app.kubernetes.io/component":"primary","app.kubernetes.io/instance":"reddwarf-postgresql","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"postgresql","helm.sh/chart":"postgresql-10.9.1","role":"primary"},"name":"reddwarf-postgresql"},"spec":{"affinity":{"podAntiAffinity":{"preferredDuringSchedulingIgnoredDuringExecution":[{"podAffinityTerm":{"labelSelector":{"matchLabels":{"app.kubernetes.io/component":"primary","app.kubernetes.io/instance":"reddwarf-postgresql","app.kubernetes.io/name":"postgresql"}},"namespaces":["reddwarf-storage"],"topologyKey":"kubernetes.io/hostname"},"weight":1}]}},"automountServiceAccountToken":false,"containers":[{"args":["-c","max_connections=500"],"env":[{"name":"BITNAMI_DEBUG","value":"false"},{"name":"POSTGRESQL_PORT_NUMBER","value":"5432"},{"name":"POSTGRESQL_VOLUME_DIR","value":"/bitnami/postgresql"},{"name":"PGDATA","value":"/bitnami/postgresql/data"},{"name":"POSTGRES_USER","value":"postgres"},{"name":"POSTGRES_PASSWORD","valueFrom":{"secretKeyRef":{"key":"postgresql-password","name":"reddwarf-postgresql"}}},{"name":"POSTGRESQL_ENABLE_LDAP","value":"no"},{"name":"POSTGRESQL_ENABLE_TLS","value":"no"},{"name":"POSTGRESQL_LOG_HOSTNAME","value":"false"},{"name":"POSTGRESQL_LOG_CONNECTIONS","value":"false"},{"name":"POSTGRESQL_LOG_DISCONNECTIONS","value":"false"},{"name":"POSTGRESQL_PGAUDIT_LOG_CATALOG","value":"off"},{"name":"POSTGRESQL_CLIENT_MIN_MESSAGES","value":"error"},{"name":"POSTGRESQL_SHARED_PRELOAD_LIBRARIES","value":"pgaudit"}],"image":"docker.io/bitnami/postgresql:13.3.0-debian-10-r75","imagePullPolicy":"IfNotPresent","livenessProbe":{"exec":{"command":["/bin/sh","-c","exec
pg_isready -U \"postgres\" -h 127.0.0.1 -p
5432"]},"failureThreshold":6,"initialDelaySeconds":30,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":5},"name":"reddwarf-postgresql","ports":[{"containerPort":5432,"name":"tcp-postgresql","protocol":"TCP"}],"readinessProbe":{"exec":{"command":["/bin/sh","-c","-e","exec
pg_isready -U \"postgres\" -h 127.0.0.1 -p 5432\n[ -f
/opt/bitnami/postgresql/tmp/.initialized ] || [ -f
/bitnami/postgresql/.initialized
]\n"]},"failureThreshold":6,"initialDelaySeconds":5,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":5},"resources":{"requests":{"cpu":"250m","memory":"256Mi"}},"securityContext":{"runAsUser":1001},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","volumeMounts":[{"mountPath":"/dev/shm","name":"dshm"},{"mountPath":"/bitnami/postgresql","name":"data"}]}],"dnsPolicy":"ClusterFirst","restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{"fsGroup":1001},"terminationGracePeriodSeconds":30,"volumes":[{"emptyDir":{"medium":"Memory"},"name":"dshm"}]}},"updateStrategy":{"type":"RollingUpdate"},"volumeClaimTemplates":[{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"creationTimestamp":null,"name":"data"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"8Gi"}},"volumeMode":"Filesystem"},"status":{"phase":"Pending"}}]}}
meta.helm.sh/release-name: reddwarf-postgresql
meta.helm.sh/release-namespace: reddwarf-storage
managedFields:
- manager: Go-http-client
operation: Update
apiVersion: apps/v1
time: '2021-08-05T05:29:03Z'
fieldsType: FieldsV1
fieldsV1:
'f:metadata':
'f:annotations':
.: {}
'f:meta.helm.sh/release-name': {}
'f:meta.helm.sh/release-namespace': {}
'f:labels':
.: {}
'f:app.kubernetes.io/component': {}
'f:app.kubernetes.io/instance': {}
'f:app.kubernetes.io/managed-by': {}
'f:app.kubernetes.io/name': {}
'f:helm.sh/chart': {}
'f:spec':
'f:podManagementPolicy': {}
'f:replicas': {}
'f:revisionHistoryLimit': {}
'f:selector': {}
'f:serviceName': {}
'f:template':
'f:metadata':
'f:labels':
.: {}
'f:app.kubernetes.io/component': {}
'f:app.kubernetes.io/instance': {}
'f:app.kubernetes.io/managed-by': {}
'f:app.kubernetes.io/name': {}
'f:helm.sh/chart': {}
'f:role': {}
'f:name': {}
'f:spec':
'f:affinity':
.: {}
'f:podAntiAffinity':
.: {}
'f:preferredDuringSchedulingIgnoredDuringExecution': {}
'f:automountServiceAccountToken': {}
'f:containers':
'k:{"name":"reddwarf-postgresql"}':
.: {}
'f:env':
.: {}
'k:{"name":"BITNAMI_DEBUG"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"PGDATA"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"POSTGRESQL_CLIENT_MIN_MESSAGES"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"POSTGRESQL_ENABLE_LDAP"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"POSTGRESQL_ENABLE_TLS"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"POSTGRESQL_LOG_CONNECTIONS"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"POSTGRESQL_LOG_DISCONNECTIONS"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"POSTGRESQL_LOG_HOSTNAME"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"POSTGRESQL_PGAUDIT_LOG_CATALOG"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"POSTGRESQL_PORT_NUMBER"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"POSTGRESQL_SHARED_PRELOAD_LIBRARIES"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"POSTGRESQL_VOLUME_DIR"}':
.: {}
'f:name': {}
'f:value': {}
'k:{"name":"POSTGRES_PASSWORD"}':
.: {}
'f:name': {}
'f:valueFrom':
.: {}
'f:secretKeyRef':
.: {}
'f:key': {}
'f:name': {}
'k:{"name":"POSTGRES_USER"}':
.: {}
'f:name': {}
'f:value': {}
'f:imagePullPolicy': {}
'f:livenessProbe':
.: {}
'f:exec':
.: {}
'f:command': {}
'f:failureThreshold': {}
'f:initialDelaySeconds': {}
'f:periodSeconds': {}
'f:successThreshold': {}
'f:timeoutSeconds': {}
'f:name': {}
'f:ports':
.: {}
'k:{"containerPort":5432,"protocol":"TCP"}':
.: {}
'f:containerPort': {}
'f:name': {}
'f:protocol': {}
'f:readinessProbe':
.: {}
'f:exec':
.: {}
'f:command': {}
'f:failureThreshold': {}
'f:initialDelaySeconds': {}
'f:periodSeconds': {}
'f:successThreshold': {}
'f:timeoutSeconds': {}
'f:resources':
.: {}
'f:requests':
.: {}
'f:cpu': {}
'f:memory': {}
'f:securityContext':
.: {}
'f:runAsUser': {}
'f:terminationMessagePath': {}
'f:terminationMessagePolicy': {}
'f:volumeMounts':
.: {}
'k:{"mountPath":"/bitnami/postgresql"}':
.: {}
'f:mountPath': {}
'f:name': {}
'k:{"mountPath":"/dev/shm"}':
.: {}
'f:mountPath': {}
'f:name': {}
'f:dnsPolicy': {}
'f:restartPolicy': {}
'f:schedulerName': {}
'f:securityContext':
.: {}
'f:fsGroup': {}
'f:terminationGracePeriodSeconds': {}
'f:volumes':
.: {}
'k:{"name":"dshm"}':
.: {}
'f:emptyDir':
.: {}
'f:medium': {}
'f:name': {}
'f:updateStrategy':
'f:type': {}
'f:volumeClaimTemplates': {}
- manager: kubectl-client-side-apply
operation: Update
apiVersion: apps/v1
time: '2021-08-10T16:50:45Z'
fieldsType: FieldsV1
fieldsV1:
'f:spec':
'f:template':
'f:spec':
'f:containers':
'k:{"name":"reddwarf-postgresql"}':
'f:image': {}
- manager: kubectl
operation: Update
apiVersion: apps/v1
time: '2021-08-11T01:46:21Z'
fieldsType: FieldsV1
fieldsV1:
'f:metadata':
'f:annotations':
'f:kubectl.kubernetes.io/last-applied-configuration': {}
'f:spec':
'f:template':
'f:spec':
'f:containers':
'k:{"name":"reddwarf-postgresql"}':
'f:args': {}
- manager: kube-controller-manager
operation: Update
apiVersion: apps/v1
time: '2021-08-11T01:46:57Z'
fieldsType: FieldsV1
fieldsV1:
'f:status':
'f:collisionCount': {}
'f:currentRevision': {}
'f:observedGeneration': {}
'f:replicas': {}
'f:updateRevision': {}
'f:updatedReplicas': {}
selfLink: >-
/apis/apps/v1/namespaces/reddwarf-storage/statefulsets/reddwarf-postgresql-postgresql
status:
observedGeneration: 9
replicas: 1
updatedReplicas: 1
currentRevision: reddwarf-postgresql-postgresql-5695cb9676
updateRevision: reddwarf-postgresql-postgresql-8576c5d4c5
collisionCount: 0
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/instance: reddwarf-postgresql
app.kubernetes.io/name: postgresql
role: primary
template:
metadata:
name: reddwarf-postgresql
creationTimestamp: null
labels:
app.kubernetes.io/component: primary
app.kubernetes.io/instance: reddwarf-postgresql
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: postgresql
helm.sh/chart: postgresql-10.9.1
role: primary
spec:
volumes:
- name: dshm
emptyDir:
medium: Memory
containers:
- name: reddwarf-postgresql
image: 'docker.io/bitnami/postgresql:13.3.0-debian-10-r75'
args:
- '-c'
- max_connections=500
ports:
- name: tcp-postgresql
containerPort: 5432
protocol: TCP
env:
- name: BITNAMI_DEBUG
value: 'false'
- name: POSTGRESQL_PORT_NUMBER
value: '5432'
- name: POSTGRESQL_VOLUME_DIR
value: /bitnami/postgresql
- name: PGDATA
value: /bitnami/postgresql/data
- name: POSTGRES_USER
value: postgres
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: reddwarf-postgresql
key: postgresql-password
- name: POSTGRESQL_ENABLE_LDAP
value: 'no'
- name: POSTGRESQL_ENABLE_TLS
value: 'no'
- name: POSTGRESQL_LOG_HOSTNAME
value: 'false'
- name: POSTGRESQL_LOG_CONNECTIONS
value: 'false'
- name: POSTGRESQL_LOG_DISCONNECTIONS
value: 'false'
- name: POSTGRESQL_PGAUDIT_LOG_CATALOG
value: 'off'
- name: POSTGRESQL_CLIENT_MIN_MESSAGES
value: error
- name: POSTGRESQL_SHARED_PRELOAD_LIBRARIES
value: pgaudit
resources:
requests:
cpu: 250m
memory: 256Mi
volumeMounts:
- name: dshm
mountPath: /dev/shm
- name: data
mountPath: /bitnami/postgresql
livenessProbe:
exec:
command:
- /bin/sh
- '-c'
- exec pg_isready -U "postgres" -h 127.0.0.1 -p 5432
initialDelaySeconds: 30
timeoutSeconds: 5
periodSeconds: 10
successThreshold: 1
failureThreshold: 6
readinessProbe:
exec:
command:
- /bin/sh
- '-c'
- '-e'
- >
exec pg_isready -U "postgres" -h 127.0.0.1 -p 5432
[ -f /opt/bitnami/postgresql/tmp/.initialized ] || [ -f
/bitnami/postgresql/.initialized ]
initialDelaySeconds: 5
timeoutSeconds: 5
periodSeconds: 10
successThreshold: 1
failureThreshold: 6
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
securityContext:
runAsUser: 1001
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
automountServiceAccountToken: false
securityContext:
fsGroup: 1001
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/component: primary
app.kubernetes.io/instance: reddwarf-postgresql
app.kubernetes.io/name: postgresql
namespaces:
- reddwarf-storage
topologyKey: kubernetes.io/hostname
schedulerName: default-scheduler
volumeClaimTemplates:
- kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: data
creationTimestamp: null
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi
volumeMode: Filesystem
status:
phase: Pending
serviceName: reddwarf-postgresql-headless
podManagementPolicy: OrderedReady
updateStrategy:
type: RollingUpdate
revisionHistoryLimit: 10
</code></pre>
<p>what should I do increase the max connection of PostgreSQL in kubernetes? I read the <a href="https://artifacthub.io/packages/helm/bitnami/postgresql" rel="nofollow noreferrer">docs</a> and using <code>postgresqlMaxConnections</code> still not work.</p>
| Dolphin | <p>If you open the link that Bitnami helpfully provided you right there in the output you can find the documentation for the image. <a href="https://github.com/bitnami/bitnami-docker-postgresql#configuration-file" rel="nofollow noreferrer">https://github.com/bitnami/bitnami-docker-postgresql#configuration-file</a> seems to be the most relevant part to you though.</p>
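<p>In the meantime, a quick sanity check (a sketch; it assumes the pod is named <code>reddwarf-postgresql-postgresql-0</code>, which follows from the StatefulSet name above) is to ask the server directly whether the <code>-c max_connections=500</code> argument took effect:</p>
<pre><code>kubectl -n reddwarf-storage exec -it reddwarf-postgresql-postgresql-0 -- \
  sh -c 'PGPASSWORD="$POSTGRES_PASSWORD" psql -U postgres -c "SHOW max_connections;"'
</code></pre>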
| coderanger |
<p>I have one pod that I want to automatically restart once a day. I've looked at the Cronjob documentation and I think I'm close, but I keep getting an Exit Code 1 error. I'm not sure if there's an obvious error in my .yaml. If not, I can post the error log as well. Here's my code:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: my-deployment-restart
spec:
schedule: "0 20 * * *"
jobTemplate:
spec:
template:
spec:
restartPolicy: Never
containers:
- name: kubectl
image: bitnami/kubectl
command:
- 'kubectl'
- 'rollout'
- 'restart'
- 'deployment my-deployment'
</code></pre>
| akabin | <p>You would need to give it permissions to access the API, that means making a ServiceAccount and some RBAC policy objects (Role, RoleBinding) and then set <code>serviceAccountName</code> in your pod spec there.</p>
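<p>A minimal sketch of what that could look like (names and the <code>default</code> namespace are illustrative; put everything in the namespace where the Deployment lives). <code>rollout restart</code> works by patching the Deployment, so the Role needs <code>get</code> and <code>patch</code> on deployments:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: deployment-restart
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-restart
  namespace: default
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-restart
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deployment-restart
subjects:
  - kind: ServiceAccount
    name: deployment-restart
    namespace: default
</code></pre>
<p>Then set <code>serviceAccountName: deployment-restart</code> in the CronJob's pod spec. It is also worth writing the last argument as <code>deployment/my-deployment</code> (or as two separate args) rather than one quoted string containing a space, since kubectl treats <code>'deployment my-deployment'</code> as a single unknown resource name.</p>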
| coderanger |
<p>I am running 4 services inside a Kubernetes pod, all on the same port, but the health check endpoint is different for every service. I want to add multiple paths to the liveness/readiness probe.
Currently I am using the configuration below to check the health of one service. I want to add more paths. How can I do that? I have tried chaining them ("/service1/health && /service2/health && /service3/health") but that did not work. </p>
<pre><code>livenessProbe:
httpGet:
path: /service1/health
port: 8080
httpHeaders:
- name: Custom-Header
value: iamalive
initialDelaySeconds: 60
failureThreshold: 5
periodSeconds: 60
timeoutSeconds: 120
</code></pre>
| neha | <p>You kind of can't. I mean technically you could use an exec probe and write your own logic in there, but really really don't do that. If your 4 services are independent, you should make 4 separate Deployments for them. Kubernetes does sometimes force you to rearchitect your systems but every time it does this it's pushing you towards a better system :)</p>
| coderanger |
<p>I have a Kubernetes template file that contains multiple templates inside, for example:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: logstash-config
namespace: elastic-foo
data:
logstash.yml: |
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
logstash.conf: |
# all input will come from filebeat, no local logs
input {
}
filter {
}
output {
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: logstash-deployment
namespace: elastic-foo
spec:
replicas: 1
selector:
matchLabels:
app: logstash
template:
metadata:
labels:
app: logstash
spec:
containers:
- name: logstash
image: docker.elastic.co/logstash/logstash:7.10.1
env:
ports:
- containerPort: 5044
volumeMounts:
- name: config-volume
mountPath: /usr/share/logstash/config
- name: logstash-pipeline-volume
mountPath: /usr/share/logstash/pipeline
volumes:
- name: config-volume
configMap:
name: logstash-config
items:
- key: logstash.yml
path: logstash.yml
- name: logstash-pipeline-volume
configMap:
name: logstash-config
items:
- key: logstash.conf
path: logstash.conf
---
kind: Service
apiVersion: v1
metadata:
name: logstash-service
namespace: elastic-foo
spec:
selector:
app: logstash
ports:
- protocol: TCP
port: 5044
targetPort: 5044
type: ClusterIP
</code></pre>
<p>My question is: how can I apply only one of these templates by its kind, for example:</p>
<pre><code>kubectl create -f Service or Deployment or ConfigMap -f my_config.yml
</code></pre>
| user63898 | <p>That is not a feature of Kubectl. You can split them into different files or pipe through a tool like <code>yq</code> and use <code>kubectl create -f -</code>.</p>
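<p>For example (assuming mikefarah's yq v4; other yq variants use different syntax), you can select a single document by its <code>kind</code> and pipe it straight to kubectl:</p>
<pre><code>yq eval 'select(.kind == "Service")' my_config.yml | kubectl create -f -
yq eval 'select(.kind == "Deployment")' my_config.yml | kubectl create -f -
</code></pre>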
| coderanger |
<p>I create a Secret and mount it as a file in a pod (configured in the Deployment).
If I change the Secret, the mounted file is updated within a few seconds.
But how can I check that the file has actually been updated? I don't want to exec into the pod to check it, because I want to check it via the Kubernetes API or the resource status. Is there any way to do that?</p>
| Glaha | <p>You wouldn't really in general check that Kubernetes is not broken. Unless you think you've found a bug, in which case you would use <code>kubectl exec</code> and probably many other things to try and track it down.</p>
| coderanger |
<p>In a custom Kubernetes operator implemented with the operator-sdk in golang is it possible to call the custom API directly and retrieve the object as YAML?</p>
<p>For example. I have a custom resource</p>
<pre><code>apiVersion: test.com/v1alpha1
kind: TEST
metadata:
name: example-test
spec:
replicas: 3
randomname: value
</code></pre>
<p>I don't know ahead of time what the fields in the spec are going to be apart from replicas. So I am not able to create a go type that includes structs to hold the entries.</p>
<p>So rather than doing:</p>
<pre><code>instance := &testv1alpha1.Test{}
err := r.client.Get(context.TODO(), nameSpaceName, instance)
</code></pre>
<p>I want to be able to do something like:</p>
<pre><code>instanceYAML := genericContainer{}
err := r.client.GetGeneric(context.TODO(), nameSpaceName, instance)
</code></pre>
<p>and then parse the instanceYAML to check the entries.</p>
| Kerry Gunn | <p>This is called the "unstructured" client. The docs are pretty light so I recommend looking over the tests as examples <a href="https://github.com/kubernetes-sigs/controller-runtime/blob/ea32729106c995d9df310ac4731c2061490addfb/pkg/client/client_test.go#L1536-L1566" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/controller-runtime/blob/ea32729106c995d9df310ac4731c2061490addfb/pkg/client/client_test.go#L1536-L1566</a></p>
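<p>A minimal sketch of that approach (the GVK values come from your CR; the function wrapper and names are just for illustration):</p>
<pre><code>import (
    "context"

    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
    "k8s.io/apimachinery/pkg/runtime/schema"
    "sigs.k8s.io/controller-runtime/pkg/client"
)

func getTest(ctx context.Context, c client.Client, key client.ObjectKey) (map[string]interface{}, error) {
    // Fetch the CR without defining a typed Go struct for it.
    u := &unstructured.Unstructured{}
    u.SetGroupVersionKind(schema.GroupVersionKind{
        Group:   "test.com",
        Version: "v1alpha1",
        Kind:    "TEST",
    })
    if err := c.Get(ctx, key, u); err != nil {
        return nil, err
    }

    // Known fields can be read with the Nested* helpers...
    if replicas, found, err := unstructured.NestedInt64(u.Object, "spec", "replicas"); err == nil && found {
        _ = replicas // e.g. 3 for the example CR
    }

    // ...and the whole spec, including arbitrary fields, is just a map.
    spec, _, _ := unstructured.NestedMap(u.Object, "spec")
    return spec, nil
}
</code></pre>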
| coderanger |
<p>I know this is a bit weird, but I'm building an application that makes small local changes to ephemeral file/folder systems and needs to sync them with a store of record. I am using NFS right now, but it is slow, not super scalable, and expensive. Instead, I'd love to take advantage of <code>btrfs</code> or <code>zfs</code> snapshotting for efficient syncing of snapshots of a small local filesystem, and push the snapshots into cloud storage.</p>
<p>I am running this application in Kubernetes (in GKE), which uses GCP VMs with <code>ext4</code> formatted root partitions. This means that when I mount an <code>emptyDir</code> volume into my pods, the folder is on an <code>ext4</code> filesystem I believe.</p>
<p>Is there an easy way to get an ephemeral volume mounted with a different filesystem that supports these fancy snapshotting operations?</p>
| hornairs | <p>No. Nor does GKE offer that kind of low level control anyway but the rest of this answer presumes you've managed to create a local mount of some kind. The easiest answer is a hostPath mount, however that requires you manually account for multiple similar pods on the same host so they don't collide. A new option is an ephemeral CSI volume combined with a CSI plugin that basically reimplements emptyDir. <a href="https://github.com/kubernetes-csi/csi-driver-host-path" rel="nofollow noreferrer">https://github.com/kubernetes-csi/csi-driver-host-path</a> gets most of the way there but would 1) require more work for this use case and 2) is <em>explicitly</em> not supported for production use. Failing either of those, you can move the whole kubelet data directory onto another mount, though that might not accomplish what you are looking for.</p>
| coderanger |
<p>I'm not finding any answers to this on Google but I may just not know what terms to search for.</p>
<p>In a CRD, is there a way to define a field in the spec that is a secret (and therefore shouldn't be stored in plain text)? For example, if the custom resource needs to have an API token included in it, how do you define that in the CRD? </p>
<p>One thought I had was to just have the user create a Secret outside of the CRD and then provide the secret's name in a custom resource field so the operator can query it from the K8s API on demand when needed (and obviously associated RBAC needs to be configured so the operator has read access to the Secret). So the field in the CRD would just be a normal string that is the name of the target Secret.</p>
<p>But is there a better way? Any existing best practices around this?</p>
| Freedom_Ben | <p>You do indeed just store the value in an actual Secret and reference it. You'll find the same pattern all over k8s. Then in your controller code you get your custom object, find the ref, get that secret, and then you have your data.</p>
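<p>A small illustration of that pattern (the field names here are made up for the example, they are not a standard):</p>
<pre><code>apiVersion: example.com/v1alpha1
kind: MyApp
metadata:
  name: example
spec:
  replicas: 3
  apiTokenSecretRef:
    name: myapp-api-token   # a normal Secret created separately
    key: token
</code></pre>
<p>The controller then reads the Secret named in <code>apiTokenSecretRef</code> with its own (RBAC-limited) client, so the token itself never appears in the custom resource.</p>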
| coderanger |
<p>Is it possible, through either HTTPS or SSH, to clone from a private repo <strong>without creating a secrets file with my git credentials?</strong> I don't see why this is recommended; anyone in the Kubernetes cluster can view my git credentials if they want to...</p>
<p>Both of the top two answers advocate this dangerously unsafe practice
<a href="https://stackoverflow.com/questions/42422892/kubernetes-init-containers-using-a-private-repo">see</a> and <a href="https://stackoverflow.com/questions/42462244/using-kubernetes-init-containers-on-a-private-repo">also</a>.</p>
<p>I've also been looking at git-sync but it also wants to expose the git credentials to everyone in the cluster <a href="https://stackoverflow.com/questions/42462244/using-kubernetes-init-containers-on-a-private-repo">see this answer</a>.</p>
<p>Is it assumed that you'd have a service account for this? What if I don't have a service account? Am I just out of luck?</p>
| nodel | <p>The credentials have to exist somewhere and a Secret is the best place for them. You wouldn't give access to "anyone" though, you should use the Kubernetes RBAC policy system to limit access to Secret objects to only places and people that need them. There are other solutions which read directly from some other database (Hashicorp Vault, AWS SSM, GCP SM, etc) but they are generally the same in terms of access control since the pod would be authenticating to that other system using its ServiceAccount token which ... is in a Secret. If you go full-out on this I'm sure you can find some kind of HSM which supports GitHub but unless you have a lot of hundreds of thousands of dollars to burn, that seems like overkill vs. just writing a better RBAC policy.</p>
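<p>As a sketch of what "a better RBAC policy" can look like, a Role can be scoped down to a single named Secret (names and namespace here are illustrative):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-git-credentials
  namespace: my-app
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["git-credentials"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-git-credentials
  namespace: my-app
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: read-git-credentials
subjects:
  - kind: ServiceAccount
    name: git-clone
    namespace: my-app
</code></pre>
<p>Only pods running as the <code>git-clone</code> ServiceAccount can then read that one Secret.</p>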
| coderanger |
<p>I have a command to run Docker:</p>
<pre><code>docker run --name pre-core -itdp 8086:80 -v /opt/docker/datalook-pre-core:/usr/application app
</code></pre>
<p>In the above command, /opt/docker/datalook-pre-core is a host directory and /usr/application is a container directory. The purpose is to map the container directory to the host directory, so that when the container crashes the directory acts as persistent storage and the data in it is preserved. </p>
<p>When I use Kubernetes to create a pod for this container, how should I write the pod.yaml file?</p>
<p>I guess it is something like the following:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: app-ykt
labels:
app: app-ykt
purpose: ykt_production
spec:
containers:
- name: app-ykt
image: app
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
      volumeMounts:
- name: volumn-app-ykt
mountPath: /usr/application
  volumes:
- name: volumn-app-ykt
????
</code></pre>
<p>I do not know the exact properties I should write in the YAML for my case.</p>
| user84592 | <p>This would be a <code>hostPath</code> volume: <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/</a></p>
<pre><code> volumes:
- name: volumn-app-ykt
hostPath:
# directory location on host
path: /opt/docker/datalook-pre-core
# this field is optional
type: Directory
</code></pre>
<p>However remember that while a container crash won't move things, other events can cause a pod to move to a different host so you need to be prepared to both deal with cold caches and to clean up orphaned caches.</p>
| coderanger |
<p>I would like to have some Kubernetes ingress configuration like this:</p>
<ul>
<li>DEV environment</li>
</ul>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: shop-page-ingress
annotations:
nginx.org/server-snippets: |
location / {
proxy_pass https://luz-shop:8443/shop.php?env=SHOP-DEV
proxy_redirect https://luz-shop:8443/ https://$host;
}
</code></pre>
<ul>
<li>TEST environment</li>
</ul>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: shop-page-ingress
annotations:
nginx.org/server-snippets: |
location / {
proxy_pass https://luz-shop:8443/shop.php?env=SHOP-TEST
proxy_redirect https://luz-shop:8443/ https://$host;
}
</code></pre>
<p>The only thing that differs between the two environments is the query parameter: <code>env=SHOP-DEV</code>.
I would like to organize these overlays with kustomize, but I don't know whether it is possible. Can I have the <strong>BASE</strong> configuration use a variable <code>${ENV_NAME}</code> like below and set its value in the overlay configuration YAML?</p>
<ul>
<li>BASE yaml:</li>
</ul>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: shop-page-ingress
annotations:
nginx.org/server-snippets: |
location / {
proxy_pass https://luz-shop:8443/shop.php?env=${ENV_NAME}
proxy_redirect https://luz-shop:8443/ https://$host;
}
</code></pre>
| bkl | <p>Not directly. Kustomize doesn't handle unstructured replacement. However it is extensible via a plugins system and those can be arbitrary code either in bash or Go (or the newer KRM stuff from kpt). One of the example plugins uses sed to run arbitrary replacements <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/plugin/someteam.example.com/v1/sedtransformer/SedTransformer" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/kustomize/blob/master/plugin/someteam.example.com/v1/sedtransformer/SedTransformer</a></p>
<p>Another option is to use a pipeline like <code>kustomize build | envsubst</code>.</p>
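<p>A minimal sketch of that pipeline (assuming the overlay directories are named <code>overlays/dev</code> and <code>overlays/test</code>):</p>
<pre><code>export ENV_NAME=SHOP-DEV
kustomize build overlays/dev | envsubst '$ENV_NAME' | kubectl apply -f -
</code></pre>
<p>Restricting <code>envsubst</code> to <code>$ENV_NAME</code> avoids it also rewriting the <code>$host</code> variable used in the nginx snippet.</p>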
| coderanger |
<p>When trying to run command:</p>
<pre><code>kubectl get deployments
</code></pre>
<p>I get this message:</p>
<pre><code>C:\Users\win 10\AppData\Local\Google\Cloud SDK\helloworld-gke>kubectl rollout status deployment helloworld-gke
Waiting for deployment "helloworld-gke" rollout to finish: 0 of 1 updated replicas are available...
</code></pre>
<p>and nothing has happened since then. Is this a freeze, or is it just taking time to deploy?</p>
| nabeel mumtaz | <p>You gave an invalid docker image name in your deployment so it can’t succeed.</p>
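<p>A quick way to confirm that (the commands are generic, not specific to your chart):</p>
<pre><code>kubectl get pods
kubectl describe pod <pod-name>
</code></pre>
<p>With a bad image name the pod stays in <code>ErrImagePull</code>/<code>ImagePullBackOff</code> and the Events section of <code>describe</code> shows the failed pull, which is why the rollout never reports ready replicas.</p>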
| coderanger |
<p>I have a pod running Linux that I have let others use. Now I need to save the changes they made. Since I sometimes need to delete/restart the pod, the changes are reverted and a new pod gets created. So I want to save the pod's container as a Docker image and use that image to create a pod.</p>
<p>I have tried <code>kubectl debug node/pool-89899hhdyhd-bygy -it --image=ubuntu</code> and then installing docker and dockerd inside, but they don't have root permission to perform operations. I also installed crictl, which did list the containers, but it has no option to save them.</p>
<p>I also created a privileged Docker image, created a pod from it, then used the command <code>kubectl exec --stdin --tty app-7ff786bc77-d5dhg -- /bin/sh </code> and tried to list the running containers, but nothing was listed. Below is the deployment I used for the privileged Docker container</p>
<pre><code>kind: Deployment
apiVersion: apps/v1
metadata:
name: app
labels:
app: backend-app
backend-app: app
spec:
replicas: 1
selector:
matchLabels:
app: backend-app
task: app
template:
metadata:
labels:
app: backend-app
task: app
spec:
nodeSelector:
kubernetes.io/hostname: pool-58i9au7bq-mgs6d
volumes:
- name: task-pv-storage
hostPath:
path: /run/docker.sock
type: Socket
containers:
- name: app
image: registry.digitalocean.com/my_registry/docker_app@sha256:b95016bd9653631277455466b2f60f5dc027f0963633881b5d9b9e2304c57098
ports:
- containerPort: 80
volumeMounts:
- name: task-pv-storage
mountPath: /var/run/docker.sock
</code></pre>
<p>Is there any way I can achieve this, i.e. take the pod's container and save it as a Docker image? I am using DigitalOcean to run my Kubernetes apps, and I do not have SSH access to the node.</p>
| Anil Kumar H P | <p>This is not a feature of Kubernetes or CRI. Docker does support snapshotting a running container to an image however Kubernetes no longer supports Docker.</p>
| coderanger |
<p>I have K8s deployed on an EC2-based cluster.<br>
There is an application running in the deployment, and I am trying to figure out the manifest files that were used to create the resources.<br>
Deployment, service and ingress files were used to create the app setup.<br></p>
<p>I tried the following command, but I'm not sure it's the correct one, as it also returns a lot of extra data such as <code>lastTransitionTime</code>, <code>lastUpdateTime</code> and <code>status</code>:<br></p>
<pre><code>kubectl get deployment -o yaml
</code></pre>
<p>What is the correct command to view the manifest yaml files of an existing deployed resource?</p>
| Dev1ce | <p>There is no specific way to do that. You should store your source files in source control like any other code. Think of it like decompiling: you can do it, but what you get back is not the same as what you put in. That said, check for the last-applied annotation; if you used <code>kubectl apply</code>, that will have a JSON version of a more original-ish manifest, but again probably with some defaulted fields.</p>
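<p>If the resources were created with <code>kubectl apply</code>, you can pull that annotation back out directly, which is usually the closest you will get to the original manifest:</p>
<pre><code>kubectl -n <namespace> apply view-last-applied deployment/<name>
kubectl -n <namespace> apply view-last-applied ingress/<name>
</code></pre>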
| coderanger |
<p>I am adding liveness probe and readiness probe using Exec probe.<br />
My config looks like this:</p>
<pre><code>readinessProbe:
exec:
command: "/usr/bin/jps -l | grep QueueProcess"
periodSeconds: 10
failureThreshold: 2
successThreshold: 2
</code></pre>
<p>When the above didn't work, I modified it and also tried:</p>
<pre><code>readinessProbe:
exec:
command: ["/usr/bin/jps", "-l", "|", "grep","QueueProcess"]
periodSeconds: 10
failureThreshold: 2
successThreshold: 2
</code></pre>
<p>On running <code>kubectl describe pod</code>, I got the following output:</p>
<pre><code> Normal Created 37s kubelet Created container app1
Normal Started 37s kubelet Started container app1
Warning Unhealthy 6s (x3 over 26s) kubelet Readiness probe failed: invalid argument count
usage: jps [-help]
jps [-q] [-mlvV] [<hostid>]
Definitions:
<hostid>: <hostname>[:<port>]
</code></pre>
<p>I tried another application, where I am running <code>grpcurl</code> to call health check:</p>
<pre><code>readinessProbe:
exec:
command: ["grpcurl", "-plaintext", "-protoset", "/app/HealthCheck.protoset", "localhost:8123", "service.HealthCheckService/GetHealthCheck","|", "jq", ".", "|","grep","200"]
periodSeconds: 10
failureThreshold: 2
successThreshold: 2
</code></pre>
<p>On running <code>kubectl describe pod</code> for this one, I got:</p>
<pre><code> Normal Created 23s kubelet Created container app2
Normal Started 23s kubelet Started container app2
Warning Unhealthy 3s (x2 over 13s) kubelet Readiness probe failed: Too many arguments.
Try 'grpcurl -help' for more details.
</code></pre>
<p>Both of these are failing.
The question: how can I write an <code>Exec probe</code> that has a pipe (or multiple pipes) in it?</p>
<p>I am using EKS v1.18.<br />
(Both the above configs belong to different applications.)</p>
| kadamb | <p>You need to actually use a shell, since that's a shell feature. <code>sh -c "foo | bar"</code> or whatever. Also remember that all the relevant commands need to be available in the target image.</p>
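<p>A sketch of the first probe rewritten that way (with the same caveat: <code>jps</code>, <code>grep</code> and a shell all have to exist in the image):</p>
<pre><code>readinessProbe:
  exec:
    command:
      - /bin/sh
      - '-c'
      - "/usr/bin/jps -l | grep QueueProcess"
  periodSeconds: 10
  failureThreshold: 2
  successThreshold: 2
</code></pre>
<p>The probe passes when the pipeline exits 0, i.e. when grep finds a matching process.</p>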
| coderanger |
<p>Assume there is a system that accepts millions of simultaneous WebSocket connections from client applications. I was wondering if there is a way to route WebSocket connections to a specific instance behind a load balancer (or IP/Domain/etc) if clients provide some form of metadata, such as hash key, instance name, etc.</p>
<p>For instance, let's say each WebSocket client of the above system will always belong to a group (e.g. max group size of 100), and it will attempt to communicate with 99 other clients using the above system as a message gateway.</p>
<p>So the system's responsibility is to relay messages sent from clients in a group to other 99 clients in the same group. Clients won't ever need to communicate with other clients who belong to different groups.</p>
<p>Of course, one way to tackle this problem is to use Pubsub system, such that regardless of which instance clients are connected to, the server can simply publish the message to the Pubsub system with a group identifier and other clients can subscribe to the messages with a group identifier.</p>
<p>However, the Pubsub system can potentially encounter scaling challenges, excessive resource usage (single message getting published to thousands of instances), management overhead, latency increase, cost, and etc.</p>
<p>If it is possible to guarantee that WebSocket clients in a group will all be connected to the instance behind LB, we can skip using the Pubsub system and make things simpler, lower latency, and etc.</p>
<p>Would this be something that is possible to do, and if it isn't, what would be the best option?</p>
<p>(I am using Kubernetes in one of the cloud service providers if that matters.)</p>
| user482594 | <p>Routing in HTTP is generally based on the hostname and/or URL path. Sometimes to a lesser degree on other headers like cookies. But in this case it would mean that each group should have it's own unique URL.</p>
<p>But that part is easy, what I think you're really asking is "given arbitrary URLs, how can I get consistent routing?" which is much, much more complicated. The base concept is "consistent hashing", you hash the URL and use that to pick which endpoint to talk to. But then how to do deal with adding or removing replicas without scrambling the mapping entirely. That usually means using a hash ring and assigning portions of the hash space to specific replicas. Unfortunately this is the point where off-the-shelf tools aren't enough. These kinds of systems require deep knowledge of your protocol and system specifics so you'll probably need to rig this up yourself.</p>
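<p>For completeness, ingress-nginx does expose a consistent-hash knob via the <code>nginx.ingress.kubernetes.io/upstream-hash-by</code> annotation, e.g. hashing on a query parameter; treat this as a sketch of the idea rather than an answer to the rebalancing concerns above (the names here are placeholders):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ws-gateway
  annotations:
    nginx.ingress.kubernetes.io/upstream-hash-by: "$arg_group"
spec:
  rules:
    - host: ws.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ws-backend
                port:
                  number: 80
</code></pre>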
| coderanger |
<p>I've noticed that even <em>failed</em> pods reply to ICMP pings (pods in a not-Ready state). Is there a way to configure the CNI (or Kubernetes) so that failed pods don't generate ICMP replies?</p>
<pre><code>#kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
multitool-1 1/1 Running 0 20m 172.17.0.3 minikube <none> <none>
multitool-2 0/1 ImagePullBackOff 0 20m 172.17.0.4 minikube <none> <none>
multitool-3 1/1 Running 0 3m9s 172.17.0.5 minikube <none> <none>
#kubectl exec multitool-3 -it bash
bash-5.0# ping 172.17.0.4
PING 172.17.0.4 (172.17.0.4) 56(84) bytes of data.
64 bytes from 172.17.0.4: icmp_seq=1 ttl=64 time=0.048 ms
64 bytes from 172.17.0.4: icmp_seq=2 ttl=64 time=0.107 ms
^C
--- 172.17.0.4 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1041ms
rtt min/avg/max/mdev = 0.048/0.077/0.107/0.029 ms
bash-5.0#
</code></pre>
| Andy | <p>No, that's not how ICMP works. The kernel handles those, it only checks if the networking interface is operational, which it is regardless of how broken the container process might be.</p>
| coderanger |
<p>I have 3 replicas of the same application running in Kubernetes, and the application exposes an endpoint. Hitting the endpoint sets a boolean variable to true, which is used elsewhere in my application. The issue is that when the endpoint is hit, the variable is updated in only one of the replicas. How can I make the change apply to all replicas just by hitting one endpoint?</p>
| Gill Varghese Sajan | <p>You need to store your data in a shared database of some kind, not locally in memory. If all you need is a temporary flag, Redis would be a popular choice, or for more durable stuff Postgres is the gold standard. But there's a wide and wonderful world of databases out there, explore which match your use case.</p>
| coderanger |
<p>I've specified that I want to use 4 vCPUs in my resource request.</p>
<p>This doesn't line up with what I'm seeing when I describe the node:</p>
<pre><code> Resource Requests Limits
-------- -------- ------
cpu 410m (10%) 100m (2%)
memory 440Mi (2%) 640Mi (4%)
</code></pre>
<p>I didn't specify limits. It looks like the limit is below the requested resources; is this the case? If so, do I need to specify a limit?</p>
<p>Here is my manifest:</p>
<pre><code>kind: Workflow
metadata:
name: ensembl-orthologs
namespace: argo-events
generateName: ensembl-orthologs-
spec:
entrypoint: ensembl-orthologs
templates:
- name: ensembl-orthologs
nodeSelector:
instanceType: t3.xlarge
resources:
requests:
memory: '16G'
cpu: '4'
container:
image: REDACTED
imagePullPolicy: Always
volumeMounts:
- name: REDACTED
mountPath: REDACTED
</code></pre>
| Happy Machine | <p>If you don't specify a limit on CPU or RAM then that resource isn't limited. The percentages/totals on limits are mostly just for reference for humans; limits can (and usually do) end up higher than requests when set, but setting CPU limits can be counterproductive (I do recommend setting a memory limit to improve system stability, even if it's very high).</p>
<p>Also, you are setting things on a Workflow object that doesn't appear to be running, so those pods wouldn't count towards the current total.</p>
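<p>For reference, a sketch of how that could look on the template from the question if you do want both (the limit value here is arbitrary):</p>
<pre><code>resources:
  requests:
    memory: '16G'
    cpu: '4'
  limits:
    memory: '20G'
</code></pre>
<p>leaving the CPU limit off, per the note above about CPU limits often being counterproductive.</p>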
| coderanger |
<p>Can it be that a process inside a container used more memory than the container itself?</p>
<p>I have a pod with a single container that, based on Stackdriver graphs, uses 1.6G of memory at its peak.
At the same time, I saw an error on the container, and while looking for the root cause I saw an oom-killer message on the VM itself indicating that one of the processes inside the container was killed due to usage of 2.2G (RSS).</p>
<p>How can that be?</p>
<pre><code>Memory cgroup out of memory: Killed process 2076205 (chrome) total-vm:4718012kB, anon-rss:2190464kB, file-rss:102640kB, shmem-rss:0kB, UID:1001 pgtables:5196kB oom_score_adj:932
</code></pre>
<p>Thanks!</p>
| user14242404 | <p>Two pieces. First, what you see in the metrics is probably the working set size, which does not include some buffers, while the oom-killer reports RSS, which does. But more importantly, the metrics data is sampled, usually every 30 seconds, so if memory usage spiked suddenly, or the process simply tried to allocate one huge buffer, it could be killed before the spike ever showed up in the graphs.</p>
| coderanger |
<p>I want to add some flags to change sync periods. Can I do this with minikube and kubectl, or will I have to install and use kubeadm for this kind of initialization? I referred to this <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/" rel="nofollow noreferrer">link</a>.</p>
<p>I created and applied the YAML file, but there was an error stating that</p>
<blockquote>
<p>error: unable to recognize "sync.yaml": no matches for kind "ClusterConfiguration" in version "kubeadm.k8s.io/v1beta2"</p>
</blockquote>
<p>The sync.yaml I used to change the flag (with minikube):</p>
<pre><code>apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.0
controllerManager:
extraArgs:
horizontal-pod-autoscaler-sync-period: "60"
</code></pre>
| Museb Momin | <p>Minikube and kubeadm are separate tools, but you can pass custom CLI options to minikube control plane components as detailed here <a href="https://minikube.sigs.k8s.io/docs/handbook/config/#modifying-kubernetes-defaults" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/handbook/config/#modifying-kubernetes-defaults</a></p>
<pre><code>minikube start --extra-config=controller-manager.foo=bar
</code></pre>
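<p>For the flag from the question that would be roughly the following (assuming the usual kube-controller-manager flag name, which takes a duration):</p>
<pre><code>minikube start --extra-config=controller-manager.horizontal-pod-autoscaler-sync-period=60s
</code></pre>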
| coderanger |
<p>I followed <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes" rel="nofollow noreferrer">this tutorial</a> to serve a basic application using the NGINX Ingress Controller and cert-manager with Let's Encrypt.</p>
<p>I am able to visit the website, but the SSL certificate is broken, saying <code>Issued By: (STAGING) Artificial Apricot R3</code>.</p>
<p>This is my <code>ClusterIssuer</code>:</p>
<pre><code>apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
name: letsencrypt-issuer
namespace: cert-manager
spec:
acme:
server: https://acme-staging-v02.api.letsencrypt.org/directory
email: [email protected]
privateKeySecretRef:
name: letsencrypt-issuer
solvers:
- http01:
ingress:
class: nginx
</code></pre>
<p>And the <code>Ingress</code>:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-app-ingress-dev
namespace: my-app
annotations:
cert-manager.io/cluster-issuer: letsencrypt-issuer
spec:
tls:
- secretName: echo-tls
hosts:
- my-app.example.com
rules:
- host: my-app.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: my-app-dev
port:
number: 80
</code></pre>
| chrispytoes | <p>LetsEncrypt staging is for testing, and does not issue certificates that are trusted by browsers. Use the production LE URL instead <code>https://acme-v02.api.letsencrypt.org/directory</code></p>
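<p>Concretely, the only change to the <code>ClusterIssuer</code> from the question is the server URL (using a separate name and private key secret for the production issuer is a common convention, but the URL is the part that matters):</p>
<pre><code>apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
</code></pre>
<p>Then point the Ingress annotation <code>cert-manager.io/cluster-issuer</code> at the new issuer name; you may also need to delete the old <code>echo-tls</code> secret so the certificate gets re-issued.</p>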
| coderanger |
<p>I changed a ConfigMap in my Kubernetes cluster (v1.15.2), and now I want the new config to apply to all of my deployments in a namespace. What is the best practice for this? I tried doing it like this:</p>
<pre><code>kubectl rollout restart deployment soa-report-consumer
</code></pre>
<p>but my cluster has many deployments. Should I write a shell script to complete this task, or is there a simpler way?</p>
| Dolphin | <p>The usual fix for this is to use some automation from a tool like Kustomize or Helm so that the deployments automatically update when the config data changes.</p>
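<p>A minimal sketch with Kustomize (assuming your deployment manifests are already listed in the kustomization): generated ConfigMaps get a content-hash suffix, and every Deployment that references them is rewritten to the new name, which triggers a rollout whenever the data changes:</p>
<pre><code># kustomization.yaml
resources:
  - deployment.yaml
configMapGenerator:
  - name: soa-report-config
    files:
      - application.properties
</code></pre>
<p>Deployments reference the ConfigMap by its base name (<code>soa-report-config</code>) and Kustomize substitutes the hashed name on each <code>kustomize build | kubectl apply -f -</code>.</p>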
| coderanger |
<p>Is it possible to block egress network access from a sidecar container?
I'm trying to implement the capability to run some untrusted code in a sidecar container, exposed via another trusted container in the same pod that has full network access.
It seems two containers in a pod can't have different network policies. Is there some way to achieve similar functionality?
As a side note, I do control the sidecar image, which provides the runtime for the untrusted code.</p>
| Sumit | <p>You are correct, all containers in a pod share the same networking so you can't easily differentiate it. In general Kubernetes is not suitable for running code you assume to be actively malicious. You can build such a system around Kubernetes, but K8s itself is not nearly enough.</p>
| coderanger |
<p>Software versions are given below.</p>
<p>In DigitalOcean's managed Kubernetes, I have a running nginx-ingress with two containers routed properly with my DNS and a jetstack cert-manager. One container is running a test app and the other my API. They work great. I am trying to connect my Angular client in a third container, but the Ingress routing is not working. It returns a 502 Bad Gateway. After trying many things, I now think the problem is related to a specific nginx-ingress configuration that is needed for SPAs (Single Page Applications).</p>
<p>NanoDano on <a href="https://www.devdungeon.com/content/deploy-angular-apps-nginx" rel="nofollow noreferrer">DevDungeon</a> remarks, "Create a config in /etc/nginx/conf.d/. ... The most important part for Angular in particular is to include the try_files line which will ensure that even if someone visits a URL directly, the server will rewrite it properly so the Angular app behaves properly." This "try_files" configuration is also mentioned by Andre Dublin on his <a href="http://andredublin.github.io/javascript/2014/08/18/nginx-and-angularjs-url-routing.html" rel="nofollow noreferrer">GitHub</a> page, and by Niklas Heidloff on his <a href="http://heidloff.net/article/angular-react-vue-kubernetes" rel="nofollow noreferrer">website</a>.</p>
<p>In examples I've found on how to do this, the explanation is given from the perspective that you are doing a multistage build to combine your application, ingress, and the ingress configuration into one docker container via Dockerfile, such as this example by Lukas Marx on <a href="https://malcoded.com/posts/angular-docker/" rel="nofollow noreferrer">Malcoded</a>. Or that you manually edit the configurations after ingress is started, which is suggested in this unresolved <a href="https://stackoverflow.com/questions/45277183/how-to-config-nginx-to-run-angular4-application">Stackflow</a>.</p>
<p>In my situation, I already have Nginx-Ingress and I only need to dynamically add a configuration to properly route the Angular SPA. Using kubectl port-forward, I have confirmed the Angular app is served both from its pod and from its cluster IP service. It is working. When I attempt to connect to the app via Ingress, I get a 502 Bad Gateway error, which I've discussed on this <a href="https://stackoverflow.com/questions/67054836/502-bad-gateway-error-with-angular-app-in-kubernetes">StackOverflow</a>.</p>
<p>A configuration can be added to an existing Nginx-Ingress using a ConfigMap as described on this <a href="https://stackoverflow.com/questions/64178370/custom-nginx-conf-from-configmap-in-kubernetes">StackOverflow</a>. My Nginx-Ingress was created and configured following this example from this DigitalOcean <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes" rel="nofollow noreferrer">tutorial</a>, see Step 2, which installs with the following:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.34.1/deploy/static/provider/do/deploy.yaml
</code></pre>
<p>I attempted to add the "try_files" configuration. First, I created this ConfigMap:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-config
data:
nginx.conf: |
server {
listen 80;
server_name pvgd.markwilx.tech;
index index.html index.htm;
root /home/node/app;
location / {
try_files $uri $uri/ /index.html =404;
}
include /etc/nginx/extra-conf.d/*.conf;
}
</code></pre>
<p>Second, I extracted Ingress Deployment from the deploy.yaml <a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.34.1/deploy/static/provider/do/deploy.yaml" rel="nofollow noreferrer">file</a> in the kubectl command above and modified it (six lines are added toward the bottom, and marked by comment):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
helm.sh/chart: ingress-nginx-3.27.0
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.45.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
revisionHistoryLimit: 10
minReadySeconds: 0
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
spec:
dnsPolicy: ClusterFirst
containers:
- name: controller
image: k8s.gcr.io/ingress-nginx/controller:v0.45.0@sha256:c4390c53f348c3bd4e60a5dd6a11c35799ae78c49388090140b9d72ccede1755
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
args:
- /nginx-ingress-controller
- --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
- --election-id=ingress-controller-leader
- --ingress-class=nginx
- --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
- --validating-webhook=:8443
- --validating-webhook-certificate=/usr/local/certificates/cert
- --validating-webhook-key=/usr/local/certificates/key
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
runAsUser: 101
allowPrivilegeEscalation: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: LD_PRELOAD
value: /usr/local/lib/libmimalloc.so
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
- name: webhook
containerPort: 8443
protocol: TCP
volumeMounts:
- name: webhook-cert
mountPath: /usr/local/certificates/
readOnly: true
- name: nginx-config # ADDED THIS CONFIGURATION
mountPath: /etc/nginx/nginx.conf # ADDED THIS CONFIGURATION
subPath: nginx.conf # ADDED THIS CONFIGURATION
resources:
requests:
cpu: 100m
memory: 90Mi
nodeSelector:
kubernetes.io/os: linux
serviceAccountName: ingress-nginx
terminationGracePeriodSeconds: 300
volumes:
- name: webhook-cert
secret:
secretName: ingress-nginx-admission
- name: nginx-config # ADDED THIS CONFIGURATION
configMap: # ADDED THIS CONFIGURATION
name: nginx-config # ADDED THIS CONFIGURATION
</code></pre>
<p>Third, I reapplied this code executing it with:</p>
<pre><code>kubectl apply -f ingress-deployment-mod.yaml
</code></pre>
<p>This does not appear to be working. Indeed, it seems to have broken everything else. If you have any suggestions as to what I might be doing wrong to create a route between the Internet and my Angular app via Nginx-Ingress, I'd appreciate your insights.</p>
<p>Thanks,
Mark</p>
<p><strong>Versions</strong></p>
<ul>
<li>node.js 10.19.0</li>
<li>npm 7.7.6</li>
<li>yarn 1.22.10</li>
<li>ng 11.2.2</li>
<li>docker 19.03.8</li>
<li>kubectl 1.20.1</li>
<li>doctl 1.57.0</li>
<li>kubernetes 1.20.2-do.0</li>
<li>helm 3.5.3</li>
<li>nginx controller 0.45.0</li>
<li>jetstack 1.3.0</li>
</ul>
| MarkWilx | <p>You can't use ingress-nginx to serve actual files like that. Those files live in some other container you've built that has your application code in it. The Ingress system in Kubernetes is only for proxying and routing HTTP connections. The actual web serving, even if that also happens to use Nginx, has to be deployed separately.</p>
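<p>A rough sketch of that separation (the image name is a placeholder for your own Angular build image served by nginx): the <code>try_files</code> config lives in the app's own container, and the Ingress just routes to its Service:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: angular-nginx-conf
data:
  default.conf: |
    server {
      listen 80;
      root /usr/share/nginx/html;
      index index.html;
      location / {
        try_files $uri $uri/ /index.html;
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: angular-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: angular-client
  template:
    metadata:
      labels:
        app: angular-client
    spec:
      containers:
        - name: web
          image: registry.example.com/angular-client:latest  # your Angular build in an nginx image
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-conf
              mountPath: /etc/nginx/conf.d/default.conf
              subPath: default.conf
      volumes:
        - name: nginx-conf
          configMap:
            name: angular-nginx-conf
---
apiVersion: v1
kind: Service
metadata:
  name: angular-client
spec:
  selector:
    app: angular-client
  ports:
    - port: 80
      targetPort: 80
</code></pre>
<p>Your existing Ingress then routes the client's host/path to <code>angular-client:80</code>, and the ingress-nginx controller Deployment from the question is left unmodified.</p>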
| coderanger |
<p>I am planning to deploy Keycloak on my K8s cluster, but for the moment not in cluster mode as described at <a href="https://www.keycloak.org/2019/04/keycloak-cluster-setup.html" rel="nofollow noreferrer">https://www.keycloak.org/2019/04/keycloak-cluster-setup.html</a>. PostgreSQL will be used as its data store. </p>
<p>My plan is:</p>
<p><a href="https://i.stack.imgur.com/2UAnk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2UAnk.png" alt="enter image description here"></a> </p>
<p>Create a pod with Keycloak and PostgreSQL inside. Deployment replicas will be 1, because for the moment I do not need clustering. </p>
<p>I know that it is recommended to run one container per pod, but for my purpose would it be acceptable to run 2 containers in one pod?</p>
| softshipper | <p>No, you should only run things in the same pod if there is no way to avoid it. In this case the alternative is to run separate pods, so you should do that.</p>
| coderanger |
<p>I have a Kubernetes cluster with an undefined series of services, and what I want to do is serve each service on an endpoint, with the ability to add new services at any time, with them still being available at an endpoint.</p>
<p>I'm looking for a way to set, in my ingress, a wildcard on <code>serviceName</code>, so that <code>/xx</code> will be routed to service <code>xx</code>, <code>/yy</code> to service <code>yy</code>, etc.
Another solution that I could also use would be matching <code>http://xx.myurl.com</code> to service <code>xx</code>.</p>
<p>Is this something doable with Kubernetes?</p>
<p>I imagine something of the like</p>
<pre class="lang-yaml prettyprint-override"><code>- path: /(.*)
backend:
serviceName: $1
servicePort: 80
</code></pre>
<p>Thanks,</p>
<p>Colin</p>
| Colin FAY | <p>This is not something the Ingress system supports. Some other tools may; for example, you can do this pretty easily with a static Nginx config.</p>
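<p>For illustration, a static nginx config that maps the first path segment to a Service of the same name could look roughly like this (heavily simplified; it assumes everything lives in the <code>default</code> namespace, listens on port 80, and that cluster DNS is reachable at the standard kube-dns Service):</p>
<pre><code>server {
  listen 80;
  resolver kube-dns.kube-system.svc.cluster.local valid=10s;

  location ~ ^/(?<svc>[^/]+) {
    # Forwards the original URI (including the /xx prefix) to the Service named xx.
    proxy_pass http://$svc.default.svc.cluster.local:80;
  }
}
</code></pre>
<p>You would run this nginx as its own Deployment behind a Service/Ingress; it is not something the Ingress resource itself can express.</p>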
| coderanger |
<p>I am trying to purge images from the local Kubernetes image cache on a set cadence. Previously you could set up some volumeMounts on a DaemonSet and talk to the Docker runtime directly.</p>
<p>The latest runtime is based on containerd, but I can't seem to connect using containerd.sock: when I run <code>ctr image ls</code> or <code>nerdctl</code> it shows no running containers or images on the node. It also returns no errors.</p>
<p>Is there a different method for manually purging from the containerd runtime running on a daemonSet?</p>
| Travis Sharp | <p>Answered in comments, most containerd commands are built for the Docker integration which uses the default containerd namespace (note, nothing to do with Linux namespaces, this is administrative namespacing inside containerd). Most commands have an option to set the ns being used but <code>crictl</code> is already set up for the CRI namespace that Kubernetes uses (because it's also a CRI client).</p>
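<p>For reference, the commands that line up with that (assuming containerd's standard <code>k8s.io</code> namespace for Kubernetes and a reasonably recent crictl):</p>
<pre><code># same images, two views
ctr -n k8s.io images ls
crictl images

# remove unused images (newer crictl releases)
crictl rmi --prune
</code></pre>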
| coderanger |
<p><a href="https://i.stack.imgur.com/hwgkG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hwgkG.png" alt="enter image description here"></a></p>
<ol>
<li><p>A Web API receives a customer's credit card request data at an endpoint.</p></li>
<li><p>The endpoint sends a message with this data to the Kafka.</p></li>
<li><p>Several pods/containers will be connected to Kafka in this topic for processing each request in parallel. The application requires high asynchronous processing power.</p></li>
<li><p>After sending the request, the frontend will display a progress bar and will wait. It needs a response as soon as the process is finished.</p></li>
</ol>
<p><strong>The question</strong></p>
<blockquote>
<p>How to return in the same call to this endpoint the result of a
processing that will be done in another web API project?
(asynchronous)</p>
</blockquote>
<p><strong>What I thought</strong></p>
<ul>
<li><p>Creating a topic in Kafka to be notified of the completion of processing, and subscribing to it in the endpoint after sending the CreditCardRequest message to Kafka for processing. </p></li>
<li><p>Using a query on Mongo every 3~4 seconds (polling) to check whether the record has been inserted by the Worker / Pod / Processing Container. (URRGGH)</p></li>
<li><p>Creating another endpoint for the frontend to query the operation status; this endpoint will also query Mongo to check the current status of the process.</p></li>
</ul>
<p><strong>I seriously wonder whether there is already a framework/standard used for these situations.</strong></p>
| Vinicius Gonçalves | <p>Yes, there are frameworks that handle this.
From a .NET perspective, I have used nServiceBus to do something similar in the past (coordinating long-running processes). </p>
<p>They have a concept called Sagas <a href="https://docs.particular.net/nservicebus/sagas/" rel="nofollow noreferrer">https://docs.particular.net/nservicebus/sagas/</a>
A saga will wait for all messages that are required to finish processing before notifying the next step of the process to continue.</p>
<p>If the framework itself is not useful, hopefully the patterns are, and you can work out how to implement them in a Kafka/Mongo environment.</p>
| Bluephlame |
<p>I am trying to access the Kubernetes API server from a pod.</p>
<p>Here is my pod configuration</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: test
spec:
schedule: "*/5 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: test
image: test:v5
env:
imagePullPolicy: IfNotPresent
command: ['python3']
args: ['test.py']
restartPolicy: OnFailure
</code></pre>
<p>And here is my kubernetes-client python code inside test.py</p>
<pre><code>from kubernetes import client, config
# Configs can be set in Configuration class directly or using helper utility
config.load_kube_config()
v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
</code></pre>
<p>But I am getting this error:</p>
<pre><code>kubernetes.config.config_exception.ConfigException: Invalid kube-config file. No configuration found.
</code></pre>
| karlos | <p>When operating in-cluster you want <code>load_incluster_config()</code> instead.</p>
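<p>A minimal sketch of the change (the pod's ServiceAccount also needs RBAC permission to list pods for this example to work):</p>
<pre><code>from kubernetes import client, config

# Uses the ServiceAccount token and CA that are mounted into every pod
config.load_incluster_config()

v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(pod.status.pod_ip, pod.metadata.namespace, pod.metadata.name)
</code></pre>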
| coderanger |
<p>I have a case where I want to set up an alert where at least one value of the label is distinct.</p>
<p>For example, take a Kubernetes cluster xyz (with 20 nodes) and the metric <code>test_metric{cluster_name="xyz",os="ubuntu"}</code>. I want to set up an alert if any of these 20 nodes has a different "os" value.</p>
<p>Basically, the idea is to get an alert when the os's value is not the same across all nodes in a cluster.</p>
<p>At the moment I am testing a very simple rule which I think is not correct:</p>
<pre><code>count(test_metric{cluster_name="xyz",os!=""} != count(test_metric{cluster_name="xyz",os!=""})
</code></pre>
| Shantanu Deshpande | <p>Nested counts is the way to handle this:</p>
<pre><code>count by (cluster_name) (
count by (os, cluster_name)(test_metric)
) != 1
</code></pre>
| brian-brazil |
<p>When installing Grafana using the Helm <a href="https://github.com/grafana/helm-charts" rel="nofollow noreferrer">charts</a>, the deployment goes well and the Grafana UI is up. I needed to add an existing persistent volume, so I ran the command below:</p>
<pre><code>helm install grafana grafana/grafana -n prometheus --set persistence.enabled=true --set persistence.existingClaim=grafana-pvc
</code></pre>
<p>The init container crashes, with the below logs:</p>
<pre><code>kubectl logs grafana-847b88556f-gjr8b -n prometheus -c init-chown-data
chown: /var/lib/grafana: Operation not permitted
chown: /var/lib/grafana: Operation not permitted
</code></pre>
<p>On checking the deployment YAML, I found this section:</p>
<pre><code>initContainers:
- command:
- chown
- -R
- 472:472
- /var/lib/grafana
image: busybox:1.31.1
imagePullPolicy: IfNotPresent
name: init-chown-data
resources: {}
securityContext:
runAsNonRoot: false
runAsUser: 0
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/lib/grafana
name: storage
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 472
runAsGroup: 472
runAsUser: 472
serviceAccount: grafana
serviceAccountName: grafana
</code></pre>
<p>Why is the operation failing even though it's running with <code>runAsUser: 0</code>? The PVC has <code>access: ReadWriteMany</code>. Any workaround, or am I missing something?</p>
<p>Thanks !!</p>
| Sanjay M. P. | <p>NFS turns on <code>root_squash</code> mode by default which functionally disables uid 0 on clients as a superuser (maps those requests to some other UID/GID, usually 65534). You can disable this in your mount options, or use something other than NFS. I would recommend the latter, NFS is bad.</p>
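<p>If you control the NFS server, the export-side change looks roughly like this (the path and CIDR are placeholders for your environment):</p>
<pre><code># /etc/exports on the NFS server
/srv/nfs/grafana  10.0.0.0/8(rw,sync,no_subtree_check,no_root_squash)
</code></pre>
<p>followed by <code>exportfs -ra</code> to reload the exports. Alternatively, the Grafana chart exposes an <code>initChownData.enabled</code> value (check your chart version's values) that you can set to <code>false</code> to skip the chown init container entirely.</p>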
| coderanger |
<p>I have a simple demo Flask application that is deployed to Kubernetes using minikube. I am able to access the app through the Service, but I am not able to connect through the Ingress.</p>
<p><strong>Services.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: services-app-service
spec:
selector:
app: services-app
type: ClusterIP
ports:
- protocol: TCP
port: 5000 # External connection
targetPort: 5000 # Internal connection
</code></pre>
<hr />
<pre><code>D:Path>kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
db ClusterIP None <none> 3306/TCP 120m
kubernetes ClusterIP 10.20.30.1 <none> 443/TCP 3h38m
services-app-service ClusterIP 10.20.30.40 <none> 5000/TCP 18m
</code></pre>
<p><strong>I am able to access the app using minikube.</strong></p>
<pre><code>D:Path>minikube service services-app-service --url
* service default/services-app-service has no node port
* Starting tunnel for service services-app-service.
|-----------|----------------------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|----------------------|-------------|------------------------|
| default | services-app-service | | http://127.0.0.1:50759 |
|-----------|----------------------|-------------|------------------------|
http://127.0.0.1:50759
! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
</code></pre>
<p><strong>Ingress.yaml</strong></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: services-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: mydemo.info
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: services-app-service
port:
number: 5000
</code></pre>
<hr />
<pre><code>D:Path>kubectl get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
services-ingress <none> mydemo.info 192.168.40.1 80 15m
</code></pre>
<p>Is there any additional configuration required to access the app via ingress?</p>
| Rakesh | <p>The issue is that you need to access it with a Host header of <code>mydemo.info</code> for that Ingress spec to work. You also need to confirm you have an Ingress Controller installed, usually ingress-nginx for new users, but there are many options. Then you would look for the Ingress Controller's NodePort or LoadBalancer service and access through that.</p>
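<p>A rough way to verify this on minikube, assuming the address shown by <code>kubectl get ing</code> is reachable from your machine (with the Docker driver on Windows you may additionally need <code>minikube tunnel</code>):</p>
<pre><code># install an Ingress Controller if you have not already
minikube addons enable ingress

# either send the Host header explicitly...
curl -H "Host: mydemo.info" http://192.168.40.1/

# ...or add "192.168.40.1 mydemo.info" to your hosts file and browse to http://mydemo.info/
</code></pre>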
| coderanger |
<p>We just started seeing an odd behavior using the Quarkus Kubernetes Config Extension and overriding properties in the application.yml</p>
<p>We have started to use configmap environment variables to override application.yml properties like this:</p>
<pre><code>QUARKUS_OIDC_AUTH_SERVER_URL: "https://sso.localhost/auth/realms/test"
</code></pre>
<p>The expectation is that it overrides any setting in the application.yml and takes precedence but it did not.</p>
<p>Instead we did this in the application.yml and it works.</p>
<pre><code>quarkus:
oidc:
auth-server-url: ${QUARKUS_OIDC_AUTH_SERVER_URL:https://localhost:8543/auth/realms/test}
</code></pre>
<p>We are seeing this across any environment variable in the configmap that is meant to override an existing application.yml property. Outside of a native build, in our CI for example, we use this same tactic to override properties and it works.</p>
<p>Another test we tried was to directly change the <code>QUARKUS_LOG_LEVEL</code> to something bad. This showed no changes after the pod depending on the config was restarted. Doing the same to a property that depended on an environment variable ( ${MY_LOG_LEVEL:debug} ) broke as expected.</p>
<p>Have there been any changes recently that would/should affect the precedence of the properties when using the Quarkus Kubernetes Config extension?</p>
| KJQ | <p>This ended up being my own fault, and it was pure luck that it had worked as it did before (because I was mostly defining each property in the profiles).</p>
<p>I was able to leverage the new profile.parent setting to build a much cleaner hierarchy of profiles, and realized (at least it seems that way) that the Kubernetes Config extension only deals with application properties, not with setting environment variables. Once I included those as configmaps and secrets in my Kubernetes configuration, everything worked as expected.</p>
<pre><code>kubernetes:
env:
configmaps:
- my-configmap
secrets:
- my-secrets
kubernetes-config:
config-maps:
- my-configmap
- my-service
secrets:
~:
- my-secrets
</code></pre>
| KJQ |
<p>I have an operator with 2 controllers in it. ControllerA watches for CRD_A, and if it finds a CR of type A (we can have only one CR of this type in the cluster) it creates podA and sets the CR as the owner of podA.
ControllerB watches for CRD_B; if it finds a CR of type B, it checks whether podA exists and sets up the pod by sending an HTTP request to podA with the info from the CR.
This is a simple overview of how the operator works.</p>
<p>The problem is that when podA is deleted (by me, or because Kubernetes reschedules it) the reconcile of controllerA is triggered, because CR_A is the owner of podA, and it creates a new podA. But I also want controllerB to be reconciled, because it has to set up podA again, and currently it is not reconciled since there is no connection between podA and controllerB.</p>
<p>What is the right way to trigger a reconcile of controllerB when such an event happens? I cannot set two CRs as owners of the pod. I think controllerA should somehow send a reconcile event to controllerB, but I don't know how that can happen, and is this even the right way?</p>
| aliench0 | <p>I think you were asking in Slack about this last week but the rough answer is "use a watch map". <a href="https://github.com/coderanger/migrations-operator/blob/088a3b832f0acab4bfe02c03a4404628c5ddfd97/components/migrations.go#L63-L91" rel="nofollow noreferrer">https://github.com/coderanger/migrations-operator/blob/088a3b832f0acab4bfe02c03a4404628c5ddfd97/components/migrations.go#L63-L91</a> is an example of one. It gets the event from the low level watch (an instance of A) and then you write some code to match that back to which root object to reconcile (an instance of B).</p>
| coderanger |
<p>Is it possible to add a custom DNS entry (type A) inside Kubernetes 1.19? I'd like to be able to do:</p>
<pre><code>kubectl run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools
host custom-dns-entry.example.com
custom-dns-entry.example.com has address 10.0.0.72
</code></pre>
<p>with <code>custom-dns-entry.example.com</code> not being registered inside my upstream DNS server (and also not having a corresponding k8s service at all).
The following example <a href="https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/" rel="nofollow noreferrer">https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/</a> seems to provide a solution, but it is a bit complex and may be deprecated. Is there a simpler way to do it, for example with a few <code>kubectl</code> commands?
<p><em>The reason I need this is that I run my workload on <code>kind</code>, so my ingress DNS record is not registered in the upstream DNS, and some pods require access to this ingress DNS record from inside the cluster (maybe to configure a javascript client served by the pods, which will then access the ingress DNS record from outside...). However I cannot modify the workload code as I am not maintaining it, so adding this custom DNS entry seems to be a reasonable solution.</em></p>
| Fabrice Jammes | <p>CoreDNS would be the place to do this. You can also do similar-ish things using ExternalName-type Services but that wouldn't give you full control over the hostname (it would be a Service name like anything else).</p>
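<p>A minimal sketch of that: edit the <code>coredns</code> ConfigMap in <code>kube-system</code> (<code>kubectl -n kube-system edit configmap coredns</code>) and add a <code>hosts</code> block to the Corefile, before the <code>forward</code> plugin:</p>
<pre><code>    hosts {
        10.0.0.72 custom-dns-entry.example.com
        fallthrough
    }
</code></pre>
<p>The <code>fallthrough</code> keeps normal resolution working for every other name. If the <code>reload</code> plugin is enabled (it is in most default deployments) CoreDNS picks the change up after a short delay; otherwise restart the coredns pods.</p>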
| coderanger |
<p>I've installed GitLab runner on a kubernetes cluster under a namespace <code>gitlab-runner</code>. Like so</p>
<pre><code># cat <<EOF | kubectl create -f -
{
"apiVersion": "v1",
"kind": "Namespace",
"metadata": {
"name": "gitlab-runner",
"labels": {
"name": "gitlab-runner"
}
}
}
# helm repo add gitlab https://charts.gitlab.io
# cat <<EOF|helm install --namespace gitlab-runner gitlab-runner -f - gitlab/gitlab-runner
gitlabUrl: https://gitlab.mycompany.com
runnerRegistrationToken: "c................Z"
</code></pre>
<p>The GitLab runner properly registers with the GitLab project but all jobs fail. </p>
<p>A quick look into the GitLab runner logs tells me that the service account used by the GitLab runner lack the proper permissions:</p>
<pre><code># kubectl logs --namespace gitlabrunner gitlab-runner-gitlab-runner-xxxxxxxxx
ERROR: Job failed (system failure): pods is forbidden: User "system:serviceaccount:gitlabrunner:default" cannot create resource "pods" in API group "" in the namespace "gitlab-runner" duration=42.095493ms job=37482 project=yyy runner=xxxxxxx
</code></pre>
<p>What permission does the gitlab runner kubernetes executor need?</p>
| RubenLaguna | <p>I couldn't find a list of the required permissions in the <a href="https://docs.gitlab.com/runner/" rel="noreferrer">GitLab runner documentation</a>, but I tried adding permissions one by one and compiled a list of the permissions required for basic functioning.</p>
<p>The gitlab runner will use the service account <code>system:serviceaccount:gitlab-runner:default</code> so we need to create a role and assign that role to that service account.</p>
<pre><code># cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: gitlab-runner
namespace: gitlab-runner
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["list", "get", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get"]
EOF
# kubectl create rolebinding --namespace=gitlab-runner gitlab-runner-binding --role=gitlab-runner --serviceaccount=gitlab-runner:default
</code></pre>
<p>With that role assigned to the service account, GitLab runner will be able to create, execute and delete the pod and also access the logs. </p>
| RubenLaguna |
<p>I have defined a K8s configuration which deploys a metricbeat container. Below is the configuration file. But I got an error when running <code>kubectl describe pod</code>:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m24s default-scheduler Successfully assigned default/metricbeat-6467cc777b-jrx9s to ip-192-168-44-226.ap-southeast-2.compute.internal
Warning FailedMount 3m21s kubelet Unable to attach or mount volumes: unmounted volumes=[config], unattached volumes=[default-token-4dxhl config modules]: timed out waiting for the condition
Warning FailedMount 74s (x10 over 5m24s) kubelet MountVolume.SetUp failed for volume "config" : configmap "metricbeat-daemonset-config" not found
Warning FailedMount 67s kubelet Unable to attach or mount volumes: unmounted volumes=[config], unattached volumes=[config modules default-token-4dxhl]: timed out waiting for the condition
</code></pre>
<p>Based on the error message, it says <code>configmap "metricbeat-daemonset-config" not found</code>, but it does exist in the configuration file below. Why does it report this error?</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: metricbeat-daemonset-config
namespace: kube-system
labels:
k8s-app: metricbeat
data:
metricbeat.yml: |-
metricbeat.config.modules:
# Mounted `metricbeat-daemonset-modules` configmap:
path: ${path.config}/modules.d/*.yml
# Reload module configs as they change:
reload.enabled: false
processors:
- add_cloud_metadata:
output.elasticsearch:
hosts: ['${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT:9200}']
---
apiVersion: v1
kind: ConfigMap
metadata:
name: metricbeat-daemonset-modules
labels:
k8s-app: metricbeat
data:
aws.yml: |-
- module: aws
access_key_id: 'xxxx'
secret_access_key: 'xxxx'
period: 600s
regions:
- ap-southeast-2
metricsets:
- elb
- natgateway
- rds
- transitgateway
- usage
- vpn
- cloudwatch
metrics:
- namespace: "*"
---
# Deploy a Metricbeat instance per node for node metrics retrieval
apiVersion: apps/v1
kind: Deployment
metadata:
name: metricbeat
labels:
k8s-app: metricbeat
spec:
selector:
matchLabels:
k8s-app: metricbeat
template:
metadata:
labels:
k8s-app: metricbeat
spec:
terminationGracePeriodSeconds: 30
hostNetwork: true
containers:
- name: metricbeat
image: elastic/metricbeat:7.11.1
env:
- name: ELASTICSEARCH_HOST
value: es-entrypoint
- name: ELASTICSEARCH_PORT
value: '9200'
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: config
mountPath: /etc/metricbeat.yml
readOnly: true
subPath: metricbeat.yml
- name: modules
mountPath: /usr/share/metricbeat/modules.d
readOnly: true
volumes:
- name: config
configMap:
defaultMode: 0640
name: metricbeat-daemonset-config
- name: modules
configMap:
defaultMode: 0640
name: metricbeat-daemonset-modules
</code></pre>
| Joey Yi Zhao | <p>There's a good chance that the other resources are ending up in namespace <code>default</code> because they do not specify a namespace, but the config does (<code>kube-system</code>). You should probably put all of this in its own <code>metricbeat</code> namespace.</p>
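<p>A quick way to confirm the mismatch, plus a sketch of moving everything into one namespace as suggested:</p>
<pre><code># the first ConfigMap went to kube-system, everything else defaulted to "default"
kubectl get configmap -n kube-system | grep metricbeat
kubectl get configmap,deployment -n default | grep metricbeat

# create a dedicated namespace and re-apply the manifest into it
kubectl create namespace metricbeat
kubectl apply -n metricbeat -f your-manifest.yaml
</code></pre>
<p>Note that kubectl will refuse to apply an object whose manifest declares a different namespace than the one given with <code>-n</code>, so either remove the <code>namespace: kube-system</code> line from the first ConfigMap or set the same namespace explicitly on all four objects.</p>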
| coderanger |
<p>I have installed bitnami/external-dns on my EKS Kubernetes cluster. The role of the pod is to create new records in my Route53 hosted zone once an Ingress expects the records to be there. No problems up to this point.</p>
<p>But when I remove the Ingress, the Route53 records are not deleted. What is expected to delete these records? What am I doing wrong?</p>
<p>Installation of External DNS</p>
<pre><code>helm install extdns bitnami/external-dns \
--set provider=aws \
--set interval=1m \
--set logLevel=debug \
</code></pre>
<p>The Ingress</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: {{$.Chart.Name}}-ingress
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:xxxxxxxx:certificate/some-uuid
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTP": 81}, {"HTTPS":443}]'
external-dns.alpha.kubernetes.io/hostname: zzz1.blah.nl
labels:
app: {{$.Chart.Name}}-service
spec:
rules:
- host: zzz1.blah.nl
http:
paths:
- path: /*
backend:
serviceName: {{$.Chart.Name}}-service
servicePort: 8080
- http:
paths:
- path: /zzz1/*
backend:
serviceName: {{$.Chart.Name}}-service
servicePort: 8080
</code></pre>
<p>External DNS logging</p>
<pre><code>time="2021-05-05T20:31:02Z" level=debug msg="Refreshing zones list cache"
time="2021-05-05T20:31:02Z" level=debug msg="Considering zone: /hostedzone/xxxx (domain: local.)"
time="2021-05-05T20:31:02Z" level=debug msg="Considering zone: /hostedzone/xxxx (domain: blah.nl.)"
time="2021-05-05T20:31:03Z" level=debug msg="No endpoints could be generated from service kube-system/aws-load-balancer-webhook-service"
time="2021-05-05T20:31:03Z" level=debug msg="No endpoints could be generated from service default/extdns-external-dns"
time="2021-05-05T20:31:03Z" level=debug msg="No endpoints could be generated from service default/module1-service"
time="2021-05-05T20:31:03Z" level=debug msg="No endpoints could be generated from service default/kubernetes"
time="2021-05-05T20:31:03Z" level=debug msg="No endpoints could be generated from service kube-system/kube-dns"
time="2021-05-05T20:31:03Z" level=debug msg="No endpoints could be generated from service kubernetes-dashboard/kubernetes-dashboard"
time="2021-05-05T20:31:03Z" level=debug msg="No endpoints could be generated from service kubernetes-dashboard/dashboard-metrics-scraper"
time="2021-05-05T20:31:03Z" level=debug msg="Endpoints generated from ingress: default/module1-ingress: [zzz1.blah.nl 0 IN CNAME k8s-default-module1i-0000000-1693479811.us-west-2.elb.amazonaws.com [] zzz1.liberaalgeluid.nl 0 IN CNAME k8s-default-module1i-000000-1693479811.us-west-2.elb.amazonaws.com []]"
time="2021-05-05T20:31:03Z" level=debug msg="Removing duplicate endpoint zzz1.blah.nl 0 IN CNAME k8s-default-module1i-000000000-1693479811.us-west-2.elb.amazonaws.com []"
time="2021-05-05T20:31:03Z" level=debug msg="Modifying endpoint: zzz1.blah.nl 0 IN CNAME k8s-default-module1i-000000000-1693479811.us-west-2.elb.amazonaws.com [], setting alias=true"
time="2021-05-05T20:31:03Z" level=debug msg="Modifying endpoint: zzz1.blah.nl 0 IN CNAME k8s-default-module1i-000000000-1693479811.us-west-2.elb.amazonaws.com [{alias true}], setting aws/evaluate-target-health=true"
time="2021-05-05T20:31:03Z" level=debug msg="Refreshing zones list cache"
time="2021-05-05T20:31:03Z" level=debug msg="Considering zone: /hostedzone/Z000000000 (domain: blah.nl.)"
time="2021-05-05T20:31:03Z" level=debug msg="Considering zone: /hostedzone/Z000000000 (domain: local.)"
time="2021-05-05T20:31:03Z" level=info msg="All records are already up to date"
</code></pre>
| Marc Enschede | <p>The default <code>--policy</code> option in the chart is <code>upsert-only</code>; this is different from the underlying default in external-dns itself, which is <code>sync</code>. In <code>upsert-only</code> mode it will not delete anything. This is usually for safety, as cleanup can happen in batches and under user supervision. You can override the policy value back to <code>sync</code> if you would like though (<a href="https://github.com/bitnami/charts/blob/05a5bd69206574f3f8638197eb98da2164343a42/bitnami/external-dns/values.yaml#L432" rel="nofollow noreferrer">https://github.com/bitnami/charts/blob/05a5bd69206574f3f8638197eb98da2164343a42/bitnami/external-dns/values.yaml#L432</a>).</p>
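<p>A sketch of switching it back, reusing the original install flags plus the chart's <code>policy</code> value linked above:</p>
<pre><code>helm upgrade extdns bitnami/external-dns \
  --set provider=aws \
  --set interval=1m \
  --set logLevel=debug \
  --set policy=sync
</code></pre>
<p>With <code>sync</code>, external-dns will also delete records for Ingresses that disappear, so make sure it only manages records it created (via its TXT registry / owner-id settings) before enabling it.</p>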
| coderanger |
<p>I'm trying to get sshd running in my Kubernetes cluster so that I can set up a reverse proxy/ngrok tunnel per <a href="https://jerrington.me/posts/2019-01-29-self-hosted-ngrok.html" rel="nofollow noreferrer">this blog post</a>.</p>
<p>I've got nginx running, but I can't seem to connect to SSH.</p>
<p>Here's the complete Kubernetes config:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: dev-example-ingress
namespace: dev-example
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
tls:
- hosts:
- dev.example.ca
secretName: dev-example-tls
rules:
- host: dev.example.ca
http:
paths:
- backend:
serviceName: app-service
servicePort: 80
- backend:
serviceName: app-service
servicePort: 2222
---
apiVersion: v1
kind: Service
metadata:
name: app-service
namespace: dev-example
spec:
selector:
pod: my-pod-label
ports:
- name: http
protocol: TCP
port: 80
- name: ssh-tcp
protocol: TCP
port: 2222
# - name: ssh-udp
# protocol: UDP
# port: 2222
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: app-deployment
namespace: dev-example
spec:
revisionHistoryLimit: 3
selector:
matchLabels:
pod: my-pod-label
template:
metadata:
labels:
pod: my-pod-label
spec:
imagePullSecrets:
- name: regcred
containers:
- name: nginx
image: dreg.example.ca/example/dev.example.ca
ports:
- name: http
containerPort: 80
- name: sshd
image: linuxserver/openssh-server
ports:
- name: ssh
containerPort: 2222
env:
- name: PUID
value: '1000'
- name: PGID
value: '1000'
- name: TZ
value: America/Los_Angeles
- name: USER_NAME
value: example
- name: USER_PASSWORD
value: test
</code></pre>
<p>But when I try to connect it says "connection refused":</p>
<pre><code>❯ ssh -p 2222 [email protected]
ssh: connect to host dev.example.ca port 2222: Connection refused
</code></pre>
<p>I don't know if that means I didn't expose the port properly or what.</p>
<p>I'm pretty sure <a href="https://registry.hub.docker.com/r/linuxserver/openssh-server" rel="nofollow noreferrer"><code>linuxserver/openssh-server</code></a> is running on port 2222. If I run <code>ps -aux</code> in that container I get:</p>
<pre><code>^@root@app-deployment-5d9567dcc5-f5hq6:/# ps -aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 200 4 ? Ss 19:26 0:00 s6-svscan -t0 /var/run/s6/services
root 34 0.0 0.0 200 4 ? S 19:26 0:00 s6-supervise s6-fdholderd
root 277 0.0 0.0 200 4 ? S 19:26 0:00 s6-supervise openssh-server/log
root 278 0.0 0.0 200 4 ? S 19:26 0:00 s6-supervise openssh-server
mpen 280 0.0 0.0 256 4 ? Ss 19:26 0:00 s6-log n30 s10000000 S30000000 T !gzip -nq9 /config/logs/openssh
mpen 281 0.0 0.0 4384 3480 ? Ss 19:26 0:00 sshd: /usr/sbin/sshd -D -e -p 2222 [listener] 0 of 10-100 startups
root 298 0.5 0.0 2580 2304 pts/0 Ss 19:32 0:00 bash
root 307 0.0 0.0 1640 868 pts/0 R+ 19:32 0:00 ps -aux
</code></pre>
<p>What am I missing?</p>
<p>I'm open to using other ssh docker images if they work better/are easier. This is just for dev.</p>
| mpen | <p>The ingress system only works for HTTP (some Ingress Controllers support basic TCP routing as a custom extension but you're just using the basics). SSH is not an HTTP-based protocol :) What you want instead is a Service with type LoadBalancer.</p>
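<p>A minimal sketch of what that could look like for the sshd container, kept separate from the HTTP Service so the Ingress part stays as-is:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: ssh-service
  namespace: dev-example
spec:
  type: LoadBalancer
  selector:
    pod: my-pod-label
  ports:
    - name: ssh
      protocol: TCP
      port: 2222
      targetPort: 2222
</code></pre>
<p>Once the cloud provider (or something like MetalLB locally) assigns an external IP, <code>ssh -p 2222 example@&lt;external-ip&gt;</code> should reach the container directly without going through nginx.</p>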
| coderanger |
<p>I am trying to understand the default Prometheus rules for Kubernetes, and I came across this expression:</p>
<pre><code> sum(namespace_cpu:kube_pod_container_resource_requests:sum{})
/
sum(kube_node_status_allocatable{resource="cpu"})
>
((count(kube_node_status_allocatable{resource="cpu"}) > 1) - 1) / count(kube_node_status_allocatable{resource="cpu"})
</code></pre>
<p>Specifically, I am curious about <code>namespace_cpu:kube_pod_container_resource_requests:sum{}</code>. <code>namespace_cpu</code> does not appear to be an operation or reserved word in PromQL.</p>
<p>I can't seem to find it either in Prometheus documentation. <a href="https://prometheus.io/docs/prometheus/latest/querying/basics/" rel="nofollow noreferrer">https://prometheus.io/docs/prometheus/latest/querying/basics/</a></p>
<p>Any hints?</p>
| Jerome Ortega | <p>Nothing, it's not an operator; <code>:</code> is just a legal character in metric names. Some standard rulesets use it for grouping rollup (recording) rules together, but it's just a naming scheme at most.</p>
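<p>These names are produced by recording rules in the Prometheus rule files. A simplified sketch of what such a rule might look like (the real rule shipped with the Kubernetes monitoring mixin is more involved):</p>
<pre><code>groups:
  - name: k8s.rules
    rules:
      - record: namespace_cpu:kube_pod_container_resource_requests:sum
        expr: sum by (namespace) (kube_pod_container_resource_requests{resource="cpu"})
</code></pre>
<p>So the long name is just a precomputed query, and the colons follow the common <code>level:metric:operations</code> naming convention for aggregated series.</p>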
| coderanger |
<p>I have an application that needs to know it's assigned NodePort. Unfortunately it is not possible to write a file to a mountable volume and then read that file in the main container. Therefore, i'm looking for a way to either have the initContainer set an environment variable that gets passed to the main container or to modify the main pod's launch command to add an additional CLI argument. I can't seem to find any examples or documentation that would lead me to this answer. TIA.</p>
| AcceleratXR | <p>There's no direct way so you have to get creative. For example you can make a shared emptyDir mount that both containers can access, have the initContainer write <code>export FOO=bar</code> to that file, and then change the main container command to something like <code>[bash, -c, "source /thatfile && exec originalcommand"]</code></p>
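<p>A rough sketch of that pattern (the image names, the lookup command and the entrypoint are placeholders):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  volumes:
    - name: shared
      emptyDir: {}
  initContainers:
    - name: write-env
      image: busybox
      command: ['sh', '-c', 'echo "export NODE_PORT=$(my-lookup-command)" > /shared/env.sh']
      volumeMounts:
        - name: shared
          mountPath: /shared
  containers:
    - name: app
      image: myapp:latest
      command: ['bash', '-c', 'source /shared/env.sh && exec originalcommand']
      volumeMounts:
        - name: shared
          mountPath: /shared
</code></pre>
<p>The init container writes the variable into the shared <code>emptyDir</code>, and the main container sources it before exec'ing the real entrypoint, so the value ends up as an environment variable of the original process.</p>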
| coderanger |
<p>I moved from gvisor-containerd-shim (Shim V1) to containerd-shim-runsc-v1 (Shim V2). The metrics server and the Horizontal Pod Autoscaler used to work just fine in the case of gvisor-containerd-shim.</p>
<p>But now, with containerd-shim-runsc-v1, I keep getting CPU and memory metrics for nodes and runc pods, but I only get memory metrics for runsc (gvisor) pods.</p>
<p>For example, I deployed a PHP server in a gvisor pod with containerd-shim-runsc-v1. I get the following metrics:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache 0%/50% 1 10 1 68s
kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
snf-877559 549m 13% 2327Mi 39%
kubectl top pods
NAME CPU(cores) MEMORY(bytes)
php-apache-gvisor-6f7bb6cf84-28qdk 0m 52Mi
</code></pre>
<p>After sending some load to the php-apache-gvisor pod, I can see CPU and memory usage increment for the node and for the runc pod (load-generator). <strong>I can also see that php-apache-gvisor's memory is increased from 52 to 72 Mi but its CPU usage remains at 0%.</strong> Why does the cpu usage remain at 0%?</p>
<p>I also tried with different container images, but I keep getting same results.</p>
<p>With load I get the following metrics:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache 0%/50% 1 10 1 68s
kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
snf-877559 946m 23% 2413Mi 41%
kubectl top pods
NAME CPU(cores) MEMORY(bytes)
load-generator-7d549cd44-xmbqw 3m 1Mi
php-apache-gvisor-6f7bb6cf84-28qdk 0m 72Mi
</code></pre>
<p><strong>Further infos:</strong></p>
<p>kubeadm, kubernetes 1.15.3, containerd 1.3.3, runsc nightly/2019-09-18, flannel</p>
<pre class="lang-sh prettyprint-override"><code>kubectl logs metrics-server-74657b4dc4-8nlzn -n kube-system
I0728 09:33:42.449921 1 serving.go:312] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
I0728 09:33:44.153682 1 secure_serving.go:116] Serving securely on [::]:4443
E0728 09:35:24.579804 1 reststorage.go:160] unable to fetch pod metrics for pod default/php-apache-gvisor-6f7bb6cf84-28qdk: no metrics known for pod
E0728 09:35:39.940417 1 reststorage.go:160] unable to fetch pod metrics for pod default/php-apache-gvisor-6f7bb6cf84-28qdk: no metrics known for pod
</code></pre>
<p>/etc/containerd/config.toml (containerd-shim-runsc-v1)</p>
<pre><code>subreaper = true
oom_score = -999
disabled_plugins = ["restart"]
[debug]
level = "debug"
[metrics]
address = "127.0.0.1:1338"
[plugins.linux]
runtime = "runc"
shim_debug = true
[plugins.cri.containerd.runtimes.runsc]
runtime_type = "io.containerd.runsc.v1"
</code></pre>
<p>/etc/containerd/config.toml (gvisor-containerd-shim)</p>
<pre><code>subreaper = true
oom_score = -999
disabled_plugins = ["restart"]
[debug]
level = "debug"
[metrics]
address = "127.0.0.1:1338"
[plugins.linux]
runtime = "runc"
shim_debug = true
shim = "/usr/local/bin/gvisor-containerd-shim"
[plugins.cri.containerd.runtimes.runsc]
runtime_type = "io.containerd.runtime.v1.linux"
runtime_engine = "/usr/local/bin/runsc"
runtime_root = "/run/containerd/runsc"
</code></pre>
<p>The metrics server yaml is based on <a href="https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml</a> with the following args</p>
<pre class="lang-sh prettyprint-override"><code>....
containers:
- name: metrics-server
image: k8s.gcr.io/metrics-server-amd64:v0.3.6
imagePullPolicy: IfNotPresent
args:
- --kubelet-preferred-address-types=InternalIP
- --kubelet-insecure-tls
- --cert-dir=/tmp
- --secure-port=4443
....
</code></pre>
<p>The current deployment has the below resources section</p>
<pre><code> resources:
limits:
cpu: 500m
requests:
cpu: 200m
</code></pre>
| virt | <p>gVisor currently only reports memory and Pids on a per Pod basis.
See: <a href="https://github.com/google/gvisor/blob/add40fd/runsc/boot/events.go#L62-L68" rel="nofollow noreferrer">https://github.com/google/gvisor/blob/add40fd/runsc/boot/events.go#L62-L68</a></p>
<p>We are planning to export more stats and the issue for tracking that work is here:
<a href="https://gvisor.dev/issue/172" rel="nofollow noreferrer">https://gvisor.dev/issue/172</a></p>
| Ian Lewis |
<p>I know about snapshots and have tested volume cloning, and it works when the storage class is the same.</p>
<p>But what if I have two storage classes, one for fast SSD and a second for cold HDD storage over the network, and I want to periodically back up to the cold storage? How do I do it?</p>
| omikron | <p>This is not a thing Kubernetes supports since it would be entirely up to your underlying storage. The simple version would be a pod that mounts both and runs <code>rsync</code> I guess?</p>
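<p>A rough sketch of that, assuming two existing PVCs named <code>fast-ssd-pvc</code> and <code>cold-hdd-pvc</code>, and that the source volume can be mounted read-only alongside (or instead of) the workload:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: backup-to-cold
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: rsync
          image: alpine
          command: ['sh', '-c', 'apk add --no-cache rsync && rsync -a /source/ /backup/']
          volumeMounts:
            - name: source
              mountPath: /source
              readOnly: true
            - name: backup
              mountPath: /backup
      volumes:
        - name: source
          persistentVolumeClaim:
            claimName: fast-ssd-pvc
        - name: backup
          persistentVolumeClaim:
            claimName: cold-hdd-pvc
</code></pre>
<p>Wrapping the same template in a CronJob gives you the periodic backup. Keep in mind that ReadWriteOnce volumes can generally only be mounted by pods on the node where they are already attached.</p>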
| coderanger |
<p>I have a cluster on GKE, currently on version v1.19.9-gke.1400. According to the Kubernetes release notes, dockershim will be deprecated in 1.20. My cluster is configured to auto-upgrade, and one specific application uses the docker socket mapped into it, where I run containers directly through the Docker API.</p>
<p>My question: in a hypothetical upgrade of the cluster to Kubernetes 1.20, will the docker socket become unavailable immediately? Or does the deprecation only mean that it will be removed in the future?</p>
| conrallendale | <p>Yes, if you use the non-containerd images. In the node pool config you can choose which image type you want and COS vs COS_Containerd are separate choices there. At some point later in 2021 we may (if all goes according to plan) remove Docker support in Kubernetes itself for 1.23. However Google may choose to remove support one version earlier in 1.22 or continue it later via the out-of-tree Docker CRI that Mirantis is working on.</p>
<p>I am running 1.20 in the Rapid channel and can confirm that Docker is still there and happy. Also FWIW if you need to run <code>dockerd</code> yourself via a DaemonSet it takes like 30 seconds to set up, really not a huge deal either way.</p>
| coderanger |
<p>I have a kubernetes cluster and I've been experimenting so far with cert-manager and letsencrypt ssl certificates.</p>
<p>Everything goes fine, I have issued an SSL certificate and applied to the cluster and https connection is working excellent.</p>
<p>The problem I face is that I am experimenting with new things, and it often leads me to delete the whole cluster and create a fresh one, which in turn makes me lose the SSL certificate and issue a new one, but there's a rate limit of 50 certificates per week per domain.</p>
<p>Is there a way I can reuse a certificate in a new k8s cluster?</p>
| Ilia Hanev | <p>Copy the secret locally (<code>kubectl get secret -o yaml</code> and then clean up unneeded fields) and then upload it to the new cluster (<code>kubectl apply</code>).</p>
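<p>A sketch of the copy (the secret name is whatever cert-manager created, typically the <code>secretName</code> from your Certificate/Ingress):</p>
<pre><code># on the old cluster
kubectl get secret my-tls-secret -o yaml > my-tls-secret.yaml

# edit the file: strip uid, resourceVersion, creationTimestamp, ownerReferences
# and cert-manager annotations, keeping metadata.name, type: kubernetes.io/tls
# and the data: tls.crt / tls.key entries

# on the new cluster
kubectl apply -f my-tls-secret.yaml
</code></pre>
<p>If cert-manager also runs on the new cluster and the secret name matches the Certificate's <code>secretName</code>, it should pick up the existing certificate instead of ordering a new one, which keeps you under the rate limit.</p>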
| coderanger |
<p>I have a service deployed on Kubernetes and it has url <code>app.io</code> (using ingress).</p>
<p>What if I need users to always go to <code>app.io</code>, and:</p>
<ul>
<li><p>if it's running okay with no errors, it redirects to the <code>app.io</code> (on k8s)</p>
</li>
<li><p>if it is not running well or has an error, it would redirect to a backup service, for example on Heroku with the URL <code>backup.io</code>.</p>
</li>
</ul>
<p>How can I do that?</p>
<p>Thanks in advance</p>
| Ahmed Nageh | <p>Fallback routing like you describe is not part of the Ingress standard. It only does routing based on incoming Host header and request path. It's possible some specific Ingress Controller supports this as a custom extension but I don't know of any that do.</p>
| coderanger |
<p>I have a pod running RabbitMQ. Below is the deployment manifest:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: service-rabbitmq
spec:
selector:
app: service-rabbitmq
ports:
- port: 5672
targetPort: 5672
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment-rabbitmq
spec:
selector:
matchLabels:
app: deployment-rabbitmq
template:
metadata:
labels:
app: deployment-rabbitmq
spec:
containers:
- name: rabbitmq
image: rabbitmq:latest
volumeMounts:
- name: rabbitmq-data-volume
mountPath: /var/lib/rabbitmq
resources:
requests:
cpu: 250m
memory: 128Mi
limits:
cpu: 750m
memory: 256Mi
volumes:
- name: rabbitmq-data-volume
persistentVolumeClaim:
claimName: rabbitmq-pvc
</code></pre>
<p>When I deploy it in my local cluster, I see the pod running for a while and then crashing, so basically it goes into a crash loop. Following are the logs I got from the pod:</p>
<pre><code>$ kubectl logs deployment-rabbitmq-649b8479dc-kt9s4
2021-10-14 06:46:36.182390+00:00 [info] <0.222.0> Feature flags: list of feature flags found:
2021-10-14 06:46:36.221717+00:00 [info] <0.222.0> Feature flags: [ ] implicit_default_bindings
2021-10-14 06:46:36.221768+00:00 [info] <0.222.0> Feature flags: [ ] maintenance_mode_status
2021-10-14 06:46:36.221792+00:00 [info] <0.222.0> Feature flags: [ ] quorum_queue
2021-10-14 06:46:36.221813+00:00 [info] <0.222.0> Feature flags: [ ] stream_queue
2021-10-14 06:46:36.221916+00:00 [info] <0.222.0> Feature flags: [ ] user_limits
2021-10-14 06:46:36.221933+00:00 [info] <0.222.0> Feature flags: [ ] virtual_host_metadata
2021-10-14 06:46:36.221953+00:00 [info] <0.222.0> Feature flags: feature flag states written to disk: yes
2021-10-14 06:46:37.018537+00:00 [noti] <0.44.0> Application syslog exited with reason: stopped
2021-10-14 06:46:37.018646+00:00 [noti] <0.222.0> Logging: switching to configured handler(s); following messages may not be visible in this log output
2021-10-14 06:46:37.045601+00:00 [noti] <0.222.0> Logging: configured log handlers are now ACTIVE
2021-10-14 06:46:37.635024+00:00 [info] <0.222.0> ra: starting system quorum_queues
2021-10-14 06:46:37.635139+00:00 [info] <0.222.0> starting Ra system: quorum_queues in directory: /var/lib/rabbitmq/mnesia/rabbit@deployment-rabbitmq-649b8479dc-kt9s4/quorum/rabbit@deployment-rabbitmq-649b8479dc-kt9s4
2021-10-14 06:46:37.849041+00:00 [info] <0.259.0> ra: meta data store initialised for system quorum_queues. 0 record(s) recovered
2021-10-14 06:46:37.877504+00:00 [noti] <0.264.0> WAL: ra_log_wal init, open tbls: ra_log_open_mem_tables, closed tbls: ra_log_closed_mem_tables
</code></pre>
<p>This log isn't very helpful; I can't find any error message in it. The only potentially useful line is <code>Application syslog exited with reason: stopped</code>, but as far as I understand that isn't the cause. The event log isn't helpful either:</p>
<pre><code>$ kubectl describe pods deployment-rabbitmq-649b8479dc-kt9s4
Name: deployment-rabbitmq-649b8479dc-kt9s4
Namespace: default
Priority: 0
Node: docker-desktop/192.168.65.4
Start Time: Thu, 14 Oct 2021 12:45:03 +0600
Labels: app=deployment-rabbitmq
pod-template-hash=649b8479dc
skaffold.dev/run-id=7af5e1bb-e0c8-4021-a8a0-0c8bf43630b6
Annotations: <none>
Status: Running
IP: 10.1.5.138
IPs:
IP: 10.1.5.138
Controlled By: ReplicaSet/deployment-rabbitmq-649b8479dc
Containers:
rabbitmq:
Container ID: docker://de309f94163c071afb38fb8743d106923b6bda27325287e82bc274e362f1f3be
Image: rabbitmq:latest
Image ID: docker-pullable://rabbitmq@sha256:d8efe7b818e66a13fdc6fdb84cf527984fb7d73f52466833a20e9ec298ed4df4
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: OOMKilled
Exit Code: 0
Started: Thu, 14 Oct 2021 13:56:29 +0600
Finished: Thu, 14 Oct 2021 13:56:39 +0600
Ready: False
Restart Count: 18
Limits:
cpu: 750m
memory: 256Mi
Requests:
cpu: 250m
memory: 128Mi
Environment: <none>
Mounts:
/var/lib/rabbitmq from rabbitmq-data-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9shdv (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
rabbitmq-data-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: rabbitmq-pvc
ReadOnly: false
kube-api-access-9shdv:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 23m (x6 over 50m) kubelet (combined from similar events): Successfully pulled image "rabbitmq:latest" in 4.267310231s
Normal Pulling 18m (x16 over 73m) kubelet Pulling image "rabbitmq:latest"
Warning BackOff 3m45s (x307 over 73m) kubelet Back-off restarting failed container
</code></pre>
<p>What could be the reason for this crash-loop?</p>
<blockquote>
<p><strong>NOTE:</strong> <code>rabbitmq-pvc</code> is successfully bound. No issue there.</p>
</blockquote>
<h2>Update:</h2>
<p><a href="https://stackoverflow.com/a/58610625/11317272">This answer</a> indicates that RabbitMQ should be deployed as <strong>StatefulSet</strong>. So I adjusted the manifest like so:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: service-rabbitmq
spec:
selector:
app: service-rabbitmq
ports:
- name: rabbitmq-amqp
port: 5672
- name: rabbitmq-http
port: 15672
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: statefulset-rabbitmq
spec:
selector:
matchLabels:
app: statefulset-rabbitmq
serviceName: service-rabbitmq
template:
metadata:
labels:
app: statefulset-rabbitmq
spec:
containers:
- name: rabbitmq
image: rabbitmq:latest
volumeMounts:
- name: rabbitmq-data-volume
mountPath: /var/lib/rabbitmq/mnesia
resources:
requests:
cpu: 250m
memory: 128Mi
limits:
cpu: 750m
memory: 256Mi
volumes:
- name: rabbitmq-data-volume
persistentVolumeClaim:
claimName: rabbitmq-pvc
</code></pre>
<p>The pod still undergoes crash-loop, but the logs are slightly different.</p>
<pre><code>$ kubectl logs statefulset-rabbitmq-0
2021-10-14 09:38:26.138224+00:00 [info] <0.222.0> Feature flags: list of feature flags found:
2021-10-14 09:38:26.158953+00:00 [info] <0.222.0> Feature flags: [x] implicit_default_bindings
2021-10-14 09:38:26.159015+00:00 [info] <0.222.0> Feature flags: [x] maintenance_mode_status
2021-10-14 09:38:26.159037+00:00 [info] <0.222.0> Feature flags: [x] quorum_queue
2021-10-14 09:38:26.159078+00:00 [info] <0.222.0> Feature flags: [x] stream_queue
2021-10-14 09:38:26.159183+00:00 [info] <0.222.0> Feature flags: [x] user_limits
2021-10-14 09:38:26.159236+00:00 [info] <0.222.0> Feature flags: [x] virtual_host_metadata
2021-10-14 09:38:26.159270+00:00 [info] <0.222.0> Feature flags: feature flag states written to disk: yes
2021-10-14 09:38:26.830814+00:00 [noti] <0.44.0> Application syslog exited with reason: stopped
2021-10-14 09:38:26.830925+00:00 [noti] <0.222.0> Logging: switching to configured handler(s); following messages may not be visible in this log output
2021-10-14 09:38:26.852048+00:00 [noti] <0.222.0> Logging: configured log handlers are now ACTIVE
2021-10-14 09:38:33.754355+00:00 [info] <0.222.0> ra: starting system quorum_queues
2021-10-14 09:38:33.754526+00:00 [info] <0.222.0> starting Ra system: quorum_queues in directory: /var/lib/rabbitmq/mnesia/rabbit@statefulset-rabbitmq-0/quorum/rabbit@statefulset-rabbitmq-0
2021-10-14 09:38:33.760365+00:00 [info] <0.290.0> ra: meta data store initialised for system quorum_queues. 0 record(s) recovered
2021-10-14 09:38:33.761023+00:00 [noti] <0.302.0> WAL: ra_log_wal init, open tbls: ra_log_open_mem_tables, closed tbls: ra_log_closed_mem_tables
</code></pre>
<p>The feature flags are now marked as set, as can be seen. There are no other notable changes, so I still need help.</p>
<h2>! New Issue !</h2>
<p>Head over <a href="https://stackoverflow.com/q/69574698/11317272">here</a>.</p>
| msrumon | <p>The pod is getting OOMKilled (see <code>Last State</code> / <code>Reason: OOMKilled</code> in the describe output), so you need to assign more resources (memory) to the pod.</p>
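<p>For example (the exact numbers are a guess; RabbitMQ generally needs more than 256Mi to start comfortably):</p>
<pre><code>resources:
  requests:
    cpu: 250m
    memory: 512Mi
  limits:
    cpu: 750m
    memory: 1Gi
</code></pre>
<p>You can also lower RabbitMQ's own memory high watermark so it stays under the container limit, but raising the limit is the first step given <code>Reason: OOMKilled</code> in the describe output.</p>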
| Jehof |
<p>Let's say that I have a pre-existing (retained) EBS volume that was created by a PVC/PV that has been deleted by mistake. This volume was created like this:</p>
<pre><code>---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: gp2-retain
parameters:
fsType: ext4
type: gp2
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
labels:
app: prometheus
name: prometheus-server
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi
storageClassName: gp2-retain
volumeMode: Filesystem
</code></pre>
<p>and was used by a pod created by a helm chart with</p>
<pre><code>helm install prometheus-current stable/prometheus --set server.persistentVolume.existingClaim=prometheus-server
</code></pre>
<p>So this EBS contains some files created by that pod that I want to keep. Now, we managed to delete the PVC/PV <strong>but the EBS volume was retained</strong> due to the <code>reclaimPolicy</code>.</p>
<p><strong>So I want to recreate the PersistentVolumeClaim and PersistentVolume in a way that points to this particular EBS volumeID <code>aws://eu-west-1/vol-xxxxx</code>.</strong> How can I create a PVC without triggering dynamic provisioning, which would create a new PV backed by a completely new EBS
volume?</p>
| RubenLaguna | <p>You can "adopt" an existing EBS-volume into a new PVC/PV the key points are:</p>
<ul>
<li>Create a <code>PersistentVolume</code> with a <code>.metadata.name</code> of your choosing (like <code>vol-imported-prometheus-server</code> and a <code>.spec.awsElasticBlockStore.volumeID</code> equal to <code>aws://region/vol-xxxx</code>
<ul>
<li>If you specify the <code>volumeID</code> Kubernetes will not try to allocate a new EBS volume</li>
</ul>
</li>
<li>Create a <code>PersistentVolumeClaim</code> with <code>spec.volumeName</code> equal to the name of the PV in the previous step
<ul>
<li>If you specify the <code>volumeName</code> Kubernetes will bound the PVC to the existing PV instead of trying to dynamically provision a new PV based on the <code>StorageClass</code></li>
</ul>
</li>
</ul>
<p>Like this example:</p>
<pre><code>---
apiVersion: v1
kind: PersistentVolume
metadata:
name: vol-imported-prometheus-server
spec:
accessModes:
- ReadWriteOnce
awsElasticBlockStore:
fsType: ext4
volumeID: aws://eu-west-1c/vol-xxxxx
capacity:
storage: 8Gi
persistentVolumeReclaimPolicy: Retain
storageClassName: gp2-retain
volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
labels:
app: prometheus
name: imported-prometheus-server
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi
storageClassName: gp2-retain
volumeMode: Filesystem
volumeName: vol-imported-prometheus-server
</code></pre>
<p>If you <code>kubectl apply -f thatfile.yaml</code> you will end up with the desired PVC -> PV -> existing EBS volume.</p>
<pre><code>kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
vol-imported-prometheus-server 8Gi RWO Retain Bound prometheus/imported-prometheus-server gp2-retain 15m
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
imported-prometheus-server Bound vol-imported-prometheus-server 8Gi RWO gp2-retain 16m
</code></pre>
<p>And then you can use that PVC name in helm like so:</p>
<pre><code>helm install prometheus-current stable/prometheus --set server.persistentVolume.existingClaim=imported-prometheus-server
</code></pre>
<p>where <code>imported-prometheus-server</code> is the name of the PVC you just created.</p>
| RubenLaguna |
<p>It's probably something obvious but I don't seem to find a solution for joining 2 vectors in prometheus.</p>
<pre><code>sum(
rabbitmq_queue_messages{queue=~".*"}
) by (queue)
*
on (queue) group_left max(
label_replace(
kube_deployment_labels{label_daemon_name!=""},
"queue",
"$1",
"label_daemon_queue_name",
"(.*)"
)
) by (deployment, queue)
</code></pre>
<p>Below a picture of the output of the two separate vectors.</p>
<p><a href="https://i.stack.imgur.com/lGlFH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lGlFH.png" alt="enter image description here"></a></p>
| Enrico Stahn | <p>Group left has the many on the left, so you've got the factors to the <code>*</code> the wrong way around. Try it the other way.</p>
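<p>A sketch of the same query with the two sides swapped, so the vector that carries the extra <code>deployment</code> label (and can have several series per queue) is on the left of <code>group_left</code>:</p>
<pre><code>max(
  label_replace(
    kube_deployment_labels{label_daemon_name!=""},
    "queue",
    "$1",
    "label_daemon_queue_name",
    "(.*)"
  )
) by (deployment, queue)
*
on (queue) group_left
sum(
  rabbitmq_queue_messages{queue=~".*"}
) by (queue)
</code></pre>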
| brian-brazil |
<p>I work with the Kubernetes API (the library is @kubernetes/client-node).
I can get a list of pods in a specific namespace, but I don't understand how to get a list of the names of all namespaces.
How can I do this with @kubernetes/client-node?</p>
| Natasha Grosella | <p>In the corev1 API, it's <code>listNamespace</code>.</p>
<pre><code>const k8s = require('@kubernetes/client-node');
const kc = new k8s.KubeConfig();
kc.loadFromDefault();
const k8sApi = kc.makeApiClient(k8s.CoreV1Api);
k8sApi.listNamespace().then((res) => {
console.log(res.body);
});
</code></pre>
| coderanger |
<p>I want to execute <code>set</code> in a pod, to analyze the environment variables:</p>
<pre><code>kubectl exec my-pod -- set
</code></pre>
<p>But I get this error:</p>
<pre><code>OCI runtime exec failed: exec failed: container_linux.go:370: starting container process caused: exec: "set": executable file not found in $PATH: unknown
</code></pre>
<p>I think this is a special case, because there's no executable <code>set</code> the way there is, for example, an executable <code>ls</code>.</p>
<p><strong>Remarks</strong></p>
<ul>
<li>When I open a shell in the pod, it's possible to call <code>set</code> there.</li>
<li>When I call <code>kubectl exec</code> with other commands, for example <code>ls</code>, I get no error.</li>
<li>There are some other questions regarding <code>kubectl exec</code>. But these do not apply to my question, because my problem is about executing <code>set</code>.</li>
</ul>
| Matthias M | <p><code>set</code> is not a binary but a shell builtin, so there is no executable on <code>$PATH</code> for <code>kubectl exec</code> to run directly.</p>
<p>If you want to set an environment variable before executing a follow up command consider using <code>env</code></p>
<pre class="lang-sh prettyprint-override"><code>kubectl exec mypod -- env NAME=value123 script01
# or
kubectl exec mypod -- /bin/sh -c 'NAME=value123 script01'
</code></pre>
<p>see <a href="https://stackoverflow.com/a/55894599/93105">https://stackoverflow.com/a/55894599/93105</a> for more information</p>
<p>if you want to set the environment variable for the lifetime of the pod then you probably want to set it in the yaml manifest of the pod itself before creating it.</p>
<p>you can also run <code>set</code> if you first run the shell</p>
<pre class="lang-sh prettyprint-override"><code>kubectl exec mypod -- /bin/sh -c 'set'
</code></pre>
| codebreach |
<p>I have the following architecture for the PostgreSQL cluster:</p>
<p><a href="https://i.stack.imgur.com/TlP1l.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TlP1l.png" alt="enter image description here" /></a></p>
<p>Here, there are multiple clients that interact with the PostgreSQL pods via pgpool. The issue is that when a pod (either a <code>pgpool</code> or a <code>PostgreSQL</code> pod) terminates (for whatever reason), the client is impacted and has to recreate the connection. For example, in this diagram, if the <code>postgresql-1</code> pod terminates then <code>client-0</code> will have to recreate the connection with the cluster.</p>
<p>Is there a way in kubernetes to handle it so that connections to <code>pgpool k8s service</code> are load balanced/ recreated to other pods so that the clients do not see the switch over <strong>and are not impacted</strong>?</p>
<p>Please note these are TCP connections and not HTTP connections (which are stateless). Also, all the PostgreSQL pods are <a href="https://www.postgresql.org/docs/10/runtime-config-wal.html#SYNCHRONOUS-COMMIT-MATRIX" rel="nofollow noreferrer">always in sync with remote_apply</a>.</p>
| Vishrant | <p>Without substantial custom code to allow TCP connection transfers between hosts, you kind of can't. When a process shuts down, all TCP streams it has open will be closed, that's how normal Linux networking functions. If you poke a round on your search engine of choice for things like "TCP connection migration" you'll find a lot of research efforts on this but little actual code. More often you just terminate the TCP connection at some long-lived edge proxy and if <em>that</em> has to restart you eat the reconnects.</p>
| coderanger |
<p><strong>Summary</strong></p>
<p>I'm trying to figure out how to properly use the OR <code>|</code> operator in a Prometheus query because my imported Grafana dashboard is not working.</p>
<p><strong>Long version</strong></p>
<p>I'm trying to debug a Grafana dashboard based on some data scraped from my Kubernetes pods running <a href="https://github.com/AppMetrics/Prometheus" rel="nofollow noreferrer">AppMetrics/Prometheus</a>; the dashboard is <a href="https://grafana.com/dashboards/2204" rel="nofollow noreferrer">here</a>. Basically what happens is that when the value "All" for the <code>server</code> is selected on the Grafana dashboard (<code>server</code> is an individual pod in this case), no data appears. However, when I select an individual pod, then data does appear.</p>
<p>Here's an example of the same metric scraped from the two pods:</p>
<pre><code># HELP application_httprequests_transactions
# TYPE application_httprequests_transactions summary
application_httprequests_transactions_sum{server="myapp-test-58d94bf78d-jdq78",app="MyApp",env="test"} 5.006965628
application_httprequests_transactions_count{server="myapp-test-58d94bf78d-jdq78",app="MyApp",env="test"} 1367
application_httprequests_transactions{server="myapp-test-58d94bf78d-jdq78",app="MyApp",env="test",quantile="0.5"} 0.000202825
application_httprequests_transactions{server="myapp-test-58d94bf78d-jdq78",app="MyApp",env="test",quantile="0.75"} 0.000279318
application_httprequests_transactions{server="myapp-test-58d94bf78d-jdq78",app="MyApp",env="test",quantile="0.95"} 0.000329862
application_httprequests_transactions{server="myapp-test-58d94bf78d-jdq78",app="MyApp",env="test",quantile="0.99"} 0.055584233
# HELP application_httprequests_transactions
# TYPE application_httprequests_transactions summary
application_httprequests_transactions_sum{server="myapp-test-58d94bf78d-l9tdv",app="MyApp",env="test"} 6.10214788
application_httprequests_transactions_count{server="myapp-test-58d94bf78d-l9tdv",app="MyApp",env="test"} 1363
application_httprequests_transactions{server="myapp-test-58d94bf78d-l9tdv",app="MyApp",env="test",quantile="0.5"} 0.000218548
application_httprequests_transactions{server="myapp-test-58d94bf78d-l9tdv",app="MyApp",env="test",quantile="0.75"} 0.000277483
application_httprequests_transactions{server="myapp-test-58d94bf78d-l9tdv",app="MyApp",env="test",quantile="0.95"} 0.033821094
application_httprequests_transactions{server="myapp-test-58d94bf78d-l9tdv",app="MyApp",env="test",quantile="0.99"} 0.097113234
</code></pre>
<p>I ran the Query inspector in Grafana to find out which query it is calling, and then ran the PromQL query in Prometheus itself. Basically, when I execute the following PromQL queries individually, they return data:</p>
<pre><code>rate(application_httprequests_transactions_count{env="test",app="MyApp",server="myapp-test-58d94bf78d-l9tdv"}[15m])*60
rate(application_httprequests_transactions_count{env="test",app="MyApp",server="myapp-test-58d94bf78d-jdq78"}[15m])*60
</code></pre>
<p>However, when I try to use PromQL's <code>|</code> operator to combine them, I don't get data back:</p>
<pre><code>rate(application_httprequests_transactions_count{env="test",app="MyApp",server="myapp-test-58d94bf78d-l9tdv|myapp-test-58d94bf78d-jdq78"}[15m])*60
</code></pre>
<p>Here's the raw output from Grafana's query inspector:</p>
<pre><code>xhrStatus:"complete"
request:Object
method:"GET"
url:"api/datasources/proxy/56/api/v1/query_range?query=rate(application_httprequests_transactions_count%7Benv%3D%22test%22%2Capp%3D%22MyApp%22%2Cserver%3D%22myapp-test-58d94bf78d-jdq78%7Cmyapp-test-58d94bf78d-l9tdv%7Cmyapp-test-5b8c9845fb-7lklm%7Cmyapp-test-5b8c9845fb-8jf7n%7Cmyapp-test-5b8c9845fb-d9x5c%7Cmyapp-test-5b8c9845fb-fw4gj%7Cmyapp-test-5b8c9845fb-vtl9z%7Cmyapp-test-5b8c9845fb-vv7xv%7Cmyapp-test-5b8c9845fb-wq9bs%7Cmyapp-test-5b8c9845fb-xqfrt%7Cmyapp-test-69999d58b5-549vd%7Cmyapp-test-69999d58b5-lmp8x%7Cmyapp-test-69999d58b5-nbvt9%7Cmyapp-test-69999d58b5-qphj2%7Cmyapp-test-6b8dcc5ffb-gjjvj%7Cmyapp-test-6b8dcc5ffb-rxfk2%7Cmyapp-test-7fdf446767-bzhm2%7Cmyapp-test-7fdf446767-hp46w%7Cmyapp-test-7fdf446767-rhqhq%7Cmyapp-test-7fdf446767-wxmm2%22%7D%5B1m%5D)*60&start=1540574190&end=1540574505&step=15"
response:Object
status:"success"
data:Object
resultType:"matrix"
result:Array[0] => []
</code></pre>
<p>I opened a GitHub issue for this as well; it has a quick GIF screen recording showing what I mean: <a href="https://github.com/AppMetrics/Prometheus/issues/43" rel="nofollow noreferrer">AppMetrics/Prometheus#43</a></p>
| Scott Crooks | <p><code>|</code> is for regular expressions, PromQL doesn't have a <code>|</code> operator (but it does have an <code>or</code> operator). You need to specify that the matcher is a regex rather than an exact match with <code>=~</code>:</p>
<pre><code>rate(application_httprequests_transactions_count{env="test",app="MyApp",server=~"myapp-test-58d94bf78d-l9tdv|myapp-test-58d94bf78d-jdq78"}[15m])*60
</code></pre>
| brian-brazil |
<p>Can anyone pls help me with Open-Shift Routes? </p>
<p>I have set up a Route with edge TLS termination; calls made to the service endpoint (<a href="https://openshift-pmi-dev.apps.vapidly.os.fyre.ibm.com" rel="nofollow noreferrer">https://openshift-pmi-dev.apps.vapidly.os.fyre.ibm.com</a>) result in:</p>
<pre><code>502 Bad Gateway
The server returned an invalid or incomplete response.
</code></pre>
<p>Logs from the pod has the below error I make a REST call using the endpoints</p>
<pre><code>CWWKO0801E: Unable to initialize SSL connection. Unauthorized access was denied or security settings have expired. Exception is javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?
at com.ibm.jsse2.c.a(c.java:6)
at com.ibm.jsse2.as.a(as.java:532)
at com.ibm.jsse2.as.unwrap(as.java:580)
at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:5)
at com.ibm.ws.channel.ssl.internal.SSLConnectionLink.readyInbound(SSLConnectionLink.java:515)
</code></pre>
<p>The default passthrough route termination works, but it does not let me specify path-based routes, hence the attempt to use a Route with edge TLS termination. I am trying to route traffic from /ibm/pmi/service to apm-pm-api-service, and /ibm/pmi to apm-pm-ui-service, using a single hostname <a href="https://openshift-pmi-dev.apps.vapidly.os.fyre.ibm.com" rel="nofollow noreferrer">https://openshift-pmi-dev.apps.vapidly.os.fyre.ibm.com</a>.</p>
<p>I have SSL certs loaded into the edge route, liberty service uses the same certs via secrets defined in the deployment.yaml.</p>
<p>I am unable to identify the root cause of this SSL related error, is this coming from the wlp liberty application server or an issue with openshift routes?</p>
<p>Any suggestions on how to get the liberty application working.</p>
<p>Thanks for your help in advance!</p>
<p>Attaching the route.yaml</p>
<pre><code>kind: Route
apiVersion: route.openshift.io/v1
metadata:
name: openshift-pmi-dev
namespace: default
selfLink: /apis/route.openshift.io/v1/namespaces/default/routes/openshift-pmi-dev
uid: 9ba296f6-1611-11ea-a1ab-0a580afe00ab
resourceVersion: '6819345'
creationTimestamp: '2019-12-03T21:12:26Z'
annotations:
haproxy.router.openshift.io/balance: roundrobin
haproxy.router.openshift.io/hsts_header: max age=31536000;includeSubDomains;preload
spec:
host: openshift-pmi-dev.apps.vapidly.os.fyre.ibm.com
subdomain: ''
path: /ibm/pmi/service
to:
kind: Service
name: apm-pm-api-service
weight: 100
port:
targetPort: https
tls:
termination: edge
certificate: |
-----BEGIN CERTIFICATE-----
<valid cert>
-----END CERTIFICATE-----
key: |
-----BEGIN RSA PRIVATE KEY-----
<valid cert>
-----END RSA PRIVATE KEY-----
caCertificate: |
-----BEGIN CERTIFICATE-----
<valid cert>
-----END CERTIFICATE-----
insecureEdgeTerminationPolicy: Redirect
wildcardPolicy: None
status:
ingress:
- host: openshift-pmi-dev.apps.vapidly.os.fyre.ibm.com
routerName: default
conditions:
- type: Admitted
status: 'True'
lastTransitionTime: '2019-12-03T21:12:26Z'
wildcardPolicy: None
routerCanonicalHostname: apps.vapidly.os.fyre.ibm.com
</code></pre>
<p>Changing the Route to Re-encryte, results in Application is not available 502 error. It seems like the requests are not reaching the service.</p>
<p><a href="https://i.stack.imgur.com/lWBkz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lWBkz.png" alt="With reencrypt termination route"></a></p>
| Lokesh Sreedhar | <p>Edge termination means http (plaintext) to the back end service, but your route goes out of its way to send http to the https port.</p>
<p>Either drop the port:https or use 'reencrypt' termination instead of 'edge'</p>
| covener |
<p>I'm working on a project where I need a front end in Angular and a back end with Spring Boot.
I've made 2 Docker images for them, and used an ingress like the following to dispatch requests to the correct service:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: smf-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /api
pathType: Prefix
backend:
service:
name: rest-service
port:
number: 80
- path: /
pathType: Prefix
backend:
service:
name: ng-service
port:
number: 4200
</code></pre>
<p>As you see, all requests matching '/api*' are redirected to the rest-service, which is my spring boot application.</p>
<p>If I make a request, say '/api/helloworld', to the ingress, <strong>how is this request modified before reaching the Spring endpoints?</strong>
Will Spring receive the same request, calling an endpoint mapped to '/api/helloworld', or not?</p>
<p>I'm asking this because I mapped my endpoints in Spring to '/api/some_endpoint', but every time I try to access those URLs from the browser I get the Spring error page (404).
To achieve this behavior I set the following property in my Spring Boot application:</p>
<pre><code>server.servlet.context-path=/api
</code></pre>
<p>Is this wrong? Why? Can you explain a solution to me?</p>
| Luca Pasini | <p>First, some context: Kubernetes does not include an Ingress controller, so this is all specific to ingress-nginx, which I assume you are running given the annotation you are asking about is specific to that project. Other Ingress controllers will work differently.</p>
<p>That said, you can find the documentation for the rewrite feature here <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/#deployment" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/examples/rewrite/#deployment</a>. The rewrite is implemented in Nginx; more specifically, the ingress-nginx controller takes all your Ingress objects and generates an nginx config file, which is then handed off to a standard nginx acting as the proxy.</p>
<p>The behavior you have there will be equivalent to <code>s|^/api|/|</code>; however, as noted in the docs, the behavior changed in version 0.22, so if you're on a newer version it probably won't work as you expect. Either way, the <code>/api</code> prefix is rewritten away before the request reaches Spring, so with <code>server.servlet.context-path=/api</code> nothing matches and you get the 404.</p>
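<p>Since your Spring application already serves everything under <code>/api</code> (via <code>server.servlet.context-path=/api</code>), the simplest option may be to drop the rewrite annotation entirely so the path reaches Spring unchanged — a minimal sketch under that assumption:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: smf-ingress
  # no rewrite-target annotation: the request path is forwarded to the backend as-is
spec:
  rules:
  - http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: rest-service
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ng-service
            port:
              number: 4200
</code></pre>
<p>If you do need to strip a prefix for some backend, newer ingress-nginx versions express that with capture groups, e.g. <code>path: /api(/|$)(.*)</code> together with <code>nginx.ingress.kubernetes.io/rewrite-target: /$2</code>, as shown in the rewrite documentation linked above.</p>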
| coderanger |
<p>I have a Kubernetes cron job that creates a zip file, which takes about 1 hour. After its completion I want to upload this zip file to an AWS S3 bucket.</p>
<p>How do I tell the cron job to only do the s3 command after the zip is created?</p>
<p>Should the s3 command be within the same cron job?</p>
<p>Currently my YAML looks like this:</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: create-zip-upload
spec:
schedule: "27 5 * * *" # everyday at 05:27 AM
concurrencyPolicy: Forbid
jobTemplate:
spec:
template:
spec:
containers:
- name: mycontainer
image: 123456789.my.region.amazonaws.com/mycompany/myproject/rest:latest
args:
- /usr/bin/python3
- -m
- scripts.createzip
</code></pre>
| William Ross | <p>Kubernetes doesn't have a built-in concept of dependencies between resources: there isn't an official or clean way to have an event in one resource trigger an action in another.</p>
<p>Because of this, the best solution is to run the S3 command inside the same CronJob.</p>
<p>There are two ways to do this:</p>
<ol>
<li>Add the S3 upload logic to your existing container, so it only runs after the zip step succeeds (see the sketch below).</li>
<li>Create a new container in the same CronJob that watches for the file and then runs the S3 command.</li>
</ol>
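<p>For option 1, a minimal sketch of what the container section could look like — it assumes the image has the AWS CLI and credentials available, and <code>/data/output.zip</code> and <code>s3://my-bucket</code> are placeholder names. The <code>&&</code> makes the upload run only if the zip step exits successfully; the rest of the CronJob stays as in your question:</p>
<pre><code>      containers:
      - name: mycontainer
        image: 123456789.my.region.amazonaws.com/mycompany/myproject/rest:latest
        # run the zip step, then upload only on success
        command: ["/bin/sh", "-c"]
        args:
        - /usr/bin/python3 -m scripts.createzip && aws s3 cp /data/output.zip s3://my-bucket/output.zip
</code></pre>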
| Swiss |
<p>I am using the Jenkins <a href="https://github.com/jenkinsci/kubernetes-plugin" rel="nofollow noreferrer">kubernetes-plugin</a>. Is it possible to build a Docker image from a Dockerfile and then run steps inside the created image? The plugin requires you to specify an image in the pod template, so my first try was to use Docker-in-Docker, but the step <code>docker.image('jenkins/jnlp-slave').inside() {..}</code> fails:</p>
<pre><code>pipeline {
agent {
kubernetes {
//cloud 'kubernetes'
label 'mypod'
yaml """
apiVersion: v1
kind: Pod
spec:
containers:
- name: docker
image: docker:1.11
command: ['cat']
tty: true
volumeMounts:
- name: dockersock
mountPath: /var/run/docker.sock
volumes:
- name: dockersock
hostPath:
path: /var/run/docker.sock
"""
}
}
stages {
stage('Build Docker image') {
steps {
git 'https://github.com/jenkinsci/docker-jnlp-slave.git'
container('docker') {
sh "docker build -t jenkins/jnlp-slave ."
docker.image('jenkins/jnlp-slave').inside() {
sh "whoami"
}
}
}
}
}
}
</code></pre>
<p>Fails with:</p>
<pre><code>WorkflowScript: 31: Expected a symbol @ line 31, column 11.
docker.image('jenkins/jnlp-slave').inside() {
</code></pre>
| Lars Bilke | <p>As pointed out by Matt in the comments, this works. In a declarative pipeline, scripted steps such as <code>docker.build</code> and <code>image.inside()</code> have to be wrapped in a <code>script { }</code> block, which is what the <code>Expected a symbol</code> error was complaining about:</p>
<pre><code>pipeline {
agent {
kubernetes {
//cloud 'kubernetes'
label 'mypod'
yaml """
apiVersion: v1
kind: Pod
spec:
containers:
- name: docker
image: docker:1.11
command: ['cat']
tty: true
volumeMounts:
- name: dockersock
mountPath: /var/run/docker.sock
volumes:
- name: dockersock
hostPath:
path: /var/run/docker.sock
"""
}
}
stages {
stage('Build Docker image') {
steps {
git 'https://github.com/jenkinsci/docker-jnlp-slave.git'
container('docker') {
script {
def image = docker.build('jenkins/jnlp-slave')
image.inside() {
sh "whoami"
}
}
}
}
}
}
}
</code></pre>
| Lars Bilke |
<p>Is there a way to remove Kubernetes cluster contexts from <a href="https://github.com/ahmetb/kubectx" rel="nofollow noreferrer">kubectx</a>? Or can this only be done by manually removing them from the kubeconfig?</p>
| Wunderbread | <p>There is a <code>delete</code> command for kubectx. You can see the kubectx help with <code>kubectx --help</code>.</p>
<p>For reference, the syntax is:</p>
<pre><code>kubectx -d <NAME> [<NAME...>]
</code></pre>
<p>e.g., <code>kubectx -d</code> followed by one or more kube context names. This removes the context entry from your kubeconfig for you, so there is no need to edit the file by hand.</p>
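<p>For example, assuming two contexts named <code>dev-cluster</code> and <code>old-cluster</code> (placeholder names), both can be removed in one call:</p>
<pre><code>kubectx -d dev-cluster old-cluster
</code></pre>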
| Swiss |
<p>emptyDir vs hostPath (and other) volume type usage on a kubernetes deployment</p>
<blockquote>
<p>The official documentation states that, when using emptyDir with a Pod,
<strong>a container crashing does NOT remove a Pod from a node, so the data in an emptyDir volume is safe across container crashes</strong>.</p>
</blockquote>
<p>So I was wondering: <strong>can we somehow force the content of a Pod to survive an upgrade (rollout) by using the emptyDir volume type together with node selector/affinity to pin the Pod to a given node?</strong> Or is hostPath (or another volume type) what we need to consider at design time to make sure data is persisted even during a rollout, where the Pod is recreated regardless of node pinning? (We have the flexibility to pin this app to a large node in the cluster.)</p>
| DT. | <p>As you may know, when you upgrade a Deployment, it creates a new Pod and, once that Pod is running, terminates your previous Pod. In that case, the <code>emptyDir</code> is removed along with the terminated Pod. As you mention, you could use a <code>hostPath</code> together with the node selector/affinity; in that scenario the folder would also be shared by all the instances of your pod (if replicas > 1). This is actually the answer from @Matt.</p>
<p>The <code>emptyDir</code> is more or less a temporary folder tied to the lifecycle of your application's Pod. You shouldn't store data there that needs to survive a restart. The Kubernetes documentation also states that a Pod can be removed from a node or killed in case of issues (e.g. auto-healing).</p>
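<p>If you do go the <code>hostPath</code> + node pinning route, here is a minimal sketch of the relevant part of a Deployment spec — the node label, host path, image and names are placeholders, and it assumes the directory on the node is writable by the container:</p>
<pre><code>spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: big-node-1   # pin every replacement Pod to the chosen node
      containers:
      - name: app
        image: myapp:latest
        volumeMounts:
        - name: app-data
          mountPath: /data
      volumes:
      - name: app-data
        hostPath:
          path: /mnt/app-data        # lives on the node, not in the Pod
          type: DirectoryOrCreate
</code></pre>
<p>Because the data lives on the node rather than in the Pod, a rollout that recreates the Pod on the same node will see the same <code>/mnt/app-data</code> content.</p>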
| Nordes |