<p>I'm running Prometheus in a kubernetes cluster. All is running fine and my UI pods are counting visitors. </p> <p><a href="https://i.stack.imgur.com/Eb8l4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Eb8l4.png" alt="enter image description here"></a></p> <p>Please ignore the title; what you see here is the query at the bottom of the image. It's a counter. The gaps in the graph are due to pods restarting. I have two pods running simultaneously!</p> <p>Now suppose I would like to count the total number of visitors, so I need to sum over all the pods.</p> <p><a href="https://i.stack.imgur.com/FA5qY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FA5qY.png" alt="enter image description here"></a></p> <p>This is what I expect considering the first image, right?</p> <p>However, I don't want the graph to drop when a pod restarts. I would like to have something cumulative over a specified amount of time (somehow ignoring pods restarting). Hope this makes sense. Any suggestions?</p> <p><strong>UPDATE</strong></p> <p>Below I have applied the suggestion from the answer:</p> <p><a href="https://i.stack.imgur.com/H1u52.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H1u52.png" alt="enter image description here"></a></p> <p>It's a bit hard to see because I've plotted everything there, but the suggested answer <code>sum(rate(NumberOfVisitors[1h])) * 3600</code> is the continuous green line there. What I don't understand now is why its value is 3. Also, why does the value increase after 21:55, when I can see some values before that? </p> <p>As the approach seems to be OK, I noticed that the actual increase is indeed 3, going from 1 to 4. In the graph below I've used just one time series to reduce noise.</p> <p><a href="https://i.stack.imgur.com/DZrmm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DZrmm.png" alt="enter image description here"></a></p>
Jeanluca Scaljeri
<p>Rate, then sum, then multiply by the time range in seconds. That will handle rollovers on counters too.</p>
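<p>Concretely, with the metric name from the question, the hourly total looks like this (note the <code>[1h]</code> range and the 3600-second multiplier must describe the same window, so adjust both together):</p> <pre><code>sum(rate(NumberOfVisitors[1h])) * 3600
</code></pre>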
coderanger
<p>We have the following deployment <code>yaml</code>:</p> <pre><code>---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{DEP_ENVIRONMENT}}-{{SERVICE_NAME}}
  namespace: {{DEP_ENVIRONMENT}}
  labels:
    app: {{DEP_ENVIRONMENT}}-{{SERVICE_NAME}}
spec:
  replicas: {{NUM_REPLICAS}}
  selector:
    matchLabels:
      app: {{DEP_ENVIRONMENT}}-{{SERVICE_NAME}}
  template:
    metadata:
      labels:
        app: {{DEP_ENVIRONMENT}}-{{SERVICE_NAME}}
    spec:
      # [START volumes]
      volumes:
      - name: {{CLOUD_DB_INSTANCE_CREDENTIALS}}
        secret:
          secretName: {{CLOUD_DB_INSTANCE_CREDENTIALS}}
      # [END volumes]
      containers:
      # [START proxy_container]
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: ["/cloud_sql_proxy",
                  "-instances=&lt;PROJECT_ID&gt;:{{CLOUD_DB_CONN_INSTANCE}}=tcp:3306",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        # [START cloudsql_security_context]
        securityContext:
          runAsUser: 2   # non-root user
          allowPrivilegeEscalation: false
        # [END cloudsql_security_context]
        volumeMounts:
        - name: {{CLOUD_DB_INSTANCE_CREDENTIALS}}
          mountPath: /secrets/cloudsql
          readOnly: true
      # [END proxy_container]
      - name: {{DEP_ENVIRONMENT}}-{{SERVICE_NAME}}
        image: {{IMAGE_NAME}}
        ports:
        - containerPort: 80
        env:
        - name: CLOUD_DB_HOST
          value: 127.0.0.1
        - name: DEV_CLOUD_DB_USER
          valueFrom:
            secretKeyRef:
              name: {{CLOUD_DB_DB_CREDENTIALS}}
              key: username
        - name: DEV_CLOUD_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: {{CLOUD_DB_DB_CREDENTIALS}}
              key: password
        # [END cloudsql_secrets]
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "supervisord"]
</code></pre> <p>The last <code>lifecycle</code> block is new and is causing the database connection to be refused. This config works fine without the <code>lifecycle</code> block. I'm sure that there is something stupid here that I am missing, but for the life of me I cannot figure out what it is.</p> <p>Note: we are only trying to start Supervisor like this as a workaround for huge issues when attempting to start it normally.</p>
lola_the_coding_girl
<p>Lifecycle hooks are intended to be short foreground commands. You cannot start a background daemon from them, that has to be the main <code>command</code> for the container.</p>
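<p>A minimal sketch of that fix (assuming <code>supervisord</code> is on the image's <code>PATH</code>; <code>-n</code>/<code>--nodaemon</code> keeps it in the foreground so it can run as the container's main process):</p> <pre><code>      - name: {{DEP_ENVIRONMENT}}-{{SERVICE_NAME}}
        image: {{IMAGE_NAME}}
        # Run supervisord as the container's main process instead of
        # launching it from a postStart hook.
        command: ["supervisord", "-n"]
</code></pre>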
coderanger
<p>Is it possible to have more than one instance of the actual <code>Service</code> object, created in deployments, that manages the access to pods (containers)? And what happens if the Service object itself is somehow deleted or destroyed? </p> <p>This is the service object specified in a deployment YAML file:</p> <pre class="lang-yaml prettyprint-override"><code>kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
</code></pre>
JayD
<p>The Service object only exists as an abstraction within the Kubernetes API. The actual implementation is distributed across your entire cluster, generally in the form of iptables rules created by kube-proxy on every node.</p>
coderanger
<p>Let's say I have a pod with one volume created from a secret, which comes to life in ~5 sec.</p> <p>Now if I create a bigger number of pods and secrets, everything gets slower. Which is kind of normal, I guess.</p> <p>Let's say I create 20 pods with 500 secrets. It takes ~150 sec for them to get into the Running state. I get a ton of warnings like:</p> <pre><code>Warning  FailedMount  45s (x1 over 1m)  kubelet  Unable to attach or mount volumes: unmounted volumes=[abc-secrets-vol], unattached volumes=[abc-secrets-vol default-token-cp4nw nginx-config]: timed out waiting for the condition
</code></pre> <p>But nothing serious; by the end everything works fine.</p> <p>Now the questions:<br></p> <ul> <li>Are there any cluster-level settings which can speed things up?</li> <li>Which part of K8s is responsible for this?</li> <li>Which part is slow in this case: <code>volume creation</code> or <code>volume attach</code>? (I excluded secret creation because the secrets got created pretty fast.)</li> </ul>
beatrice
<p>The kubelet handles this. There is no reason for anyone to have load-tested such a weird edge case because why would you ever do that? The slowdown is likely the Kubelet getting access to all the data because that has to wait for the scheduler (because of NodeRestriction) and then it might be hitting a client-side rate limit or APF rate limit after that? Honestly not sure, again we don't load test every ludicrous scenario because there are an unlimited number of those :) Nor do we put in options to control them without good reason.</p>
coderanger
<p>Can multiple long-waiting threads (blocked on a remote REST call response, non-CPU-bound) throttle the CPU? This CPU throttling leads to pod restarts, as the health check takes too long to respond.</p>
pranav prashant
<p>Something blocked in a waiting syscall (select and friends, sleep(), blocking read or write) does not count as using any CPU time, the task (how Linux thinks about threads internally) won't be marked as runnable until something interrupts the wait.</p>
coderanger
<p>I'd like to direct traffic from a load balancer within Kubernetes to a deployment. However, rather than attempting to achieve a uniform load across all pods of a deployment, I'd like each connection to be assigned to, and maintain a connection with, a specific pod. I'll be sending gRPC requests to a stateful instance on the pod and it's critical that the client's gRPC requests are not sent to other pods.</p> <p>My current implementation is probably needlessly complex. Here's the pseudocode:</p> <ol> <li>Cluster initialized with a custom python scheduler.</li> <li>Several pods with the stateful application are created, each with a node port service and unique node port.</li> <li>Client talks to the python scheduler using a socket interface and is assigned a port.</li> <li>Client talks to the pod using the assigned nodeport.</li> <li>Client (or the scheduler) terminates the pod.</li> </ol> <p>I'm limited by the number of ports and am unable to direct traffic using AKS due to their node port limitations. Additionally, the advantage of the scheduler is that the client can request pods of varying resources, but it's too much to test and maintain.</p> <p>Is there a better solution to direct external traffic to individual stateful pods?</p>
Alex Kaszynski
<p>The default iptables service proxy implementation uses a very simple randomized round-robin algorithm for selecting which pod to use. If you use the IPVS implementation instead, that does offer a lot more options, though it is unlikely to be an option on a hosted provider like AKS. So that would leave you with using a userspace proxy that supports gRPC, like Traefik or Istio Ingress. Picking one is out of scope for SO, but most of those proxies support some form of connection stickiness.</p>
coderanger
<p>I have several celery workers running in minikube, and they are working on tasks passed using the rabbitMQ. Recently I updated some of the code for the celery workers and changed the image. When I do <code>helm upgrade release_name chart_path</code>, all the existing worker pods are terminated and all the unfinished tasks are abandoned. I was wondering if there is a way to upgrade the helm chart without terminating the old pods? </p> <ol> <li>I know that <code>helm install -n new_release_name chart_path</code> will give me a new set of celery workers; however, due to some limitations, I am not allowed to deploy pods in a new release.</li> <li>I tried running <code>helm upgrade release_name chart_path --set deployment.name=worker2</code> because I thought having a new deployment name will stop helm from deleting the old pods, but this won't work either.</li> </ol>
RonZhang724
<p>This is just how Kubernetes Deployments work. What you should do is to fix your Celery worker image so that it waits to try and complete whatever tasks are pending before actually shutting down. This should already probably be the case unless you did something funky such that the SIGTERM isn't making it to Celery? See <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods</a> for details.</p>
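<p>If draining takes a while, you may also need to give the pod more time between the SIGTERM and the SIGKILL; a sketch (the 300 seconds is illustrative, size it to your longest task):</p> <pre><code>spec:
  # How long Kubernetes waits after SIGTERM before hard-killing the pod
  terminationGracePeriodSeconds: 300
  containers:
  - name: worker
    image: my-celery-image   # hypothetical image name
</code></pre>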
coderanger
<p>I have a configmap which contains a toml file</p> <p>something like </p> <pre><code>apiVersion: v1
kind: ConfigMap
data:
  burrow.toml: |
    [zookeeper]
    servers=[abc.2181, cde.2181]
    timeout=6
    root-path="/burrow"
</code></pre> <p>When trying to create a helm chart to generate this configmap, I put something like:</p> <pre><code>apiVersion: v1
kind: ConfigMap
data:
  burrow.toml: |
    [zookeeper]
    servers={{ .Values.config.zookeeperServers }}
    timeout=6
    root-path="/burrow"
</code></pre> <p>and in the values.yaml, I put:</p> <pre><code>zookeeperServers: [ "abc.2181", "cde.2181"]
</code></pre> <p>However, the rendered value became:</p> <pre><code>apiVersion: v1
kind: ConfigMap
data:
  burrow.toml: |
    [zookeeper]
    servers=[abc.2181 cde.2181]
    timeout=6
    root-path="/burrow"
</code></pre> <p>The comma is missing. Is there a good way to template this correctly? Thanks!</p>
ustcyue
<p>Try this, <code>servers=[{{ .Values.config.zookeeperServers | join "," }}]</code>. Quoting could get weird if you put TOML metachars in those values, but for simple things it should work.</p>
coderanger
<p>By default a Kubernetes cluster has a taint on master nodes that does not allow scheduling any pods to them.</p> <p>What is the reason for that?</p> <p>Docker Swarm, for example, allows running containers on manager nodes by default.</p>
yaskovdev
<p>Safety during failures. Imagine if a pod running on a control plane node started eating all your CPU. Easy to fix right, just <code>kubectl delete pod</code> and restart it. Except if that CPU outburst has kube-apiserver or Etcd locked up, then you have no way to fix the problem through the normal tools. As such, it's usually just safer to keep unvetted workloads off the control plane nodes.</p>
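<p>For completeness: if you do want to allow scheduling there anyway (say, on a single-node test cluster), you can remove the taint. A sketch, assuming the classic taint key (newer releases use <code>node-role.kubernetes.io/control-plane</code> instead of <code>master</code>):</p> <pre><code># The trailing "-" removes the taint from the node
kubectl taint nodes &lt;node-name&gt; node-role.kubernetes.io/master:NoSchedule-
</code></pre>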
coderanger
<p>Is there a way to specify a <code>hostPath</code> volume so that it includes the pod's name? For example, something like the following:</p> <pre><code>  volumes:
  - name: vol-test
    hostPath:
      path: /basedir/$(POD_NAME)
      type: Directory
</code></pre> <p>Simply using /basedir directly and then having the pod itself query for its name and create the directory doesn't satisfy my needs: I specifically want to map each of my containers' /tmp folders to specific volumes.</p> <p>I know that something similar works with <code>valueFrom</code> for environment variables (see <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">here</a>), but so far I haven't been able to find anything similar for volumes.</p> <p>Please let me know if anyone has any ideas. Thanks!</p>
Behram Mistree
<p>You set it as an env var first via valueFrom and then certain fields understand <code>$(FOO)</code> as an env var reference to be interpolated at runtime.</p> <p>EDIT: And <code>path</code> is not one of those fields. But <code>subPathExpr</code> is.</p>
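<p>A minimal sketch combining the two (<code>subPathExpr</code> is available in recent Kubernetes versions; the field paths follow the Downward API):</p> <pre><code>spec:
  containers:
  - name: app
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: vol-test
      mountPath: /tmp
      # Expanded at runtime from the env var defined above
      subPathExpr: $(POD_NAME)
  volumes:
  - name: vol-test
    hostPath:
      path: /basedir
      type: Directory
</code></pre>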
coderanger
<p>I got an error when creating a deployment. This is my Dockerfile, which I have run and tested locally; I also pushed it to DockerHub</p> <pre><code>FROM node:14.15.4

WORKDIR /usr/src/app

COPY package*.json ./

RUN npm install
RUN npm install pm2 -g

COPY . .

EXPOSE 3001

CMD [ &quot;pm2-runtime&quot;, &quot;server.js&quot; ]
</code></pre> <p>On my Raspberry Pi 3 Model B, I have installed k3s using <code>curl -sfL https://get.k3s.io | sh -</code>. Here is my controller-deployment.yaml</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-deployment
  labels:
    app: controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: controller
  template:
    metadata:
      labels:
        app: controller
    spec:
      containers:
      - name: controller
        image: akirayorunoe/node-controller-server
        ports:
        - containerPort: 3001
</code></pre> <p>After applying this file, the pod errors out.</p> <p>When I check the pod logs, it says</p> <pre><code>standard_init_linux.go:219: exec user process caused: exec format error
</code></pre> <p>Here is the response from describe pod</p> <pre><code>Name:         controller-deployment-8669c9c864-sw8kh
Namespace:    default
Priority:     0
Node:         raspberrypi/192.168.0.30
Start Time:   Fri, 21 May 2021 11:21:05 +0700
Labels:       app=controller
              pod-template-hash=8669c9c864
Annotations:  &lt;none&gt;
Status:       Running
IP:           10.42.0.43
IPs:
  IP:  10.42.0.43
Controlled By:  ReplicaSet/controller-deployment-8669c9c864
Containers:
  controller:
    Container ID:   containerd://439edcfdbf49df998e3cabe2c82206b24819a9ae13500b013b9bac1df6e56231
    Image:          akirayorunoe/node-controller-server
    Image ID:       docker.io/akirayorunoe/node-controller-server@sha256:e1c51152f9d596856952d590b1ef9a486e136661076a9d259a9259d4df314226
    Port:           3001/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Fri, 21 May 2021 11:24:29 +0700
      Finished:     Fri, 21 May 2021 11:24:29 +0700
    Ready:          False
    Restart Count:  5
    Environment:    &lt;none&gt;
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-txm85 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-txm85:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-txm85
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  &lt;none&gt;
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  5m33s                  default-scheduler  Successfully assigned default/controller-deployment-8669c9c864-sw8kh to raspberrypi
  Normal   Pulled     5m29s                  kubelet            Successfully pulled image &quot;akirayorunoe/node-controller-server&quot; in 3.072053213s
  Normal   Pulled     5m24s                  kubelet            Successfully pulled image &quot;akirayorunoe/node-controller-server&quot; in 3.018192177s
  Normal   Pulled     5m6s                   kubelet            Successfully pulled image &quot;akirayorunoe/node-controller-server&quot; in 3.015959209s
  Normal   Pulled     4m34s                  kubelet            Successfully pulled image &quot;akirayorunoe/node-controller-server&quot; in 2.921116157s
  Normal   Created    4m34s (x4 over 5m29s)  kubelet            Created container controller
  Normal   Started    4m33s (x4 over 5m28s)  kubelet            Started container controller
  Normal   Pulling    3m40s (x5 over 5m32s)  kubelet            Pulling image &quot;akirayorunoe/node-controller-server&quot;
  Warning  BackOff    30s (x23 over 5m22s)   kubelet            Back-off restarting failed container
</code></pre> <p>Here are the <a href="https://i.stack.imgur.com/o2Nwv.png" rel="nofollow noreferrer">error images</a></p>
Hoang Pham Huy
<p>You are trying to launch a container built for x86 (or x86_64, same difference) on an ARM machine. This does not work. Containers for ARM must be built specifically for ARM and contain ARM executables. While major projects are slowly adding ARM support to their builds, most random images you find on Docker Hub or whatever will not work on ARM.</p>
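<p>If you control the image, one approach (a sketch, assuming a recent Docker with buildx available) is to publish a multi-arch image that includes an ARM variant; <code>linux/arm/v7</code> matches a 32-bit Raspberry Pi OS, use <code>linux/arm64</code> for a 64-bit one:</p> <pre><code># Build and push images for both architectures under one tag
docker buildx build --platform linux/arm/v7,linux/amd64 \
  -t akirayorunoe/node-controller-server --push .
</code></pre>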
coderanger
<p>I'm running <code>rook-ceph-cluster</code> on top of <code>AWS</code> with a <code>3 masters - 3 worker node</code> configuration. I have created my cluster using <a href="https://rook.io/docs/rook/v1.0/ceph-examples.html" rel="nofollow noreferrer">this</a>.</p> <p>Each <code>worker node</code> is <code>100 GiB</code>.</p> <p>After setting everything up, I have my pods running (6 pods to be exact, 3 for masters and 3 for workers).</p> <p><strong>How can I crash/fail/stop those pods manually (to test some functionality)?</strong></p> <p>Is there any way <strong>I can add more load manually to those pods so that they crash?</strong></p> <p>Or <strong>can I somehow make them <code>Out Of Memory</code>?</strong></p> <p>Or <strong>can I simulate intermittent network failures and disconnection of nodes from the network?</strong></p> <p>Or <strong>any other ways, like writing some script that might prevent a pod from being created?</strong></p>
Rajat Singh
<p>You can delete pods manually as mentioned by Graham, but the rest are trickier. For simulating an OOM, you could <code>kubectl exec</code> into the pod and run something that will burn up RAM. Or you could set the limit down below what it actually uses. Simulating network issues would be up to your CNI plugin, but I'm not aware of any that allow failure injection. For preventing a pod from being created, you can set an affinity that is not fulfilled by any node.</p>
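<p>For the RAM-burning option, one quick (hypothetical) trick: <code>tail</code> on <code>/dev/zero</code> buffers input forever, so memory use grows until the container hits its limit and the kernel OOM-kills it:</p> <pre><code>kubectl exec -it &lt;pod-name&gt; -- sh -c 'tail /dev/zero'
</code></pre>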
coderanger
<p>I would like to develop a playbook rule for addressing how to manage socket securing for Docker and Kubernetes from either the standpoint of Docker For Mac or MiniKube - after auto-updates to any of the pieces of the puzzle. Maybe we need to throw out there the LLVM or the VM in question, if we say use Virtual Box <em>and</em> a particular Unix/Linux flavor. Then we may also need to look at enforcing SELinux or AppArmor or Seccomp. I just want to see where this rabbit hole may end us up in today, 2019-09-25, as I just embarked on the Kubernetes quest with minishift, co, and micro-services architecture with RHEL (I may sway away from the OS image to Alpine or something someone may suggest as a suitable alternative). The goal here is to provide support to a team from a system administration point of view, potentially calming some long-lived traditional concerns with real, manageable solutions to infrastructure migrations for larger businesses. </p> <p>RHEL D0092 course work. Months of reading documentation with Docker and watching the past four updates on my development machine go by without a workable solution, knowing the inevitability was to get a Kubernetes cluster going after chewing on Kerrisk's bible for a while on the subject matter - Datagrams and Stream sockets and the like (and how people like to port it well for their individual use cases). I'm a novice system admin here, don't let me fool you. I just am not afraid to ride the big waves, that's all.</p> <pre class="lang-sh prettyprint-override"><code>kubectl --namespace=kubedemo set image deployment/myvertex myvertex=burr/myvertx:v2
</code></pre> <p>or </p> <pre class="lang-py prettyprint-override"><code>import subprocess
import sys

if len(sys.argv) &gt; 1:
    name = sys.argv[1]
else:
    name = input("Update path/name for docker build:")

# Test this with a "dry run" first; you don't need a bunch of image clutter
# or Ctrl-C's. A "No such file or directory" trace means we need to call
# docker directly rather than through a wrapper.
proc = subprocess.run(["docker", "build", "-t", name, "."],
                      encoding="utf-8", stdout=subprocess.PIPE)
</code></pre> <p>Just a thought on automation of a playbook rule in some type of sequential fashion - in python if I can get suggestions, but bash is fine too.</p>
Rudy
<p>Kubernetes works over an HTTP API and normal IP sockets, not a local domain socket. So there isn't really anything to lock down. DfM's control socket is already tightly locked down to the active GUI user because DfM runs inside a Mac app container (totally different use of the term "container"). Basically there isn't anything you need to do, it's already as safe as it's getting.</p>
coderanger
<p>External firewall logs show a blocked connection from &lt;node IP&gt;:&lt;big port&gt;. The current cluster uses calico networking.</p> <p>How do I detect which pod is trying to connect?</p>
Pav K.
<p>This would usually be pretty hard to work out, you would have to check the NAT table on the node where the packets exited to the public internet.</p>
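<p>As a starting point (a sketch, assuming conntrack-tools is installed on the node), you can dump the kernel's connection-tracking table on the node the traffic exited from and look for the source port your firewall logged; the matching entry shows the original pod IP before SNAT:</p> <pre><code>sudo conntrack -L | grep &lt;big-port&gt;
</code></pre>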
coderanger
<p>I have two services named Product and Order. The Order table inside OrderDb has price and productId columns for storing the price and id of the product that was ordered. The Order service has 3 replicas.</p> <p>Now, suppose a product is ordered and its id is 80, and a series of sequential update events is fired from the Product service to the Order service for that particular product:</p> <pre><code>event1:{productId: 80, price: 500}
event2:{productId: 80, price: 600}
event3:{productId: 80, price: 400}
event4:{productId: 80, price: 900}
event5:{productId: 80, price: 100}
</code></pre> <p>so the end price should be 100 for that product, but sometimes these events are processed in random order, such as</p> <pre><code>event1:{productId: 80, price: 500}
event2:{productId: 80, price: 600}
event5:{productId: 80, price: 100}
event4:{productId: 80, price: 900}
event3:{productId: 80, price: 400}
</code></pre> <p>Since event 3 is processed last, the price becomes 400.</p>
Rafiq
<p>This is generally up to your database. I see you put NATS in the tags, so I'm assuming you mean you have some kind of worker queue model, but you probably have a database of record behind that with its own consistency model. With event streaming systems where you want to defend against out-of-order or multi-delivery, you would include more info in the queue message, such as an object version, or just the previous price. In the latter, simpler case it would be like</p> <pre><code>event1:{productId: 80, price: 500, oldPrice: 0}
event2:{productId: 80, price: 600, oldPrice: 500}
event3:{productId: 80, price: 400, oldPrice: 600}
event4:{productId: 80, price: 900, oldPrice: 400}
event5:{productId: 80, price: 100, oldPrice: 900}
</code></pre> <p>That would let your code refuse to apply the operation if the base state no longer matches. But that's pretty limiting; you don't want everything to fail after a re-order, you just want convergent behavior. This is the point where I just yell &quot;VECTOR CLOCKS&quot; and leap out of the window. Designing distributed, convergent systems is indeed quite hard; look up the term CRDT as a starting point.</p>
coderanger
<p>On my master node I am running <strong>Kubernetes v1.16.3</strong>, to which users submit jobs from time to time; this is working right now. However, in my case I have many users, and their jobs should not only have priorities but also a <strong><em>minimum</em> lifetime</strong>.</p> <p>The <em><strong>minimum</strong></em> lifetime should guarantee that a job will run for at least, for example, 5 hours. If our resources are fully used and a user submits a job with a higher priority than the currently running jobs, then only those running jobs that have exceeded the minimum lifetime should be candidates for eviction.</p> <p>I am not able to find a solution for this. I can only find a <em>maximum</em>-lifetime solution (<a href="https://medium.com/@ptagr/give-your-kubernetes-pods-a-lifetime-8c039d622faf" rel="nofollow noreferrer">https://medium.com/@ptagr/give-your-kubernetes-pods-a-lifetime-8c039d622faf</a>), where the job/pod is evicted after the provided time has expired. But that's not what I want. I would like to create a job/pod that is protected for 5 hours of running time; after the specified time has elapsed the job/pod <em><strong>should still be running</strong></em>, but when another new job comes by (that has, for example, a minimum lifetime of 3 hours), the old running job/pod should be evicted and the newly created job/pod should take its place and run for at least 3 hours before being a candidate for getting killed by another job.</p> <p>Is it even possible to achieve this in Kubernetes? Or is there a workaround for achieving it?</p>
TheHeroOfTime
<p>Kubernetes' scheduler doesn't understand time, so not directly. You can set PriorityClasses and PodDisruptionBudgets (in this case the budget being <code>maxUnavailable: 0</code>) which control voluntary evictions. It might be possible to write something which changes the PriorityClass value after a certain amount of time, but I don't know anything off-the-shelf for that; it would be a custom operator.</p>
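<p>A sketch of those two pieces (names and values are illustrative; the PDB API group matches the v1.16 cluster from the question, and note the scheduler only best-effort respects PDBs during preemption):</p> <pre><code>apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: protected-job
value: 1000000
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: protect-jobs
spec:
  maxUnavailable: 0        # forbid voluntary evictions entirely
  selector:
    matchLabels:
      job-group: protected # hypothetical label on the job pods
</code></pre>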
coderanger
<p>I am using k8s with version 1.11 and CephFS as storage.</p> <p>I am trying to mount a directory created on the CephFS in a pod. To achieve this I have written the following volume and volume mount config in the deployment configuration</p> <p>Volume</p> <pre><code>{
    "name": "cephfs-0",
    "cephfs": {
        "monitors": [
            "10.0.1.165:6789",
            "10.0.1.103:6789",
            "10.0.1.222:6789"
        ],
        "user": "cfs",
        "secretRef": {
            "name": "ceph-secret"
        },
        "readOnly": false,
        "path": "/cfs/data/conf"
    }
}
</code></pre> <p>volumeMounts</p> <pre><code>{
    "mountPath": "/opt/myapplication/conf",
    "name": "cephfs-0",
    "readOnly": false
}
</code></pre> <p>The mount is working properly. I can see the ceph directory, i.e. /cfs/data/conf, getting mounted on /opt/myapplication/conf, but the following is my issue.</p> <p>I have configuration files already present as part of the docker image at the location /opt/myapplication/conf. When the deployment tries to mount the ceph volume, all the files at the location /opt/myapplication/conf disappear. I know it's the behavior of the mount operation, but is there any way to persist the already existing files in the container on the volume which I am mounting, so that another pod mounting the same volume can access the configuration files? i.e. the files which are already there inside the pod at the location /opt/myapplication/conf should be accessible on the CephFS at location /cfs/data/conf.</p> <p>Is it possible?</p> <p>I went through the docker documentation and it mentions that </p> <blockquote> <p>Populate a volume using a container If you start a container which creates a new volume, as above, and the container has files or directories in the directory to be mounted (such as /app/ above), the directory’s contents are copied into the volume. The container then mounts and uses the volume, and other containers which use the volume also have access to the pre-populated content.</p> </blockquote> <p>This matches my requirement, but how do I achieve it with k8s volumes?</p>
Yudi
<p>Unfortunately Kubernetes' volume system differs from Docker's, so this is not possible directly.</p> <p>However, in case of a single file <code>foo.conf</code> you can use:</p> <ul> <li>a <code>mountPath</code> ending in this file name and</li> <li>a <code>subPath</code> containing this file name, like this:</li> </ul> <pre class="lang-yaml prettyprint-override"><code> volumeMounts: - name: cephfs-0 mountPath: /opt/myapplication/conf/foo.conf subPath: foo.conf </code></pre> <p>Repeat that for each file. But if you have a lot of them, or if their names can vary, then you have to handle this at runtime or use templating tools. Usually that means mounting it somewhere else and setting up symlinks before your main process starts.</p>
coderanger
<p><a href="https://i.stack.imgur.com/bd6P3.jpg" rel="nofollow noreferrer">enter image description here</a>I don't' see why adding selector when we are creating <strong>replicaSet</strong> I thought maybe we can select different pod but we can't so I don't see what is used for</p> <pre><code>apiVersion: apps/v1 kind: ReplicaSet metadata: name: rs-name spec: replicas: 2 selector: // here the selector matchLabels: podptr: my-pod-l template: metadata: name: pods-name labels: podptr: my-pod-l // here it's lable spec: containers: - name: cons-name image: nginx ports: - containerPort: 50 </code></pre>
Mohamed Ghatbaoui
<p>Because you can have more than one label on the resulting pods and it doesn't know which of them to use for tracking. The selector labels are read-only once initialized, but other labels you can add and remove as needed for other purposes.</p>
coderanger
<p>I have an NFS based PVC in a kubernetes cluster that I need to freeze to take a snapshot of. I tried fsfreeze, but I get &quot;operation not supported&quot;. I assume because it is trying to freeze the entire nfs instead of just the mount. I have checked and I can freeze the filesystem on the side of the NFS server. Is there a different way that I can stop writes to the filesystem to properly sync everything?</p>
deef0000dragon1
<p>From <a href="https://github.com/vmware-tanzu/velero/issues/2042" rel="nofollow noreferrer">https://github.com/vmware-tanzu/velero/issues/2042</a> and some other quick poking around, fsfreeze doesn't support NFS mounts. In general it seems to mostly on work with real local volumes which you'll almost never use with Kubernetes.</p>
coderanger
<p>Our goal is to run kubernetes in AWS and Azure with minimal customization (setting up kubernetes managed env), support and maintenance. We need portability of containers across cloud providers. </p> <p>Our preferred cloud provider is AWS. We are planning on running containers in EKS. We wanted to understand the customization effort required to run these containers in AKS. </p> <p>Would you recommend choosing a container management platform like Pivotal Cloud Foundry or Redhat OpenShift or run them on AWS EKS or AKS where customization is less to run containers across different cloud providers.</p>
siv
<p>You need to define a common set of storage classes that map to similar volume types on each provider. If you are using some kind of provider-based Ingress controller, those can vary, so I would recommend using an internal one like nginx or traefik. If you are using customization annotations for things like networking, those can vary too, but using those is pretty rare. Otherwise k8s is k8s.</p>
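<p>For the storage classes, a sketch (using the in-tree provisioner names; CSI provisioners work the same way): give the class the same name on each cluster so your manifests stay portable.</p> <pre><code># Applied on the EKS cluster
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
---
# Applied on the AKS cluster
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/azure-disk
</code></pre>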
coderanger
<p>My question is related to Kubernetes and the units of the metrics used for the HPA (autoscaling).</p> <p>When I run the command</p> <p><code>kubectl describe hpa my-autoscaler</code></p> <p>I get (a part of more information) this:</p> <pre><code>...
Metrics:                                              ( current / target )
  resource memory on pods:                            318067507200m / 1000Mi
  resource cpu on pods (as a percentage of request):  1% (1m) / 80%
...
</code></pre> <p>In this example, when you can see the metrics for the <strong>resource memory on pods</strong>, you can see that the unit for the <code>current</code> value is <code>m</code>, which is "millis" (as is described in the <a href="https://kubernetes.io/docs/reference/glossary/?core-object=true#term-quantity" rel="nofollow noreferrer">official documentation</a>), but the unit used for the <code>target</code> value is <code>Mi</code>, which is "Mebis"</p> <p>Is there any problem with the usage of different units?</p> <p>Thanks!</p>
Lobo
<p>No, they are just different multipliers. The actual code is using a raw number of bytes under the hood.</p>
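<p>To convert between them: <code>m</code> is 1/1000 of the base unit (bytes here) and <code>Mi</code> is 2^20 bytes, so the two values from the question compare like this:</p> <pre><code>318067507200m = 318067507200 / 1000 bytes ≈ 318 MB ≈ 303 Mi
1000Mi        = 1000 * 2^20 bytes ≈ 1049 MB
</code></pre> <p>i.e. current usage is roughly 303Mi against a 1000Mi target.</p>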
coderanger
<p>What is the best way to send a Slack notification when a k8s cluster node (master or worker) is not ready?</p> <p>I already have Prometheus and Alertmanager up and running. Ideally I would use them to do that.</p>
ThatChrisGuy
<p>Use kube-state-metrics and an alert rule. The query to start with is something like <code>kube_node_status_condition{condition="Ready",status!="true"} &gt; 0</code> but you can play with it as needed.</p>
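<p>A sketch of the corresponding alerting rule (the <code>for:</code> duration and severity label are illustrative; route the alert to your Slack receiver in Alertmanager as usual):</p> <pre><code>groups:
- name: node-health
  rules:
  - alert: KubeNodeNotReady
    expr: kube_node_status_condition{condition="Ready",status!="true"} &gt; 0
    for: 5m
    labels:
      severity: critical
    annotations:
      summary: "Node {{ $labels.node }} is not Ready"
</code></pre>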
coderanger
<p>I'm new to DevOps work and am having a though time figuring out how the whole final architecture should look like. My project currently runs on a single Kubernetes Cluster and a single node with a single pod, in the very common Nginx reverse proxy + UWSGI Django app. I have to implement a scaling architecture. My understanding is that I should use an <code>Ingress Controller</code> behind a <code>LoadBalancer</code> (I'm hosted at OVH, they do provide a built-in LoadBalancer). The <code>Ingress Controller</code> will then distribute the traffic to my pods.</p> <p>Question 1: if my Django app listens on port 8000, setting <code>ReplicaSet</code> to 2 does not work because the port is already taken. This makes me believe I'm only supposed to have one pod per node but <a href="https://stackoverflow.com/questions/53637090/do-replicas-decrease-traffic-on-a-single-node-kubernetes-cluster">some information</a> says otherwise. How can I run multiple replicas on the same node?</p> <p>Question2: let's say I deploy 9 more nodes. Should all my 10 nodes be behind 1 Ingress Controller (and 1 Load Balancer) or should each node have its own Ingress Controller ?</p> <p>Question3: if I have only one Ingress Controller, the Load Balancer does not really &quot;balance&quot; any load, its sole purpose is to expose my service to the Internet, is that normal?</p> <p>Question4: what happens when the Ingress Controller is overloaded? Do I duplicate everything and then the Load Balancer distributes the requests on the 2 Controllers?</p> <p><a href="https://stackoverflow.com/questions/51559671/true-loadbalancing-in-kubernetes">This</a> and <a href="https://stackoverflow.com/questions/45079988/ingress-vs-load-balancer">this</a> is a good starting point, but still does not answer my questions directly.</p>
mrj
<ol> <li><p>Every pod has its own networking setup so two replicas (i.e. two pods) can both listen on the same port. <em>Unless</em> you've enabled host networking mode which should not be used here.</p> </li> <li><p>Not directly, the ingress controller can be a lot of things. If you're using a self-hosted one (I see the ingress-nginx tag so assuming you are using that) then each controller replica is an independent copy of the proxy setup. You would want 2 at least for redundancy but unless you need to break up your traffic because those two can't keep up with it (would have to be truly huge request volume) that's probably all you need.</p> </li> <li><p>Yes, that's fine on the K8s side, though as mentioned if you have multiple nodes available you probably want at least two ingress controller replicas in case one node dies unexpectedly.</p> </li> <li><p>The edge LoadBalancer is round-robin-ing requests between all the nginx proxy instances so if you need more capacity you would spawn more replicas (assuming you have spare CPU on the cluster, if not then more nodes first then more replicas).</p> </li> </ol>
coderanger
<p>I am currently working on dynamic scaling of services with custom metrics, and I want to send some data <strong>from the HPA to my external API service</strong>. Is there any way, or any POST request, which will send the <em>current replica count</em> to my external API? The HPA has its sync period and sends a GET request to the API to fetch the metric value, so is there any way to also have some POST request so that I can get some data from the HPA?</p>
Museb Momin
<p>You don't per se, your service can watch the Kubernetes API for updates like everything else though. All components in Kubernetes communicate through the API.</p>
coderanger
<p>Stateless is the way to go for services running in pods; however, I have been trying to move a stateful app which needs to perform session persistence if one pod goes down, for resiliency reasons.</p> <p>In the WebSphere world, IHS can be used to keep track of the session, and if a node goes down the session can be recreated on the live clone. </p> <p>Is there an industry-standard way to handle this issue without having to refactor the application's code, by persisting the session using some sidecar pod?</p>
Sudheej
<p>Cookie-based sessions are just that, based on cookies. Which are stored by the user's browser, not your app. If you mean a DB-based session with a cookie session ID or similar, then you would need to store things in some kind of central database. I would recommend using an actual database like postgres, but I suppose there is nothing stopping you from using a shared volume :)</p>
coderanger
<p>So I have an API that's the gateway for two other APIs. Using docker in wsl 2 (ubuntu), when I build my Gateway API:</p> <pre><code>docker run -d -p 8080:8080 -e A_API_URL=$A_API_URL -e B_API_URL=$B_API_URL registry:$(somePort)//gateway
</code></pre> <p>I have 2 environment variables that are the API URIs of the two APIs. I just don't know how to make this work in the config.</p> <pre><code>env:
- name: A_API_URL
  value: &lt;need help&gt;
- name: B_API_URL
  value: &lt;need help&gt;
</code></pre> <p>I get 500 or 502 errors when accessing them in the network. I tried specifying the value of the env var as:</p> <ul> <li>their respective service's name.</li> <li>the complete URI (http://$(addr):$(port))</li> <li>the relative path: /something/anotherSomething</li> </ul> <p>Each API is deployed with a Deployment controller and a service. I'm at a loss; any help is appreciated.</p>
DylanB
<p>You just have to hardwire them. Kubernetes doesn't know anything about your local machine. There are templating tools like Helm that could inject things the way Bash does in your <code>docker run</code> example, but that's generally not a good idea, since if anyone other than you runs the same command they could see different results. The values should look like <code>http://servicename.namespacename.svc.cluster.local:port/whatever</code>. So if the service is named <code>foo</code> in namespace <code>default</code> with port 8000 and path /api, <code>http://foo.default.svc.cluster.local:8000/api</code>.</p>
coderanger
<p>I am installing Custom Resources through an Operator. However, <code>kubectl apply</code> is blocked on<br> <em>"Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "my-crd.example.com" not found."</em></p> <p>If there were a switch on <code>kubectl apply</code> along the lines of <code>--no-typechecking</code>, it would solve this. I would not cause a problem with a missing CRD, because the <code>apply</code> just sends the Kubernetes objects to etcd. Then, by the time that the Operator actually constructs the Custom Resource, the Custom Resource Definition would be available. (I have other code that guarantees that.) </p> <p>So, can I suspend the typechecking that produces this error?</p>
Joshua Fox
<p>No, you can’t use a CRD API without actually creating the CRD. It’s not a type check, it’s how the system works through and through.</p>
coderanger
<p>I have applied the following pvc yaml.</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 4Gi
</code></pre> <p>I now want my statefulset to use the PVC I have created. Instead, it is creating a new PVC of a different storageclass.</p> <pre><code>apiVersion: v1
kind: statefulset
metadata:
  name: example
spec:
  # Name for the service object created by the operator
  serviceName: mongodb-service
  selector: {}
  # Specifies a size for the data volume different from the default 10Gi
  volumeClaimTemplates:
  - metadata:
      name: ebs-claim
  template:
    spec:
      nodeSelector:
        eks.amazonaws.com/nodegroup: managed-ng-private-1
</code></pre> <p>How can I get my statefulset to use existing PVCs instead of creating new ones?</p>
michael_fortunato
<p>Specify it like normal in the <code>volumes</code> section of the pod spec template. But you won't get the special behavior of creating a new PVC for each replica since that requires creating new ones.</p>
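<p>A sketch of what that looks like in the StatefulSet's pod template (container details are illustrative; note a ReadWriteOnce claim can only be shared if all replicas land on one node, so this pattern really fits a single replica):</p> <pre><code>spec:
  template:
    spec:
      containers:
      - name: mongodb
        image: mongo           # hypothetical image
        volumeMounts:
        - name: data
          mountPath: /data/db
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: ebs-claim # the pre-created PVC
</code></pre>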
coderanger
<p>I have a k8s cluster with an nginx based ingress and multiple services (ClusterIP). I want to use Consul as a service mesh and documentation is very clear on how to set up and govern communication between services. What is not clear though is how to setup the nginx ingress to talk to these services via the injected sidecar connect proxies using mutual ssl. I'm using cert-manager to automatically provision and terminate ssl at the ingress. I need to secure the communication between the ingress and the services with Consul provisioned mutual SSL. Any documentation related to this scenario will definitely help.</p>
Harindaka
<p>You would inject the sidecar into the ingress-nginx controller and have it talk to backend services just like any other service-to-service thing. This will probably require overriding a lot of the auto-generated config so I'm not sure it will be as useful as you hope.</p>
coderanger
<p>Let's say that a Service in a Kubernetes cluster is mapped to a group of cloned containers that will fulfil requests made for that service from the outside world.</p> <p>What are the steps in the journey that a request from the outside world will make into the Kubernetes cluster, then through the cluster to the designated container, and then back through the Kubernetes cluster out to the original requester in the outside world?</p> <p>The documentation indicates that <code>kube-controller-manager</code> includes the <code>Endpoints controller</code>, which joins services to Pods. But I have not found specific documentation illustrating the steps in the journey that each request makes through a Kubernetes cluster.</p> <p>This is important because it affects how one might design security for services, including the configuration of routing around the control plane.</p>
CodeMed
<p>Assuming you are using mostly the defaults:</p> <ol> <li>Packet comes in to your cloud load balancer of choice.</li> <li>It gets forwarded to a random node in the cluster.</li> <li>It is received by the kernel and run through iptables.</li> <li>Iptables defines a mapping rule to forward the packet to a container IP.</li> <li>Unless it randomly happens to be on the same box, it then goes through your CNI network, usually some kind of overlay possibly with a wrapping and unwrapping.</li> <li>It eventually gets to the container IP, and then is delivered to whatever the process inside the container is.</li> </ol> <p>The Services and Endpoints system is what creates and manages the iptables rules and the cloud load balancers so that the LB knows the right node IPs and the iptables rules know the right container IPs.</p>
coderanger
<p>I am using <code>client-go</code> to read K8s container resource usage using the <code>Get</code> method of the core client-go client, but re-fetching the K8s object on an interval seems like the wrong approach. What is the better approach for periodically fetching status changes to a K8s container?</p>
Dnlhoust
<p>You would use an Informer which runs an API watch that receives push updates from kube-apiserver. Though that said, you might find it easier to use one of the operator support frameworks like Kubebuilder, or at least directly use the underlying controller-runtime library as raw client-go can be a little gnarly to configure stable informers. Doable either way though.</p>
coderanger
<p>I have an umbrella chart and I want to know if it's possible to update an existing helm deployment through my requirements.yaml in my umbrella chart. <a href="https://i.stack.imgur.com/zpbXj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zpbXj.png" alt="First app tier should be updated by my umbrella chart, not creating a new one."></a></p> <p><a href="https://i.stack.imgur.com/YuAOR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YuAOR.png" alt="First, app tier chart and second, my umbrellar chart containing app tier"></a></p>
Bruno Macedo
<p>Not directly. If you did some kind of funky CRD with one of the existing Helm operators then maybe, but overall releases don't know about each other.</p>
coderanger
<p>I am using kubernetes and its resources like secrets. During deployment one secret has been created (say test-secret) with some values inside it. Now I need to rename this secret to dev-secret within the same namespace. How can I rename the secret, or how can I copy the test-secret values to dev-secret?</p> <p>Please let me know the correct approach for this.</p>
keepmoving
<p>There is no specific way to do this. The Kubernetes API does not have &quot;rename&quot; as an operation. In this particular case you would <code>kubectl get secret test-secret -o yaml</code>, clean up the <code>metadata:</code> sections that don't apply anymore, edit the name of the secret, and <code>kubectl apply</code> it again.</p>
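<p>Roughly, as a sketch (double-check the exported YAML before applying):</p> <pre><code>kubectl get secret test-secret -o yaml &gt; dev-secret.yaml
# Edit dev-secret.yaml: change metadata.name to dev-secret and remove the
# server-populated fields (uid, resourceVersion, creationTimestamp, etc.)
kubectl apply -f dev-secret.yaml
kubectl delete secret test-secret   # once the new one is confirmed working
</code></pre>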
coderanger
<p>I have a template part like:</p> <pre><code>  spec:
    containers:
      - name: webinspect-runner-{{ .Values.pipeline.sequence }}
        ...
        env:
        - name: wi_base_url
          valueFrom:
            secretKeyRef:
              name: webinspect
              key: wi-base-url
        - name: wi_type
          valueFrom:
            secretKeyRef:
              name: webinspect
              key: wi-type
</code></pre> <p>The <code>wi-type</code> key in the <code>webinspect</code> secret may be missing. I want the container to simply not have the <code>wi_type</code> env var, or (better) to get a default value, when the key is missing, but k8s just reports <code>CreateContainerConfigError: couldn't find key wi-type in Secret namespace/webinspect</code> and the pod fails.</p> <p>Is there a way to use a default value, or skip the block if the secret does not exist?</p>
Romulus Urakagi Ts'ai
<p>Two options, the first is add <code>optional: true</code> to the secretKeyRef block(s) which makes it skip. The second is a much more complex approach using the <a href="https://helm.sh/docs/chart_template_guide/functions_and_pipelines/" rel="nofollow noreferrer"><code>lookup</code> template function in Helm</a>. Probably go with the first :)</p>
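<p>With the first option, the block from the question becomes:</p> <pre><code>- name: wi_type
  valueFrom:
    secretKeyRef:
      name: webinspect
      key: wi-type
      # Skip this env var instead of failing when the key/secret is absent
      optional: true
</code></pre> <p>A default can then be filled in by the app itself, or by an entrypoint shell fallback like <code>${wi_type:-default}</code>.</p>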
coderanger
<p>I want to deploy a Eureka server on Kubernetes and specify the service name to the clients, so that whenever any client wants to connect to the Eureka server, it does so using the service name of the Eureka server.</p>
Parosh Dey
<p>It looks like the official images for Eureka haven’t been updated in years, so I think you’re on your own and will have to work this out from scratch. There are a few Helm charts but they all seem to reference private images and are pretty simple so not sure how much that will help.</p>
coderanger
<p>I came across this page regarding the kube auto-scaler: <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-the-parameters-to-ca" rel="nofollow noreferrer">https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-the-parameters-to-ca</a></p> <p>From this, I can tell that part of the reason why some of my nodes aren't being scaled down, is because they have local-data set on them...</p> <p>By default, the auto-scaler ignores nodes that have local data. I want to change this to false. However, I can't find instructions anywhere about <em>how</em> to change it.</p> <p>I can't find any obvious things to edit. There's nothing in <code>kube-system</code> that implies it is about the autoscaler configuration - just the autoscaler status ConfigMap.</p> <p>Could somebody help please? Thank you!</p>
Jty.tan
<p>You cannot. The only option GKE gives is a vague &quot;autoscaling profile&quot; choice between the default and &quot;optimize utilization&quot;. You can, however, override it with per-pod annotations.</p>
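<p>That per-pod override looks like this (set on the pods, e.g. in the Deployment's pod template, not on the nodes):</p> <pre><code>metadata:
  annotations:
    # Tell the cluster autoscaler this pod's local data is safe to lose
    cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
</code></pre>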
coderanger
<p>I have a Kubernetes cluster and I am trying to figure out in what numbers pods have been scaled up, using the <strong>kubectl</strong> command.</p> <p>What is a possible way to get the details of all the scaled-up and scaled-down pods within a month?</p>
Sudarshan Sharma
<p>That is not information Kubernetes records. The Events system keeps some debugging messages that include stuff about pod startup and sometimes shutdown, but that's only kept for a few hours. For long term metrics look at something like Prometheus + kube-state-metrics.</p>
coderanger
<p>I am running a Kubernetes cluster including metrics server add-on and Prometheus monitoring. I would like to know which Kubernetes components or activities use/can use monitoring data from the cluster.</p> <p><em>What do I mean with &quot;Kubernetes components or activities&quot;?</em></p> <p>Obviously, one of the main use cases for monitoring data are all autoscaling mechanisms, including Horizontal Pod Autoscaler, Vertical Pod Autoscaler and Cluster Autoscaler. I am searching for further components or activities, which use live monitoring data from a Kubernetes cluster, and potentially a short explanation why they use it (if it is not obvious). Also it would be interesting to know, which of those components or activities must work with monitoring data and which can work with monitoring data, i.e. can be configured to work with monitoring data.</p> <p><em>What do I mean with &quot;monitoring data&quot;?</em></p> <p>Monitoring data include, but are not limited to: Node metrics, Pod metrics, Container metrics, network metrics and custom/application-specific metrics (e.g. captured and exposed by third-party tools like Prometheus).</p> <p>I am thankful for every answer or comment in advance!</p>
shiggyyy
<p><code>metrics-server</code> data is used by <code>kubectl top</code> and by the HorizontalPodAutoscaler system. I am not aware of any other places the use the metrics.k8s.io API (technically doesn't have to be served by <code>metrics-server</code> but usually is).</p>
coderanger
<p>For some context, I'm trying to build a staging / testing system on kubernetes which starts with deploying a mariadb on the cluster with some schema and data. I have a truncated / cleansed db dump from prod to help me with that. Let's call that file: dbdump.sql, which is present on my local box at the path /home/rjosh/database/script/. After much research, here is what my yaml file looks like:</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: m3ma-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: m3ma-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
---
apiVersion: v1
kind: Service
metadata:
  name: m3ma
spec:
  ports:
  - port: 3306
  selector:
    app: m3ma
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: m3ma
spec:
  selector:
    matchLabels:
      app: m3ma
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: m3ma
    spec:
      containers:
      - image: mariadb:10.2
        name: m3ma
        env:
        # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: m3ma
        volumeMounts:
        - name: m3ma-persistent-storage
          mountPath: /var/lib/mysql/
        - name: m3ma-host-path
          mountPath: /docker-entrypoint-initdb.d/
      volumes:
      - name: m3ma-persistent-storage
        persistentVolumeClaim:
          claimName: m3ma-pv-claim
      - name: m3ma-host-path
        hostPath:
          path: /home/smaikap/database/script/
          type: Directory
</code></pre> <p>The MariaDB instance is coming up, but not with the schema and data that is present in /home/rjosh/database/script/dbdump.sql.</p> <p>Basically, the mount is not working. If I connect to the pod and check /docker-entrypoint-initdb.d/ there is nothing. How do I go about this?</p> <p>A bit more detail: currently, I'm testing it on minikube. But soon it will have to work on a GKE cluster. Looking at the documentation, hostPath is not the choice for GKE. So, what is the correct way of doing this?</p>
smaikap
<p>Are you sure your home directory is visible to Kubernetes? Minikube generally creates a little VM to run things in, which wouldn't have your home dir in it. The more usual way to handle this would be to make a very small new Docker image yourself like:</p> <pre><code>FROM mariadb:10.2
COPY dbdump.sql /docker-entrypoint-initdb.d/
</code></pre> <p>And then push it to a registry somewhere, and then use that image instead.</p>
coderanger
<p>I am new to writing custom controllers for kubernetes and am trying to understand this. I have started by referring to the sample-controller <a href="https://github.com/kubernetes/sample-controller" rel="nofollow noreferrer">https://github.com/kubernetes/sample-controller</a>.</p> <p>I want to extend the sample-controller to operate VM resources in a cloud using kubernetes. It should create a VM if a new VM-kind resource is detected, update the subresources, or delete the VM if the user wants.</p> <p>The schema should be like the below:</p> <pre><code>apiVersion: samplecontroller.k8s.io/v1alpha1
kind: VM
metadata:
  name: sample
spec:
  vmname: test-1
status:
  vmId: 1234-567-8910
  cpuUtilization: 50
</code></pre> <p>Any suggestions or help is highly appreciated :)</p>
john snowker
<p>Start from <a href="https://book.kubebuilder.io/" rel="nofollow noreferrer">https://book.kubebuilder.io/</a> instead. It's a much better jumping off point than sample-controller.</p>
coderanger
<p>I'm running a Kubernetes service, accessed via exec, which has a few pods in a StatefulSet. If I kill the master pod used by the service, it exits with code 137. I want it to fail over to another pod immediately after the kill, or to apply a wait before exiting. I need help. Waiting for an answer. Thank you.</p>
Sabir Piludiya
<p>137 means your process exited due to SIGKILL, usually because the system ran out of RAM. Unfortunately no delay is possible with SIGKILL, the kernel just drops your process and that is that. Kubernetes does detect it rapidly and if you're using a Service-based network path it will usually react in 1-2 seconds. I would recommend looking into why your process is being hard-killed and fixing that :)</p>
coderanger
<p>I want to SSH into minikube/docker-desktop, but I can't. How can I do that?</p> <pre><code>NAME       STATUS   ROLES                  AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
minikube   Ready    control-plane,master   4m47s   v1.20.2   192.168.49.2   &lt;none&gt;        Ubuntu 20.04.2 LTS   4.19.121-linuxkit   docker://20.10.6

ssh minikube
ssh: Could not resolve hostname minikube: nodename nor servname provided, or not known
</code></pre> <p>I am learning K8s and was able to SSH in while working on K8s hands-on labs available online. I'd like to test some stuff in my local environment.</p>
cosmos-1905-14
<p><code>minikube</code> is the node name within the Kubernetes API, not a hostname in this case. Minikube offers a wrapper <a href="https://minikube.sigs.k8s.io/docs/commands/ssh/" rel="nofollow noreferrer"><code>minikube ssh</code></a> command to automate pulling the IP and whatnot. Docker Desktop does not offer an official way to get a shell in the VM as it's a single-purpose appliance and they want it in a known state, but you can fake it by running a super-superuser container like <code>docker run -it --rm --privileged --pid=host justincormack/nsenter1</code> to break out into the host namespaces.</p>
coderanger
<p>tl;dr: I have a server that handles WebSocket connections. The nature of the workload is that it is necessarily stateful (i.e., each connection has long-running state). Each connection can last ~20m-4h. Currently, I only deploy new revisions of this service at off hours to avoid interrupting users too much.</p> <p>I'd like to move to a new model where deploys happen whenever, and the services gracefully drain connections over the course of ~30 minutes (typically the frontend can find a &quot;good&quot; time to make that switch over within 30 minutes, and if not, we just forcibly disconnect them). I can do that pretty easily with K8s by setting gracePeriodSeconds. However, what's less clear is how to do rollouts such that new connections only go to the most recent deployment. Suppose I have five replicas running. Normal deploys have an undesirable mode where a client is on R1 (replica 1) and then K8s deploys R1' (upgraded version) and terminates R1; frontend then reconnects and gets routed to R2; R2 terminates, frontend reconnects, gets routed to R3.</p> <p>Is there any easy way to ensure that after the upgrade starts, new clients get routed only to the upgraded versions? I'm already running Istio (though not using very many of its features), so I could imagine doing something complicated with some custom deployment infrastructure (currently just using Helm) that spins up a new deployment, cuts over new connections to the new deployment, and gracefully drains the old deployment... but I'd rather keep it simple (just Helm running in CI) if possible.</p> <p>Any thoughts on this?</p>
Travis DePrato
<p>This is already how things work with normal Services. Once a pod starts terminating, it has already been removed from the Endpoints, so new connections only go to pods that aren't shutting down. You'll probably want to tune up <code>maxSurge</code> in the rolling update strategy of the Deployment to 100%, so that it spawns all the new pods at once and only then starts the shutdown process on all the old ones.</p>
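<p>A minimal sketch of those strategy settings:</p>

<pre><code>spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 100%     # bring up a full set of new pods first
      maxUnavailable: 0  # keep all old pods serving until replacements are Ready
</code></pre>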
coderanger
<p>Using a NodePort service with an ingress, I successfully exposed the service to the outside world.</p>

<pre><code>--- service
NAMESPACE    NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)
default      kubernetes     ClusterIP   10.96.0.1        &lt;none&gt;        443/TCP
default      postgres       ClusterIP   10.106.182.170   &lt;none&gt;        5432/TCP
default      user-api       NodePort    10.99.12.136     &lt;none&gt;        3000:32099/TCP
ingress-nginx ingress-nginx NodePort    10.110.104.0     &lt;none&gt;        80:31691/TCP,443:30593/TCP

--- ingress
NAME          HOSTS         ADDRESS        PORTS   AGE
app-ingress   example.com   10.110.104.0   80      3h27m
</code></pre>

<p>The ingress rule is shown below.</p>

<pre><code>  Host          Path    Backends
  ----          ----    --------
  example.com   /user-api   user-api:3000 (172.16.117.201:3000)
</code></pre>

<p>If my user-api has a RESTful <code>/v1/health</code> endpoint, how do I access this API from inside and outside the cluster?</p>
ccd
<p>From the inside, <a href="http://user-api.default:3000/user-api" rel="nofollow noreferrer">http://user-api.default:3000/user-api</a>. From the outside, use any node external IP (see <code>kubectl get node -o wide</code> for a list of them).</p>
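<p>For example, assuming the app itself serves its routes under the <code>/user-api</code> prefix (i.e. no path rewrite on the ingress):</p>

<pre><code># from inside the cluster, via the service DNS name:
curl http://user-api.default:3000/user-api/v1/health

# from outside, via the ingress controller's NodePort, sending the Host header the rule matches on:
curl -H 'Host: example.com' http://NODE_EXTERNAL_IP:31691/user-api/v1/health
</code></pre>

<p><code>NODE_EXTERNAL_IP</code> here is a placeholder for any of the addresses from <code>kubectl get node -o wide</code>.</p>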
coderanger
<p>My natural thought is that if nginx were just a daemon process on the k8s node, rather than a pod (container) in the k8s cluster, it looks like it could still fulfill the ingress controller's job. Because it runs on the k8s node, it can still talk to the apiserver to fetch service backend pod information, like IP addresses, so it can still be used as an HTTP proxy server to direct traffic to different services.</p>

<p>So, 2 questions:</p>

<ol>
<li>why does the nginx ingress controller have to be a pod?</li>
<li>why does the nginx ingress controller only get 1 replica? and on which node? if the nginx controller pod dies, things will go unstable.</li>
</ol>

<p>Thanks!</p>
ericxu1983
<p>Because Pods are how you run daemon processes (or really, all processes) inside Kubernetes. That's just how you run stuff. I suppose there is nothing stopping you from running it outside the cluster, manually setting up API configuration and authentication, doing all the needed networking bits yourself. But ... why?</p> <p>As for replicas, you should indeed generally have more than one across multiple physical nodes for redundancy. A lot of the tutorials show it with <code>replicas: 1</code> because either it's for a single-node dev cluster like Minikube or it's only an example.</p>
coderanger
<p>I installed the Prometheus helm chart to a kubernetes cluster for monitoring. By default, </p>

<ul>
<li>persistent volume size for prometheus server is defined as 8Gi.</li>
<li>Prometheus server will store the metrics in this volume for 15 days (retention period)</li>
</ul>

<p>Some days after deploying the chart, the prometheus server pod entered a CrashLoopBackOff state. The reason found in the pod logs was:</p>

<pre><code>level=error ts=2019-10-09T11:03:10.802847347Z caller=main.go:625 err="opening storage failed: zero-pad torn page: write /data/wal/00000429: no space left on device"
</code></pre>

<p>That means there is no space available in the disk (persistent volume) to save the data. So I cleared the existing data of the volume and fixed the issue temporarily.</p>

<p>What would be the proper solution for this? </p>

<p>The <a href="https://prometheus.io/docs/prometheus/latest/storage/#operational-aspects" rel="nofollow noreferrer">Prometheus documentation</a> says:</p>

<p><strong>To plan the capacity of a Prometheus server, you can use the rough formula:</strong></p>

<pre><code>needed_disk_space = retention_time_seconds * ingested_samples_per_second * bytes_per_sample
</code></pre>

<p>Can someone explain how to use this formula in practice?</p>

<p>Why is the 8Gi size not enough for a 15-day retention period?</p>

<p><strong>EDIT :</strong></p>

<p>The default 8Gi space was 100% used after 6 days.</p>
AnjK
<p>15 days is about 1.3 million seconds. Let's overestimate 8 bytes per sample and assume one sample per second per series. Each series then takes about 10 MB, so 8 GB would let you store roughly 800 series. You probably have more than that. Multiply the number of series you want to store by 10 and that's the number of megabytes you need. Roughly, that will get you the right order of magnitude at least.</p>
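<p>Written out against the formula from the docs:</p>

<pre><code>retention_time_seconds = 15 * 24 * 3600 = 1,296,000
per-series disk        = 1,296,000 s * 1 sample/s * 8 bytes ≈ 10 MB

8 GiB / 10 MB ≈ 800 series
</code></pre>

<p>With a more typical 30s scrape interval the per-series cost drops by a factor of 30, but the order-of-magnitude reasoning is the same: count your series and size the volume from that.</p>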
coderanger
<p>Inside my Dockerfile I have:</p> <pre><code>FROM python:3.7 RUN apt update RUN apt install -y git RUN groupadd -g 1001 myuser RUN useradd -u 1001 -g 1001 -ms /bin/bash myuser USER 1001:1001 USER myuser WORKDIR /home/myuser COPY --chown=myuser:myuser requirements.txt ./ ENV PYTHONPATH=&quot;/home/myuser/.local/lib/python3.7/site-packages:.:$PYTHONPATH&quot; RUN python3.7 -m pip install -r requirements.txt COPY --chown=myuser:myuser . . ENV PATH=&quot;/home/myuser/.local/bin/:$PATH&quot; ENV HOME=/home/myuser ENV PYTHONHASHSEED=1 EXPOSE 8001 CMD [ &quot;python3.7&quot;, &quot;app.py&quot; ] </code></pre> <p>During the build, <code>pip list</code> displays all the libraries correctly:</p> <pre><code>basicauth 0.4.1 pip 21.1.1 python-dateutil 2.8.1 pytz 2019.1 PyYAML 5.1.1 requests 2.22.0 setuptools 56.0.0 six 1.16.0 urllib3 1.25.11 wheel 0.36.2 </code></pre> <p>But once OpenShift deploys the container, I only get the following libraries installed:</p> <pre><code>WARNING: The directory '/home/myuser/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you should use sudo's -H flag. Package Version ---------- ------- pip 21.1.1 setuptools 56.0.0 wheel 0.36.2 </code></pre> <p>The CMD command runs as expected, but none of the packages are installed...</p> <pre><code>Traceback (most recent call last : File &quot;app.py&quot;, line 16, in ‹module&gt; import requests ModuleNotFoundError: No module named 'requests' </code></pre>
Eduardo Morales
<p>A revised Dockerfile more in line with standard practices:</p> <pre><code>FROM python:3.7 RUN apt update &amp;&amp; \ apt install -y --no-install-recommends git &amp;&amp; \ rm -rf /var/lib/apt/lists/* WORKDIR /app COPY requirements.txt . RUN python3.7 -m pip install -r requirements.txt COPY . . ENV PYTHONHASHSEED=1 USER nobody CMD [ &quot;python3.7&quot;, &quot;app.py&quot; ] </code></pre> <p>I combined the initial <code>RUN</code> layers for a smaller image, and cleaned up the apt lists before exiting the layer. Packages are installed globally as root, and then only after that it changes to runtime user. In this case unless you very specifically need a homedir, I would stick with <code>nobody</code>/65534 as the standard way to express &quot;low privs runtime user&quot;.</p> <p>Remember that OpenShift overrides the container-level <code>USER</code> info <a href="https://www.openshift.com/blog/a-guide-to-openshift-and-uids" rel="nofollow noreferrer">https://www.openshift.com/blog/a-guide-to-openshift-and-uids</a></p>
coderanger
<p>I have an <em>unsecured</em> <code>Postfix</code> instance in a container that listens to port <code>25</code>. This port is not exposed using a <code>Service</code>. The idea is that only a <code>PHP</code> container that runs inside the same pod should be able to connect to Postfix and there is no need for additional <code>Postfix</code> configuration.</p>

<p>Is there any way for other processes that run in the same network or <code>Kubernetes</code> cluster to connect to this <em>hidden</em> port? </p>

<p>From what I know, only other containers in the same Pod can connect to an unexposed port, via <code>localhost</code>.</p>

<p>I'm interested from a security point of view.</p>

<p><em>P.S. I know that one should make sure it has multiple levels of security in place but I'm interested only theoretically if there is some way to connect to this port from outside the pod.</em></p>
Popi
<p>Yes, you can use <code>kubectl port-forward</code> to set up a tunnel directly to it for testing purposes. </p>
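<p>For example, assuming the pod is named <code>mypod</code>:</p>

<pre><code>kubectl port-forward pod/mypod 2525:25
# in another terminal, the "hidden" port is now reachable locally:
nc localhost 2525
</code></pre>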
coderanger
<p>I need to update a file in a container running in k8s using my local editor and save back the updates to the original file in the container <em>without</em> restarting/redeploying the container.</p> <p>Right now I do:</p> <pre><code>$ kubectl exec tmp-shell -- cat /root/motd &gt; motd &amp;&amp; vi motd &amp;&amp; kubectl cp motd tmp-shell:/root/motd </code></pre> <p>Is there some better way to do this?</p> <p>I have looked at:</p> <p><a href="https://github.com/ksync/ksync" rel="nofollow noreferrer">https://github.com/ksync/ksync</a></p> <p>but seems heavyweight for something this &quot;simple&quot;.</p> <p><strong>Notice:</strong></p> <p>I don't want to use the editor that might or might <em>not</em> be available inside the container - since an editor is not guaranteed to be available.</p>
u123
<p>One option that might be available is <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#ephemeral-container" rel="nofollow noreferrer">ephemeral debug containers</a>, however they are an alpha feature so probably not enabled for you at time of writing. Barring that, yeah, what you said is an option. It probably goes without saying, but this is a very bad idea: it might not work at all if the target file isn't writable (which it shouldn't be in most cases), either because of file permissions or because the container is running in immutable mode. Also this would only matter if the thing using the file detects the change without a restart.</p>

<p>A better medium-term plan would be to store the content in a ConfigMap and mount it into place. That would let you update it whenever you want.</p>
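<p>A minimal sketch of the ConfigMap approach, using the motd file from the question:</p>

<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: motd
data:
  motd: |
    Welcome to the server.
</code></pre>

<p>Then in the pod spec, mount the whole ConfigMap as a directory (directory mounts, unlike <code>subPath</code> mounts, pick up edits automatically after a short delay):</p>

<pre><code>    volumeMounts:
    - name: motd
      mountPath: /etc/motd-dir   # the file appears as /etc/motd-dir/motd
  volumes:
  - name: motd
    configMap:
      name: motd
</code></pre>

<p>After that, <code>kubectl edit configmap motd</code> from your local machine is the whole edit workflow.</p>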
coderanger
<p>I have a requirement where <code>volumeMounts:</code> should be an optional field. </p>

<pre><code>spec:
  volumes:
    - name: aaa
      secret:
        secretName: aaa-certs
  containers:
    - name: my-celery
      volumeMounts:
        - name: aaa
          mountPath: /tmp/aaa_certs
          readOnly: true
</code></pre>

<p>If the secret is present then it should be mounted, else an empty folder should be created. Is this possible? </p>
Goutam
<p>No, that is not possible. You would need a higher level system like Helm or an operator to manage that kind of dynamic configuration.</p>
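<p>For example, a sketch of what this could look like as a Helm template, gated on a hypothetical <code>certs.enabled</code> value:</p>

<pre><code>spec:
  containers:
  - name: my-celery
{{- if .Values.certs.enabled }}
    volumeMounts:
    - name: aaa
      mountPath: /tmp/aaa_certs
      readOnly: true
{{- end }}
{{- if .Values.certs.enabled }}
  volumes:
  - name: aaa
    secret:
      secretName: aaa-certs
{{- end }}
</code></pre>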
coderanger
<p>I tried to do a simple deployment of nextcloud on a k8s cluster hosted using minikube on my local machine for learning purposes. This deployment doesn't have any database/storage attached to it. I'm simply looking to open the nextcloud homepage on my local machine. However, I am unable to do so. Here are my yamls.</p> <p>Deployment yaml:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: nextcloud-deployment labels: app: nextcloud spec: replicas: 1 selector: matchLabels: app: nextcloud template: metadata: labels: app: nextcloud spec: containers: - name: nextcloud image: nextcloud:latest ports: - containerPort: 80 </code></pre> <p>Service yaml:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: nextcloud-service spec: selector: app: nextcloud type: LoadBalancer ports: - protocol: TCP port: 8000 targetPort: 80 nodePort: 30000 </code></pre> <p>I can see that it is up and running, however when i navigate to localhost:30000, i see that the page is unavailable. How do i begin to diagnose the issue?</p> <p>This was the output of <code>kubectl get service</code>:</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 3d5h nextcloud-service LoadBalancer 10.104.40.61 &lt;pending&gt; 8000:30000/TCP 23m </code></pre>
user5735224
<p>Run <code>minikube service nextcloud-service</code> and it will open it for you.</p>
coderanger
<p>I'm trying to find a generic best practice for how to:</p> <ol> <li>Take an arbitrary (parent) Dockerfile, e.g. one of the official Docker images that run their containerized service as root,</li> <li>Derive a custom (child) Dockerfile from it (via <code>FROM ...</code>),</li> <li>Adjust the <em>child</em> in the way that it runs the same service as the <em>parent</em>, but as non-root user.</li> </ol> <p>I've been searching and trying for days now but haven't been able to come up with a satisfying solution.</p> <p>I'd like to come up with an approach e.g. similar to the following, simply for adjusting the user the original service runs as:</p> <pre><code>FROM mariadb:10.3 RUN chgrp -R 0 /var/lib/mysql &amp;&amp; \ chmod g=u /var/lib/mysql USER 1234 </code></pre> <p>However, the issue I'm running into again and again is whenever the <em>parent</em> Dockerfile declares some path as <code>VOLUME</code> (in the example above actually <code>VOLUME /var/lib/mysql</code>), that effectively makes it impossible for the <em>child</em> Dockerfile to adjust file permissions for that specific path. The <code>chgrp</code> &amp; <code>chmod</code> are without effect in that case, so the resulting docker container won't be able to start successfully, due to file access permission issues.</p> <p>I understand <a href="https://docs.docker.com/engine/reference/builder/#volume" rel="nofollow noreferrer">that the <code>VOLUME</code> directive works that way by design</a> and also <a href="http://container42.com/2014/11/03/docker-indepth-volumes/" rel="nofollow noreferrer">why it's like that</a>, but to me it seems that it completely prevents a simple solution for the given problem: Taking a Dockerfile and adjusting it in a simple, clean and minimalistic way to run as non-root instead of root.</p> <p>The background is: I'm trying to run arbitrary Docker images on an Openshift Cluster. <a href="https://docs.openshift.com/container-platform/3.11/creating_images/guidelines.html#use-uid" rel="nofollow noreferrer">Openshift by default prevents running containers as root</a>, which I'd like to keep that way, as it seems quite sane and a step into the right direction, security-wise.</p> <p>This implies that a solution like <code>gosu</code>, expecting the container to be started as root in order to drop privileges during runtime isn't good enough here. I'd like to have an approach that doesn't require the container to be started as root at all, but only as the specified <code>USER</code> or even with a random UID.</p> <p>The unsatisfying approaches that I've found until now are:</p> <ul> <li>Copy the <em>parent</em> Dockerfile and adjust it in the way necessary (effectively duplicating code)</li> <li><code>sed</code>/<code>awk</code> through all the service's config files during build time to replace the original <code>VOLUME</code> path with an alternate path, so the <code>chgrp</code> and <code>chmod</code> can work (leaving the original <code>VOLUME</code> path orphaned).</li> </ul> <p>I really don't like these approaches, as they require to really dig into the logic and infrastructure of the <em>parent</em> Dockerfile and how the service itself operates.</p> <p>So there must be better ways to do this, right? What is it that I'm missing? Help is greatly appreciated.</p>
Oliver
<p>Permissions on volume mount points don't matter at all, the mount covers up whatever underlying permissions were there to start with. Additionally you can set this kind of thing at the Kubernetes level rather than worrying about the Dockerfile at all. This is usually though a PodSecurityPolicy but you can also set it in the SecurityContext on the pod itself.</p>
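<p>For example, at the pod level:</p>

<pre><code>spec:
  securityContext:
    runAsUser: 1234
    runAsGroup: 1234
    fsGroup: 1234   # volumes are made group-accessible for this GID at mount time
  containers:
  - name: mariadb
    image: mariadb:10.3
</code></pre>

<p><code>fsGroup</code> in particular is what handles volume ownership, which is the part the Dockerfile-level <code>chgrp</code>/<code>chmod</code> can't reach.</p>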
coderanger
<p>I am setting up my <code>Kubernetes</code> cluster using <code>kubectl -k</code> (kustomize). Like any other such arrangement, I depend on some secrets during deployment. The route I want go is to use the <code>secretGenerator</code> feature of <code>kustomize</code> to fetch my secrets from files or environment variables.</p> <p>However managing said files or environment variables in a secure and portable manner has shown itself to be a challenge. Especially since I have 3 separate namespaces for test, stage and production, each requiring a different set of secrets.</p> <p>So I thought surely there must be a way for me to manage the secrets in my cloud provider's official way (google cloud platform - secret manager).</p> <p>So how would the <code>secretGenerator</code> that accesses secrets stored in the secret manager look like?</p> <p>My naive guess would be something like this:</p> <pre><code>secretGenerator: - name: juicy-environment-config google-secret-resource-id: projects/133713371337/secrets/juicy-test-secret/versions/1 type: some-google-specific-type </code></pre> <ul> <li>Is this at all possible?</li> <li>What would the example look like?</li> <li>Where is this documented?</li> <li><strong>If this is not possible, what are my alternatives?</strong></li> </ul>
Mr. Developerdude
<p>I'm not aware of a plugin for that. The plugin system in Kustomize is somewhat new (added about 6 months ago) so there aren't a ton in the wild so far, and Secrets Manager is only a few weeks old. You can find docs at <a href="https://github.com/kubernetes-sigs/kustomize/tree/master/docs/plugins" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/kustomize/tree/master/docs/plugins</a> for writing one though. That links to a few Go plugins for secrets management so you can probably take one of those and rework it to the GCP API.</p>
coderanger
<p>How do I get rid of them? Docker doesn't think they exist and my kubernetes-fu isn't good enough yet.</p> <p><a href="https://i.stack.imgur.com/BVXDa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BVXDa.png" alt="enter image description here" /></a></p>
Dave Hodgkinson
<p>You cannot remove them while they are in use. You'll have to shut down the Kubernetes system first, that's under Preferences (the gear icon) and then Kubernetes. Once that is done, run <code>docker system prune</code> to clean up unused everything.</p>
coderanger
<p>I want to integrate a Kubernetes cluster configured in an on-premises environment with GitLab.</p>

<p>When adding a cluster, I clicked Add Existing Cluster and filled in all the other fields; for the API URL I entered the IP output by the following command.</p>

<pre><code>kubectl cluster-info | grep 'Kubernetes master' | awk '/http/ {print $NF}'
</code></pre>

<pre><code>https://10.0.0.xxx:6443
</code></pre>

<p>However, it did not proceed, giving the error "platform kubernetes api url is blocked: requests to the local network are not allowed".</p>

<p>I saw an article about doing a webhook check in the admin area, but I'm on the gitlab.com website, and no matter where I look I can't find the admin area. I'm guessing it's only available in self-hosted GitLab installs.</p>

<p><a href="https://edenmal.moe/post/2019/GitLab-Kubernetes-Using-GitLab-CIs-Kubernetes-Cluster-feature/" rel="nofollow noreferrer">https://edenmal.moe/post/2019/GitLab-Kubernetes-Using-GitLab-CIs-Kubernetes-Cluster-feature/</a></p>

<p>When I followed that example and entered the API URL as "https://kubernetes.default.svc.cluster.local:443", the cluster connection was established. But Helm won't install.</p>

<p>So I tried to install Helm on the Kubernetes cluster manually, but GitLab does not recognize it.</p>

<p>What is the difference between the two API URLs above?</p>

<p>How can I solve this?</p>
윤태일
<p>As mentioned in comments, you are running your CI job on someone else's network. As such, it cannot talk to your private IPs in your own network. You will need to expose your kube-apiserver to the internet somehow. This is usually done using a LoadBalancer service called <code>kubernetes</code> that is created automatically. However that would only work if you have set up something that supports LoadBalancer services like MetalLB.</p>
coderanger
<p>I've started to use <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/" rel="nofollow noreferrer">kustomize</a> to update the yaml files. I'm trying to add labels to <code>Deployment</code> object. <code>commonLabels</code> seems pretty good but it applies the label to <code>selector</code> as well (which I don't want)</p> <p>Is there a way to add new label only to <code>metadata.labels</code> or <code>spec.template.metadata.labels</code>?</p> <p>here's a sample <code>deployment</code> object :</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: labels: service: servieName owner: ownerName name: myName spec: replicas: 1 strategy: type: Recreate template: metadata: labels: app: servieName owner: ownerName &lt;newLableHere&gt;: &lt;newValueHere&gt; ... </code></pre>
Mahyar
<p>You would need to define a patch for that object specifically.</p>
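<p>For example, with a strategic merge patch that only touches the pod template labels, referenced from <code>kustomization.yaml</code>:</p>

<pre><code># kustomization.yaml
patchesStrategicMerge:
- add-labels.yaml
</code></pre>

<pre><code># add-labels.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myName
spec:
  template:
    metadata:
      labels:
        newLabel: newValue
</code></pre>

<p>Because the patch never mentions <code>spec.selector</code>, the selector is left alone.</p>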
coderanger
<p>I am following this <a href="https://kubernetes.io/docs/setup/learning-environment/minikube/" rel="nofollow noreferrer">example</a></p> <p>When I run the following command I get the error:</p> <pre class="lang-sh prettyprint-override"><code>➜ kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.10 Error from server (NotFound): the server could not find the requested resource </code></pre> <p>I just installed fresh minikube, kubectl and the data is below:</p> <pre class="lang-sh prettyprint-override"><code>Development/tools/k8s ➜ kubectl get nodes NAME STATUS AGE minikube Ready 2m Development/tools/k8s ➜ kubectl get pods No resources found. Development/tools/k8s ➜ kubectl get rc --all-namespaces No resources found. Development/tools/k8s ➜ kubectl cluster-info Kubernetes master is running at https://192.168.99.101:8443 KubeDNS is running at https://192.168.99.101:8443/api/v1/proxy/namespaces/kube-system/services/kube-dns To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. ➜ kubectl version Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3", GitCommit:"029c3a408176b55c30846f0faedf56aae5992e9b", GitTreeState:"clean", BuildDate:"2017-02-15T06:40:50Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:07:13Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"} ➜ minikube version minikube version: v1.7.3 commit: 436667c819c324e35d7e839f8116b968a2d0a3ff </code></pre> <h3>What am I doing wrong?</h3>
DmitrySemenov
<p>Your kubectl is version 1.5 which is very old. It’s trying to use an outdated format for the Deployment resource which no longer exists in the server.</p>
coderanger
<p>I have a deployment with scale=1 but when I run get pods, i have 2/2... When I scale the deployment to 0 and than to 1, I get back 2 pods again... how is this possible? as i can see below prometeus-server has 2:</p> <pre><code>PS C:\dev\&gt; kubectl.exe get pods -n monitoring NAME READY STATUS RESTARTS AGE grafana-6c79d58dd-5k8cs 1/1 Running 0 3d21h prometheus-alertmanager-5584c7b8d-k7zrn 2/2 Running 0 3d21h prometheus-kube-state-metrics-6b46f67bf6-kt5dq 1/1 Running 0 3d21h prometheus-node-exporter-fj5zv 1/1 Running 0 3d21h prometheus-node-exporter-vgjtt 1/1 Running 0 3d21h prometheus-node-exporter-xfm5h 1/1 Running 0 3d21h prometheus-node-exporter-zp9mw 1/1 Running 0 3d21h prometheus-pushgateway-6c9764ff46-s295t 1/1 Running 0 3d21h prometheus-server-b647558d5-jxgtl 2/2 Running 0 2m18s </code></pre> <p>The deployment is:</p> <pre><code>PS C:\dev&gt; kubectl.exe describe deployment prometheus-server -n monitoring Name: prometheus-server Namespace: monitoring CreationTimestamp: Thu, 16 Jul 2020 11:46:58 +0300 Labels: app=prometheus app.kubernetes.io/managed-by=Helm chart=prometheus-11.7.0 component=server heritage=Helm release=prometheus Annotations: deployment.kubernetes.io/revision: 1 meta.helm.sh/release-name: prometheus meta.helm.sh/release-namespace: monitoring Selector: app=prometheus,component=server,release=prometheus Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable StrategyType: RollingUpdate MinReadySeconds: 0 RollingUpdateStrategy: 25% max unavailable, 25% max surge Pod Template: Labels: app=prometheus chart=prometheus-11.7.0 component=server heritage=Helm release=prometheus Service Account: prometheus-server Containers: prometheus-server-configmap-reload: Image: jimmidyson/configmap-reload:v0.3.0 Port: &lt;none&gt; Host Port: &lt;none&gt; Args: --volume-dir=/etc/config --webhook-url=http://127.0.0.1:9090/-/reload Environment: &lt;none&gt; Mounts: /etc/config from config-volume (ro) prometheus-server: Image: prom/prometheus:v2.19.0 Port: 9090/TCP Host Port: 0/TCP Args: --storage.tsdb.retention.time=15d --config.file=/etc/config/prometheus.yml --storage.tsdb.path=/data --web.console.libraries=/etc/prometheus/console_libraries --web.console.templates=/etc/prometheus/consoles --web.enable-lifecycle Liveness: http-get http://:9090/-/healthy delay=30s timeout=30s period=15s #success=1 #failure=3 Readiness: http-get http://:9090/-/ready delay=30s timeout=30s period=5s #success=1 #failure=3 Environment: &lt;none&gt; Mounts: /data from storage-volume (rw) /etc/config from config-volume (rw) Volumes: config-volume: Type: ConfigMap (a volume populated by a ConfigMap) Name: prometheus-server Optional: false storage-volume: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: prometheus-server ReadOnly: false Conditions: Type Status Reason ---- ------ ------ Progressing True NewReplicaSetAvailable Available True MinimumReplicasAvailable OldReplicaSets: prometheus-server-b647558d5 (1/1 replicas created) NewReplicaSet: &lt;none&gt; Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 5m32s deployment-controller Scaled down replica set prometheus-server-b647558d5 to 0 Normal ScalingReplicaSet 5m14s deployment-controller Scaled up replica set prometheus-server-b647558d5 to 1 </code></pre> <p>the weird thing is, that as shown above, k8s thinks it's 1 pod, if looks like a manual operation which was made. I have no idea what now :/</p>
toto
<p>Two containers, one pod. You can see them both listed under <code>Containers:</code> in the describe output too. One is Prometheus itself, the other is a sidecar that trigger a reload when the config file changes because Prometheus doesn't do that itself.</p>
coderanger
<p>I need to create pods on demand in order to run a program. They will run according to need, so it could be that nothing runs for 5 hours and then 10 requests need to be processed, and I might need to limit it so that only 5 run simultaneously because of resource limitations.</p>

<p>I am not sure how to build such a thing in Kubernetes.</p>

<p>Also worth noting is that I would like to create a new docker container for each run and exit the container when it ends.</p>
Moshe Shaham
<p>There are many options and you’ll need to try them out. The core tool is HorizontalPodAutoscaler. Systems like KEDA build on top of that to manage metrics more easily. There’s also Serverless tools like knative or kubeless. Or workflow tools like Tekton, Dagster, or Argo.</p> <p>It really depends on your specifics.</p>
coderanger
<p>I have a command that generates a secrets file that contains multiple <code>yaml</code> documents separated by <code>---</code> that looks like the following output:</p> <pre><code>## THE COMMAND THAT RUNS kubectl kustomize . ## SENDS THE FOLLOWING TO STDOUT data: firebase-dev-credentials.json: | MY_BASE_64_CREDENTIALS kind: Secret metadata: name: nodejs-firebase-credentials-989t6dg7m2 type: Opaque --- apiVersion: v1 data: AGENDA_DATABASE_URL: | MY_BASE_64_URL kind: Secret metadata: name: nodejs-secrets-c229fkh579 type: Opaque </code></pre> <p>Now, I have to pipe the <code>yaml</code> document to another command for processing, something like:</p> <p><code>kubectl kustomize . | kubeseal</code></p> <p>However, the <code>kubeseal</code> command does not accept multi-document <code>yaml</code>. So I figured I'd need to split the <code>yaml</code> by <code>---</code> and send each individual document to the <code>kubeseal</code> command?</p> <p>Using <code>bash</code>, what is the preferred approach to accomplish this task?</p> <p>For example, I suppose I cannot do something simple like this as the <code>stdout</code> is multiline?</p> <pre><code>export IFS=&quot;;&quot; sentence=&quot;one;two;three&quot; for word in $sentence; do echo &quot;$word&quot; done </code></pre>
Jordan Lewallen
<p>You can pipe through <code>yq eval 'select(di == N)'</code> to select out document N. But really, you wouldn't do this. You would include the already sealed data in your Kustomize config, it's not something you want to run automatically.</p>
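<p>If you did need to split mechanically, a sketch with yq v4 (<code>di</code> is the document-index operator):</p>

<pre><code>kubectl kustomize . &gt; all.yaml
last=$(yq eval 'di' all.yaml | tail -1)
for i in $(seq 0 "$last"); do
  yq eval "select(di == $i)" all.yaml | kubeseal -o yaml &gt; "sealed-$i.yaml"
done
</code></pre>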
coderanger
<p>I have a service account with a Policy Rule that works perfectly in mynamespace. But it also works perfectly in other namespaces, which I want to prevent.</p> <pre><code>--- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: myapp namespace: mynamespace rules: - apiGroups: ["extensions"] resources: ["deployments"] verbs: ["get", "patch"] --- apiVersion: v1 kind: ServiceAccount metadata: name: myapp namespace: mynamespace --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: myapp namespace: mynamespace subjects: - kind: ServiceAccount name: myapp namespace: mynamespace roleRef: kind: ClusterRole name: myapp apiGroup: rbac.authorization.k8s.io </code></pre>
grahamoptibrium
<p>You use a RoleBinding instead of a ClusterRoleBinding.</p>
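<p>For example, you can keep the ClusterRole but bind it with a namespaced RoleBinding, which grants the permissions only inside <code>mynamespace</code>:</p>

<pre><code>kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: myapp
  namespace: mynamespace
subjects:
- kind: ServiceAccount
  name: myapp
  namespace: mynamespace
roleRef:
  kind: ClusterRole
  name: myapp
  apiGroup: rbac.authorization.k8s.io
</code></pre>

<p>A RoleBinding is allowed to reference a ClusterRole; the grant is then scoped to the binding's namespace. (You could also downgrade the ClusterRole to a namespaced Role, since nothing here needs cluster scope.)</p>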
coderanger
<p>I am running simple native Java 8 code to spawn a number of threads. These threads connect to the database through their own dedicated OJDBC connections. As the database is continuously populated with records, these threads use their dedicated connections to perform various tasks in the database. Each thread polls the database after some time to fetch records, processes them, and then re-polls. The connection remains the same for the lifetime of the thread. This whole setup works fine if I run it on a simple VM. There are no connection closures, but as soon as I moved this Java code to Kubernetes, problems started arising. After a while, each thread starts throwing the following exception:</p>

<pre><code>java.sql.SQLRecoverableException: Closed Connection
  at oracle.jdbc.driver.PhysicalConnection.rollback(PhysicalConnection.java:3694)
</code></pre>

<p>There is no re-initiation of the JDBC connection because the threads assume that the connection is dedicated and our backend system does not close it. This connection closure only happens in Kubernetes, and at random, so I am curious: are there any specific network settings I need to apply to make dedicated connections work in Kubernetes?</p>
Hassnain Alvi
<p>Echoing down from comments, you probably want to turn on TCP keepalives but if that's impossible look at the <code>net.netfilter.nf_conntrack_tcp_timeout_established</code> sysctl and similar conntrack settings. You can also potentially bypass the proxy mesh using a headless-mode Service instead though that would likely impact your failover process so be sure to check that carefully.</p>
coderanger
<p>I am trying to build a docker image with postgres data included. I am following the link below.</p>

<pre><code>https://sharmank.medium.com/build-postgres-docker-image-with-data-included-489bd58a1f9e
</code></pre>

<p>It is built with data, as below.</p>

<pre><code>REPOSITORY             TAG      IMAGE ID       CREATED       SIZE
db_image_with_data     latest   48b5d776e7d6   2 hours ago   1.67GB
</code></pre>

<p>I can see the data is available in the image using the command below.</p>

<pre><code>docker run --env PGDATA=postgres -p 5432:5432 -I db_image_with_data
</code></pre>

<p>I have pushed the same image to Docker Hub and configured the same image in Kubernetes.</p>

<p>But only the database was created; no data was populated in it.</p>

<pre><code>kubectl exec my-database-7c4bb7bdd7-m8sjd -n dev-app -- sh -c 'psql -U &quot;postgres&quot; -d &quot;devapp&quot; -c &quot;select * from execution_groups&quot;'
 id | groupname | grouptype | user_id | tag    | created_at | updated_at | group_id
----+-----------+-----------+---------+--------+------------+------------+----------
(0 rows)
</code></pre>

<p>Is there anything I am missing here?</p>
User1984
<p>In the many years since that post came out, it looks like the <code>postgres</code> community container image has been tweaked to automatically provision a volume for the data when running under Docker (via the <code>VOLUME</code> directive). This means your content was stored outside the running container and thus is not part of the saved image. AFAIK this cannot be disabled so you'll have to build your own base image (or find another one to use).</p>
coderanger
<p>I want to create an environment variable in the pods created by my StatefulSet which would consist of their FQDN. As the DNS domain ".cluster.local" can differ between clusters, like ".cluster.abc" or ".cluster.def", I want to populate it dynamically based on the cluster settings. I checked that <code>fieldRef.fieldPath</code> doesn't expose anything like it.</p>

<p>Is there any other option to do that?</p>
Akshay Nagpal
<p>This is not a feature of Kubernetes. You would need to use a higher level templating system like Kustomize or Helm.</p>
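<p>A sketch of how that could look with Helm, passing the cluster domain in as a value (the <code>clusterDomain</code> value and the headless service name <code>myapp</code> are assumptions here):</p>

<pre><code># values.yaml
clusterDomain: cluster.local
</code></pre>

<pre><code># in the StatefulSet pod template
env:
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: POD_FQDN
  value: "$(POD_NAME).myapp.$(POD_NAMESPACE).svc.{{ .Values.clusterDomain }}"
</code></pre>

<p>The <code>$(VAR)</code> references use Kubernetes dependent environment variable expansion, so only the domain suffix needs to come from the template.</p>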
coderanger
<p>I'm aware that I can use <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/update-existing-nodepools" rel="nofollow noreferrer">k8s node affinity</a> to select for nodes that have a value of a label <code>gt</code> than some value. However is it possible to make k8s prefer to schedule on nodes that have the <em>greatest</em> value? I.e. if one node has <code>schedule-on-me: 5</code> and another has <code>schedule-on-me: 6</code> it will prefer to schedule on the one with the higher value?</p>
user13468984
<p>Not directly. You can tune weights in the scheduler but I don't think any those would help here so would need a custom scheduler extender webhook.</p>
coderanger
<p>I have a Kubernetes cluster with a working Ingress config for one REST API. Now I want to add a port forward to my MQTT adapter to this config, but I'm having trouble finding a way to add a TCP rule. The Kubernetes docs only show an HTTP example. <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a></p>

<p>I'm pretty new to Kubernetes and have problems adapting other configs, because whatever I find looks totally different from what I found in the Kubernetes docs.</p>

<p>I have used a regular nginx webserver with letsencrypt to secure TCP connections. I hope this works with the ingress controller, too.</p>

<p>My goal is to send messages via MQTT with TLS to my cluster. Does someone have the right docs for this or know how to add the config?</p>

<p>My config looks like this:</p>

<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ratings-web-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
  - hosts:
    - example.com
    secretName: ratings-web-cert
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: test-api
          servicePort: 8080
        path: /
</code></pre>
JWo
<p>the Ingress system only handles HTTP traffic in general. A few Ingress Controllers support custom extensions for non-HTTP packet handling but it's different for each. <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/</a> shows how to do this specifically for ingress-nginx, as shown there you configure it entirely out of band via some ConfigMaps, not via the Ingress object(s).</p> <p>What you probably actually want is a LoadBalancer type Service object instead.</p>
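<p>For ingress-nginx, a sketch of that out-of-band config, assuming an MQTT broker Service named <code>mqtt-broker</code> in the <code>default</code> namespace listening on 8883:</p>

<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # external port -&gt; namespace/service:port
  "8883": "default/mqtt-broker:8883"
</code></pre>

<p>The controller also has to be started with <code>--tcp-services-configmap=ingress-nginx/tcp-services</code>, and port 8883 has to be exposed on the controller's own Service. For MQTT over TLS specifically, terminating TLS in the broker behind a LoadBalancer Service is usually the simpler route.</p>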
coderanger
<p>I'm projecting a token into a pod in order to use this token to authenticate into an external system. I do not fully trust the code that can potentially run into this pod, so I would like to use the token projection to perform the authentication and then remove the projected token so that the code that will run at a later time cannot use it.</p> <p>When deleting the projected token I receive an answer that the filesystem is read only:</p> <pre><code>rm: can't remove '/var/run/secrets/tokens/..data': Read-only file system rm: can't remove '/var/run/secrets/tokens/vault-token': Read-only file system rm: can't remove '/var/run/secrets/tokens/..2019_12_06_09_50_26.580875372/vault-token': Read-only file system </code></pre> <p>When mounting the file system I specified that I want to mount it read write (I use a PodPreset to inject the projected folder into pods):</p> <pre><code>apiVersion: settings.k8s.io/v1alpha1 kind: PodPreset metadata: name: pod-preset namespace: my-namespace spec: selector: matchLabels: my-pod: job env: volumeMounts: - name: token-mounter mountPath: /var/run/secrets/tokens readOnly: false volumes: - name: token-mounter projected: sources: - serviceAccountToken: path: vault-token expirationSeconds: 7200 audience: vault </code></pre> <p>Is there any way to make the projected file system writable or, in general, to remove the projected token?</p>
BPas
<p>No, as it says it uses a read only ramdisk so you can’t change things. I’m not 100% sure this is possible but you could try using an initContainer to copy the token to a r/w ramdisk volume and then skip mounting the token volume in the main container entirely.</p>
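<p>A sketch of that idea (untested, but structurally what it would look like):</p>

<pre><code>spec:
  volumes:
  - name: token-mounter
    projected:
      sources:
      - serviceAccountToken:
          path: vault-token
          expirationSeconds: 7200
          audience: vault
  - name: scratch
    emptyDir:
      medium: Memory   # a read/write ramdisk
  initContainers:
  - name: copy-token
    image: busybox
    command: [cp, /projected/vault-token, /scratch/vault-token]
    volumeMounts:
    - name: token-mounter
      mountPath: /projected
    - name: scratch
      mountPath: /scratch
  containers:
  - name: main
    # mounts only the writable copy, so the code can delete it after use
    volumeMounts:
    - name: scratch
      mountPath: /var/run/secrets/tokens
</code></pre>

<p>Note the projected volume is only mounted in the init container, so the main container never sees the read-only original.</p>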
coderanger
<p>Dockerfile</p>

<pre><code>FROM centos
RUN yum install java-1.8.0-openjdk-devel -y
RUN curl --silent --location http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo | tee /etc/yum.repos.d/jenkins.repo
RUN rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key
RUN yum install jenkins --nogpgcheck -y
RUN yum install jenkins -y
RUN yum install -y initscripts
CMD /etc/init.d/jenkins start &amp;&amp; /bin/bash
</code></pre>

<p>The output of the describe command:</p>

<p><a href="https://i.stack.imgur.com/iLyR2.jpg" rel="nofollow noreferrer">enter image description here</a></p>

<p>Output of the logs: Starting Jenkins [ OK ]</p>
Ajinkya Khandave
<p>There isn't an init system inside a container so this isn't going to work. Likely the specific issue is that with plain Docker you are using <code>docker run -it</code> so there is a stdin, so <code>bash</code> starts in interactive mode and keeps running. In Kubernetes there is no input so bash exits immediately and the container exits with it. You can't run stuff in the background like that. Maybe just use the <a href="https://hub.docker.com/r/jenkins/jenkins" rel="nofollow noreferrer">official <code>jenkins/jenkins</code> image</a>? Or at least check out how it's built.</p>
coderanger
<p>My source k8s cluster version is v1.11.5 and the destination k8s cluster version is v1.15.2, with an in-place upgrade.</p>

<p>K8S cluster status: three master nodes with the k8s control plane:</p>

<pre><code>NAME   STATUS   ROLES    AGE   VERSION
a1     Ready    master   23h   v1.11.5
a2     Ready    master   23h   v1.11.5
a3     Ready    master   22h   v1.11.5
</code></pre>

<p>I didn't use kubeadm upgrade because of the enforced k8s skew policies. I followed the steps below:</p>

<p>[step 0] kubectl drain node a3 and remove all k8s components on a3.</p>

<p>[step 1] use kubeadm init to install v1.15.2 k8s on node a3 and undrain node a3.</p>

<p>[step 2] redo the steps above on node a2 and a1.</p>

<p>After installing v1.15.2 k8s on each node, the K8S cluster status becomes:</p>

<pre><code>NAME   STATUS   ROLES    AGE   VERSION
a1     Ready    master   23h   v1.15.2
a2     Ready    master   23h   v1.15.2
a3     Ready    master   22h   v1.15.2
</code></pre>

<p><strong>So my question is: is there any problem with this upgrade solution?</strong></p>

<p>Because the k8s version skew policy says k8s does not support upgrading across minor versions. For example, I have to upgrade k8s from v1.11 to v1.12 and then from v1.12 to v1.13.</p>
Miro
<p>Yes, you should generally do each version upgrade. Not only can there be intermediary upgrades that need to happen for things like CNI plugins, running Kubelet with more than one version of skew is not supported. So if you were going to do this, you would instead have to drain and stop every node, then do the upgrade, then start them again :) That would obviously mean a complete downtime, as opposed to doing each hop which allows rolling upgrades, which are usually preferred.</p>
coderanger
<p>When kops is used to create a k8s cluster, a <code>/srv/kubernetes</code> folder gets created and distributed to all the nodes, populated with files automatically created by the provisioning process. My question is whether it's possible for the cluster admin to add files to this volume so that such files can be referenced by passing command-line arguments to kubernetes processes? If so, how to add files so that they are ready when the nodes boot up?</p>
Michael Martinez
<p>Use the <a href="https://godoc.org/k8s.io/kops/pkg/apis/kops#FileAssetSpec" rel="nofollow noreferrer"><code>fileAssets</code></a> key in your Cluster manifest:</p> <pre><code>fileAssets: - content: | asdf name: something path: /srv/kubernetes/path/to/file </code></pre>
coderanger
<blockquote> <p><em>How to change the Docker <code>ENTRYPOINT</code> in a Kubernetes deployment, without changing also the Docker <code>CMD</code>?</em></p> </blockquote> <p>In the Pod I would do</p> <pre><code>image: &quot;alpine&quot; entrypoint: &quot;/myentrypoint&quot; </code></pre> <p>but this overwrites either <code>ENTRYPOINT</code> <em><strong>and</strong></em> the <code>CMD</code> from the <code>Dockerfile</code>.</p> <p>The <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#notes" rel="nofollow noreferrer">documentation</a> doesn't mention it, but seems like a big use-case missed.</p>
Kamafeather
<p>That's not a thing.</p>

<ul>
<li><code>ENTRYPOINT</code> (in Dockerfile) is equal to <code>command:</code> (in PodSpec)</li>
<li><code>CMD</code> (in Dockerfile) equals <code>args:</code> (in PodSpec)</li>
</ul>

<p>Note, though, that when you set <code>command</code> but not <code>args</code>, the image's default <code>CMD</code> is ignored rather than passed to your new entrypoint, so repeat it in <code>args</code> if you still want it.</p>
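<p>For example:</p>

<pre><code>containers:
- name: app
  image: alpine
  command: ["/myentrypoint"]
  # the image CMD is not inherited once command is set; restate it if needed:
  args: ["whatever", "the", "CMD", "was"]
</code></pre>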
coderanger
<p>I'm reading a lot of documentation about <a href="https://github.com/kubernetes/sample-controller" rel="nofollow noreferrer">CRD Controllers</a>.</p>

<p>I've implemented one with my business logic, and sometimes I get this race condition:</p>

<ul>
<li>I create a custom object, let's call it <code>Foo</code> with name <code>bar</code></li>
<li>My business logic applies; let's say it creates a <code>Deployment</code> with a <strong>generated</strong> name, and I save the name (as a reference) in the <code>Foo</code> object</li>
<li>I remove the custom object</li>
<li>I quickly recreate it with the same name, and sometimes I get this log:</li>
</ul>

<pre><code>error syncing 'default/bar': Operation cannot be fulfilled on Foo.k8s.io "bar": the object has been modified; please apply your changes to the latest version and try again, requeuing
</code></pre>

<p>The thing is, because my <code>Deployment</code> has a generated name, if the save of the <code>Foo</code> fails I end up with two <code>Deployment</code>s with two names.</p>

<p>I have not found a way to fix it yet, but it raises a question.</p>

<p><strong>What if I have multiple controllers running?</strong> </p>

<p>I've started two controllers, and I got the same race condition just by creating a new object.</p>

<p>So, what is the best design to scale a CRD controller and avoid this kind of race condition?</p>
XciD
<p>Generally you only run one copy of a controller, or at least only one is active at any given time. As long as you were careful to write your code convergently then it shouldn't technically matter, but there really isn't much reason to run multiple.</p>
coderanger
<p>I have applications that need each pod to have a public IP, with ports exposed on that public IP.</p>

<p>I am trying not to use virtual machines.</p>

<p>MetalLB has a similar feature, but it binds an address to a service, not a pod. And it wastes a lot of bandwidth.</p>
Lod
<p>Technically this is up to your CNI plugin, however very few support this. Pods generally live in the internal cluster network and are exposed externally through either NodePort or LoadBalancer services, for example using MetalLB. Why do you think this &quot;wastes bandwidth&quot;? If you're concerned about internal rerouting, you may want to enable externalTrafficPolicy: Local to reduce internal bounces but your internal network probably has a lot more bandwidth available than your internet connection so it that's not usually a reason to worry.</p>
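<p>For example, on a MetalLB-backed Service:</p>

<pre><code>apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # only deliver to pods on the node that received the traffic
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
</code></pre>

<p>This also preserves the client source IP, which is often the other reason people reach for per-pod public IPs.</p>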
coderanger
<p>I'm working on a personal project involving a browser-based code editor (think <a href="https://repl.it" rel="nofollow noreferrer">https://repl.it</a>). My plan:</p> <p>1) Constantly stream the code being written to a remote docker volume on kubernetes.</p> <p>2) Execute this code when the user presses "run".</p> <p>I've already started working on the streaming infrastructure, and have a good grasp on how I'd like to do it. Regarding the code execution, however, I'm in need of some guidance.</p> <p>Idea A: I was thinking that I could have two docker containers, one web server and one "environment", sitting on the same VM. When a request would come into the webserver, it would then run a <code>docker exec ...</code> on the environment. </p> <p>Idea B: I use <code>kubectl</code>, specifically <code>kubectl exec</code> to execute the code on the container.</p> <p>A few things to note. I want to make the "environment" container interchangeable, that is, my app should be able to support python, js, etc.. Any thoughts? </p>
Jay K.
<ol> <li>THIS IS A VERY BAD IDEA DO NOT DO IT</li> <li>You would want to run each snippet in a new container for maximum isolation.</li> </ol>
coderanger
<p>I have a Java/Spring Boot application running in a Kubernetes pod. The logs are configured to go to stdout, and fluentd picks them up from the default path:</p>

<pre><code>&lt;source&gt;
  @type tail
  path /var/log/containers/*.log
  pos_file /pos/containers.pos
  time_key time
  time_format %Y-%m-%dT%H:%M:%S.%NZ
  tag kubernetes.*
  format json
  read_from_head true
&lt;/source&gt;
</code></pre>

<p>In my Logback XML config I have a JSON file appender: </p>

<pre><code>&lt;appender name="jsonAppender" class="ch.qos.logback.core.rolling.RollingFileAppender"&gt;
    &lt;file&gt;${LOG_PATH}/spring-boot-logger.log&lt;/file&gt;
    &lt;encoder class="net.logstash.logback.encoder.LogstashEncoder"/&gt;
    &lt;rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy"&gt;
        &lt;maxIndex&gt;1&lt;/maxIndex&gt;
        &lt;fileNamePattern&gt;${LOG_PATH}.%i&lt;/fileNamePattern&gt;
    &lt;/rollingPolicy&gt;
    &lt;KeyValuePair key="service" value="java-app" /&gt;
    &lt;triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy"&gt;
        &lt;MaxFileSize&gt;1MB&lt;/MaxFileSize&gt;
    &lt;/triggeringPolicy&gt;
&lt;/appender&gt;
</code></pre>

<p>How do I integrate this separate log file (in addition to stdout) into my Kubernetes setup with fluentd, so my JSON logs are shipped from a different path?</p>
Pedro Henrique
<p>You need to: </p> <ol> <li>move that file onto an emptyDir volume (or hostPath I guess but use emptyDir) and then </li> <li>run fluentd/bit as a sidecar which reads from that volume and </li> <li>forwards to the rest of your fluentd setup.</li> </ol>
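<p>A sketch of steps 1 and 2, assuming <code>LOG_PATH</code> is pointed at the shared volume:</p>

<pre><code>spec:
  volumes:
  - name: app-logs
    emptyDir: {}
  containers:
  - name: java-app
    env:
    - name: LOG_PATH
      value: /logs             # logback then writes /logs/spring-boot-logger.log
    volumeMounts:
    - name: app-logs
      mountPath: /logs
  - name: fluent-bit
    image: fluent/fluent-bit
    volumeMounts:
    - name: app-logs
      mountPath: /logs
      readOnly: true
</code></pre>

<p>The sidecar's own config then tails <code>/logs/spring-boot-logger.log</code> and forwards to the rest of your aggregation pipeline.</p>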
coderanger
<p>When I deploy the new release of the Kubernetes app I got that error</p> <pre><code>Error: secret &quot;env&quot; not found </code></pre> <p><a href="https://i.stack.imgur.com/7TbF4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7TbF4.png" alt="enter image description here" /></a></p> <p>even I have env in <strong>Custom Resource Definitions</strong> --&gt; <strong>sealedsecrets.bitnami.com</strong></p> <p><a href="https://i.stack.imgur.com/BMtg4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BMtg4.png" alt="enter image description here" /></a></p> <p><strong>env.yaml</strong></p> <pre><code>apiVersion: bitnami.com/v1alpha1 kind: SealedSecret metadata: creationTimestamp: null name: env namespace: api spec: encryptedData: AUTH_COGNITO: AgCIxZX0Zv6gcK2p ---- template: metadata: creationTimestamp: null name: env namespace: api type: Opaque </code></pre> <p><strong>deployment.yaml</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: {{ .Release.Name }} labels: app: {{ .Release.Name }} spec: revisionHistoryLimit: 2 replicas: {{ .Values.replicas }} selector: matchLabels: app: {{ .Release.Name }} template: metadata: labels: app: {{ .Release.Name }} spec: containers: - name: {{ .Release.Name }} image: &quot;{{ .Values.imageRepository }}:{{ .Values.tag }}&quot; env: {{- include &quot;api.env&quot; . | nindent 12 }} resources: limits: memory: {{ .Values.memoryLimit }} cpu: {{ .Values.cpuLimit }} requests: memory: {{ .Values.memoryRequest }} cpu: {{ .Values.cpuRequest }} {{- if .Values.healthCheck }} livenessProbe: httpGet: path: /healthcheck port: 4000 initialDelaySeconds: 3 periodSeconds: 3 timeoutSeconds: 3 {{- end }} imagePullSecrets: - name: {{ .Values.imagePullSecret }} {{- if .Values.tolerations }} tolerations: {{ toYaml .Values.tolerations | indent 8 }} {{- end }} {{- if .Values.nodeSelector }} nodeSelector: {{ toYaml .Values.nodeSelector | indent 8 }} {{- end }} </code></pre> <p><strong>UPDATE to my question</strong> my secrets I don't have secret called <code>env</code></p> <p>plus that error in <code>regcred</code> inside <code>Sealedsecrets.bitnami.com</code></p> <pre><code>Failed to unseal: no key could decrypt secret (.dockerconfigjson) </code></pre> <p><a href="https://i.stack.imgur.com/zfiAo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zfiAo.png" alt="enter image description here" /></a></p>
Mina Fawzy
<p>You ran <code>kubeseal</code> against the wrong Kubernetes cluster or you tried to edit the name or namespace after encrypting without enabling those in the encryption mode. More likely the first.</p>
coderanger
<p>I want to run two commands in my cronjob.yaml one after each other. The first command runs a python-scipt and the second changes an environment variable in another pod. The commands added separately work.</p> <p>This is what I'm trying right now (found the syntax in <a href="https://stackoverflow.com/q/33887194/11242263">How to set multiple commands in one yaml file with Kubernetes?</a> ) but it gives me an error.</p> <pre><code>command: - "/bin/bash" - "-c" args: ["python3 recalc.py &amp;&amp; kubectl set env deployment recommender --env="LAST_MANUAL_RESTART=$(date)" --namespace=default"] </code></pre> <p>The error I get in cloudbuild:</p> <pre><code>error converting YAML to JSON: yaml: line 30: did not find expected ',' or ']' </code></pre> <p>(for the long line)</p>
Florian
<p>You have nested double quotes, try something more like this:</p> <pre><code>command: - /bin/bash - -c - python3 recalc.py &amp;&amp; kubectl set env deployment recommender --env="LAST_MANUAL_RESTART=$(date)" --namespace=default </code></pre> <p>i.e. without the outer double quotes.</p>
coderanger
<p>I am <a href="https://piotrminkowski.com/2021/02/18/blue-green-deployment-with-a-database-on-kubernetes/" rel="nofollow noreferrer">reading about blue green deployment with database changes on Kubernetes.</a> It explains very clearly and in detail how the process works:</p> <ol> <li>deploy new containers with the new versions while still directing traffic to the old containers</li> <li>migrate database changes and have the services point to the new database</li> <li>redirect traffic to the new containers and remove the old containers when there are no issues</li> </ol> <p>I have some questions particularly about the moment we switch from the old database to the new one.</p> <p>In step 3 of the article, we have <code>person-v1</code> and <code>person-v2</code> services that both still point to the unmodified version of the database (postgres v1):</p> <p><a href="https://i.stack.imgur.com/pCvg1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pCvg1.png" alt="before database migration" /></a></p> <p>From this picture, having <code>person-v2</code> point to the database is probably needed to establish a TCP connection, but it would likely fail due to incompatibility between the code and DB schema. But since all incoming traffic is still directed to <code>person-v1</code> this is not a problem.</p> <p>We now modify the database (to postgres v2) and switch the traffic to <code>person-v2</code> (step 4 in the article). <strong>I assume that both the DB migration and traffic switch happen at the same time?</strong> That means it is impossible for <code>person-v1</code> to communicate with postgres v2 or <code>person-v2</code> to communicate with postgres v1 at any point during this transition? Because this can obviously cause errors (i.e. inserting data in a column that doesn't exist yet/anymore).</p> <p><a href="https://i.stack.imgur.com/PcJO5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PcJO5.png" alt="after database migration" /></a></p> <p>If the above assumption is correct, then <strong>what happens if during the DB migration new data is inserted in postgres v1</strong>? Is it possible for data to become lost with unlucky timing? Just because the traffic switch happens at the same time as the DB switch, does not mean that any ongoing processes in <code>person-v1</code> can not still execute DB statements. It would seem to me that any new inserts/deletes/updates would need to propagate to postgres v2 as well for as long as the migration is still in progress.</p>
Babyburger
<p>Even when doing blue-green for the application servers, you still have to follow normal rules of DB schema compatibility. All schema changes need to be backwards compatible for whatever you consider one full release cycle to be. Both services talk to the same DB during the switchover time but thanks to careful planning each can understand the data from the other and all is well.</p>
coderanger
<p>Currently I have Spring containers running in a Kubernetes cluster. I am going through Udacity's Spring web classes and find the Eureka server interesting.</p>

<p>Is there any benefit in using the Eureka server within the cluster?</p>

<p>Any help will be appreciated.</p>

<p>Thank you</p>
Ricardo Carballo
<p>This is mostly an opinion question but ... probably not? The core Service system does most of the same thing. But if you're specifically using Eureka's service metadata system then maybe?</p>
coderanger
<p>Declarative definitions for resources in a kubernetes cluster such as <code>Deployments</code>, <code>Pods</code>, <code>Services</code> etc.. What are they referred to as in the Kubernetes eco-system? </p> <p>Possibilities i can think of: </p> <ul> <li>Specifications (specs)</li> <li>Objects</li> <li>Object configurations</li> <li>Templates</li> </ul> <p>Is there a consensus standard?</p> <hr> <p>Background</p> <p>I'm writing a small CI tool that deploys single or multiple k8s YAML files. I can't think of what to name these in the docs and actual code.</p>
AndrewMcLagan
<p>The YAML form is generally a manifest. In a Helm chart they are templates for manifests (or more often just "templates"). When you send them to the API you parse the manifest and it becomes an API object. Most types/kinds (you can use either term) have a sub-struct called <code>spec</code> (eg. <code>DeploymentSpec</code>) that contains the declarative specification of whatever that type is for. However that is not required and some core types (ConfigMap, Secret) do not follow that pattern.</p>
coderanger
<p>I am deploying Metrics Server <a href="https://github.com/kubernetes-sigs/metrics-server" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/metrics-server</a> and notice there is one requirement in the README: <code>Metrics Server must be reachable from kube-apiserver by container IP address (or node IP if hostNetwork is enabled).</code> I wonder how to satisfy this other than deploying a CNI plugin on the kube-apiserver host or setting the container network to hostNetwork?</p>
wartih
<p>You do it by doing it, there's no specific answer. If you run kube-apiserver within the cluster (e.g. as a static pod) then this is probably already the case. Otherwise you have to arrange for this to work in whatever networking layout you have.</p>
coderanger
<p>I know the usage scenario for a Kubernetes headless service <strong>with</strong> a selector. But what's the usage scenario for a Kubernetes headless service <strong>without</strong> a selector?</p>
Cain
<p>Aliasing external services into the cluster DNS.</p>
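<p>For example, a headless Service with no selector, paired with a manually managed Endpoints object pointing at an external database (the IP here is a placeholder):</p>

<pre><code>apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  clusterIP: None
  ports:
  - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db    # must match the Service name
subsets:
- addresses:
  - ip: 10.1.2.3       # the external server
  ports:
  - port: 5432
</code></pre>

<p>Pods in the cluster can then use <code>external-db</code> as a DNS name, and swapping the backend later is just an Endpoints edit.</p>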
coderanger
<p>If I move a relevant config file and run <code>kubectl proxy</code> it will allow me to access the Kubernetes dashboard through this URL:</p> <pre><code>http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ </code></pre> <p>However if I try to access the node directly, without <code>kubectl proxy</code>, I will get a 403 Forbidden.</p> <pre><code>http://dev-master:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login </code></pre> <p>Our kubernetes clusters are hidden inside a private network that users need to VPN in to; furthermore only some of us can talk to the master node of each of our clusters after authenticating to the VPN. As such, running <code>kubectl proxy</code> is a redundant step, and choosing the appropriate config file for each cluster is an additional pain, especially when we want to compare the state of different clusters.</p> <p>What needs to be changed to allow "anonymous" HTTP access to the dashboard of these already-secured kubernetes master nodes?</p>
Liz Av
<p>You would want to set up a Service (either NodePort or LoadBalancer) for the dashboard pod(s) to expose it to the outside world (well, outside from the PoV of the cluster, which is still an internal network for you).</p>
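<p>As a rough sketch (the selector labels and target port vary between dashboard versions, so treat these values as assumptions to verify against your actual deployment):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: dashboard-external
  namespace: kube-system
spec:
  type: NodePort        # or LoadBalancer, if your environment supports it
  selector:
    k8s-app: kubernetes-dashboard
  ports:
    - port: 443
      targetPort: 8443  # the dashboard's HTTPS port in recent versions
      nodePort: 30443
</code></pre> <p>Bear in mind this deliberately bypasses the authentication that <code>kubectl proxy</code> provides, so anyone on the VPN will reach the dashboard's own login screen directly.</p>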
coderanger
<p>Is there a way to configure the minikube cluster to automatically pull &quot;all&quot; the latest docker images from GCR for all the pods in the cluster and restart those pods once you start your minikube cluster?</p>
efgdh
<p>You can use the <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#alwayspullimages" rel="nofollow noreferrer">AlwaysPullImages admission controller</a>, which forces <code>imagePullPolicy</code> to <code>Always</code> and therefore repulls images on pod restart. Then just restart all your pods.</p>
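<p>With minikube, that could look roughly like this (a sketch; the <code>--extra-config</code> key assumes the default kubeadm bootstrapper, so verify it against your minikube version):</p> <pre><code># enable the admission controller when starting the cluster
minikube start --extra-config=apiserver.enable-admission-plugins=AlwaysPullImages

# delete the pods in the current namespace; their controllers recreate
# them, and the new pods repull their images
kubectl delete pods --all
</code></pre>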
coderanger
<p>Assuming that the pods have exposed port 80, how can I send a request to all the running pods, rather than just one?</p> <p>Since the load balancer would route the traffic to only 1 pod. (Note: using the HAProxy load balancer here, FYI)</p>
shrw
<p>There is no particular way; <code>kubectl exec</code> only works on one container at a time, so you will need to call it in a loop if you want to use it on many.</p>
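<p>A rough sketch of such a loop, assuming the pods carry a hypothetical label <code>app=myapp</code> and the image contains <code>curl</code>:</p> <pre><code># run one request inside each matching pod, one at a time
for pod in $(kubectl get pods -l app=myapp -o name); do
  kubectl exec "$pod" -- curl -s http://localhost:80/
done
</code></pre> <p>If you want to hit the pods from outside instead of via <code>exec</code>, you can loop over the pod IPs from <code>kubectl get pods -o wide</code> the same way.</p>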
coderanger
<pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: testingHPA
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: my_app
  minReplicas: 3
  maxReplicas: 5
  targetCPUUtilizationPercentage: 85
</code></pre> <p>Above is the normal hpa.yaml structure. Is it possible to use a Pod as the <code>kind</code> and autoscale it?</p>
Museb Momin
<p>A single Pod is only ever one Pod. It does not have any mechanism for horizontal scaling because the Pod is itself the unit that every other scaling mechanism replicates.</p>
coderanger
<p>GKE uses Calico for networking by default. Is there an option to use some other CNI plugin?</p>
Manohar
<p>No, GKE does not offer such an option; you have to use the provided Calico.</p>
coderanger
<p>I am having a rough time trying to create a docker image that exposes <a href="https://developers.cloudflare.com/argo-tunnel/downloads/" rel="nofollow noreferrer">Cloudflare's Tunnel</a> executable for linux. Thus far I got to this stage with my docker image for it (the image comes from <a href="https://github.com/jakejarvis/docker-cloudflare-argo/blob/master/Dockerfile" rel="nofollow noreferrer">https://github.com/jakejarvis/docker-cloudflare-argo/blob/master/Dockerfile</a>):</p> <pre><code>FROM ubuntu:18.04

LABEL maintainer="Jake Jarvis &lt;jake@jarv.is&gt;"

RUN apt-get update \
  &amp;&amp; apt-get install -y --no-install-recommends wget ca-certificates \
  &amp;&amp; rm -rf /var/lib/apt/lists/*

RUN wget -O cloudflared.tgz https://bin.equinox.io/c/VdrWdbjqyF/cloudflared-stable-linux-amd64.tgz \
  &amp;&amp; tar -xzvf cloudflared.tgz \
  &amp;&amp; rm cloudflared.tgz \
  &amp;&amp; chmod +x cloudflared

ENTRYPOINT ["./cloudflared"]
</code></pre> <p>And following the <a href="https://developers.cloudflare.com/argo-tunnel/reference/sidecar/" rel="nofollow noreferrer">official documentation for their kubernetes setup</a> I added it to my deployment as a sidecar (here <code>cloudflare-argo:5</code> is the image built from the Dockerfile above):</p> <pre><code>      - name: cloudflare-argo
        image: my-registry/cloudflare-argo:5
        imagePullPolicy: Always
        command: ["cloudflared", "tunnel"]
        args:
        - --url=http://localhost:8080
        - --hostname=my-website
        - --origincert=/etc/cloudflared/cert.pem
        - --no-autoupdate
        volumeMounts:
        - mountPath: /etc/cloudflared
          name: cloudflare-argo-secret
          readOnly: true
        resources:
          requests:
            cpu: "50m"
          limits:
            cpu: "200m"
      volumes:
      - name: cloudflare-argo-secret
        secret:
          secretName: my-website.com
</code></pre> <p>However once I deploy I get a <code>CrashLoopBackOff</code> error on my pod with the following <code>kubectl describe</code> output:</p> <blockquote> <p>Created container cloudflare-argo</p> <p>Error: failed to start container "cloudflare-argo": Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"cloudflared\": executable file not found in $PATH": unknown</p> </blockquote>
Ilja
<p>In the dockerfile it is <code>./cloudflared</code>, so that would be:</p> <pre><code>  command:
    - ./cloudflared
    - tunnel
    - --url=http://localhost:8080
    - --hostname=my-website
    - --origincert=/etc/cloudflared/cert.pem
    - --no-autoupdate
</code></pre> <p>(Also there is no reason to use both <code>command</code> and <code>args</code>; just pick one. If you drop the first item, use <code>args</code> instead.)</p>
coderanger
<p>Helm Chart has the concept of Version and appVersion.</p> <p>We are using the Version to document whether the content of the Helm Chart changed or not (e.g., a template such as deployment.yaml has a new Environment value, or a configmap.yaml has an additional value); for these scenarios the Version number should increase. We are using appVersion to document docker image tag changes (so the actual version of the Business Application; I know there can be multiple container images, but we are able to identify one of those as the main application and use its tag)...</p> <p>Now in our development process there can be multiple valid images of the Business Application (feature development, let's say feature1, feature2, feature3), so we can have a constellation like the following: [Helm Chart: myChart Version: 5.1 appVersion: feature1], [Helm Chart: myChart Version: 5.1 appVersion: feature2], [Helm Chart: myChart Version: 5.1 appVersion: feature3]. Most of the deployments are automated, but there can be cases where we have to tell somebody to deploy feature2.</p> <p>Now here comes the dilemma: in our Helm Repository we would have these 3 charts.</p> <pre><code>5.1-&gt;feature1
5.1-&gt;feature2
5.1-&gt;feature3
</code></pre> <p>But when I look at the Helm commands &quot;helm install&quot;, &quot;helm upgrade&quot;, &quot;helm pull&quot;, I only see &quot;--version&quot; as a parameter and no &quot;--appVersion&quot;, so it is not possible to install</p> <pre><code>helm upgrade -i myChartFeature2 myChart --version 5.1 --appVersion feature2
</code></pre> <p>We don't want to version our charts like &quot;5.1.0-feature1&quot;, because then we lose our ability to identify whether we have a new chart because something in the templates changed, or a new version because the business logic changed...</p> <p>So my question is</p> <ul> <li>is there a way to say I want to install this specific appVersion of my Chart?</li> </ul> <p>Thx for answers...</p>
posthumecaver
<p>appVersion doesn't work this way, it isn't involved in dependency management. It's just for humans to know &quot;this chart packages version 1.2 of Foobar&quot;, though these days many charts support multiple versions of the underlying thing so it usually is just set to the default one, if it's set at all.</p>
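<p>For illustration, the two fields sit side by side in <code>Chart.yaml</code> (Helm 3 shown), but only <code>version</code> participates in resolution:</p> <pre><code>apiVersion: v2
name: myChart
version: 5.1.0        # what --version selects; must be SemVer and unique per package
appVersion: feature2  # informational only; there is no --appVersion install flag
</code></pre> <p>So if you need to select by feature, that information has to be encoded somewhere Helm can actually see it, which in practice means the chart <code>version</code> or separate chart names.</p>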
coderanger
<p>I have a service deployed in Kubernetes and I am trying to optimize the <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container" rel="nofollow noreferrer">requested cpu resources</a>.</p> <p>For now, I have deployed 10 instances and set <code>spec.containers[].resources.limits.cpu</code> to <code>0.1</code>, based on the "average" use. However, it became obvious that this average is rather useless in practice because under constant load, the load increases significantly (to 0.3-0.4 as far as I can tell).</p> <p>What happens consequently, when multiple instances are deployed on the same node, is that this node is heavily overloaded; pods are no longer responsive, are killed and restarted etc.</p> <p>What is the best practice to find a good value? My current best guess is to increase the requested cpu to 0.3 or 0.4; I'm looking at Grafana visualizations and see that the pods on the heavily loaded node(s) converge there under continuous load. However, how can I know if they would use more load if they could before becoming unresponsive as the node is overloaded?</p> <p>I'm actually trying to understand how to approach this in general. I would expect an "ideal" service (presuming it is CPU-focused) to use close to <code>0.0</code> when there is no load, and close to 1.0 when requests are constantly coming in. With that assumption, should I set the <code>cpu.requests</code> to <code>1.0</code>, taking a perspective where actual constant usage is assumed?</p> <p>I have read some <a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-resource-requests-and-limits" rel="nofollow noreferrer">Kubernetes best practice guides</a>, but none of them seem to address how to set the actual value for cpu requests in practice in more depth than "find an average".</p>
Carsten
<p>Basically come up with a number that is your lower acceptable bound for how much the process runs. Setting a request of <code>100m</code> means that you are okay with a lower limit of your process running 0.1 seconds for every 1 second of wall time (roughly). Normally that should be some kind of average utilization, usually something like a P99 or P95 value over several days or weeks. Personally I usually look at a chart of P99, P80, and P50 (median) over 30 days and use that to decide on a value.</p> <p>Limits are a different beast, they are setting your CPU timeslice quota. This subsystem in Linux has some persistent bugs so unless you've specifically vetted your kernel as correct, I don't recommend using it for anything but the most hostile of programs.</p>
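<p>For example, if the P95 of observed usage over 30 days sits around 0.3 cores, the container spec would carry something like this (values are illustrative):</p> <pre><code>resources:
  requests:
    cpu: 300m   # the lower bound the scheduler reserves on the node
  # no cpu limit set, per the caveat about the CPU quota subsystem
</code></pre>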
coderanger
<p>An existing Pod (<code>P</code>) is running 3 containers for an API.</p> <p>To scale Pod <code>P</code> horizontally,</p> <p>Is it possible to add one (or n) more containers to the existing Pod (running 3 containers)?</p> <p>or</p> <p>Is the Pod ReplicaSet concept supposed to be applied for this scenario (to scale horizontally)?</p>
overexchange
<p>No, you don't use multi-container Pods for scaling. Pods with multiple containers are for cases where you need multiple daemons running together (on the same hardware) for a single &quot;instance&quot;. That's pretty rare for new users so you almost certainly want 3 replicas of a Pod with one container.</p>
coderanger
<p>By default most people seem to avoid running anything on the masters. These nodes are less likely to be re-provisioned or moved around than the rest of the cluster. It would make them a perfect fit for ingress controllers.</p> <p>Is there any security and/or management implications/risks in using the masters as ingress nodes?</p>
ITChap
<p>As always, the risk is that if your Ingress Controller eats all your IOPS (or memory or CPU but in this case it would probably be IOPS) then your control plane can go unavailable, leaving you fewer ways to fix the problem.</p>
coderanger
<p>I have a problem: I use eviction policies (evict-soft and evict-hard), and when my pods are evicted because of a resource shortage on one node, the pod dies and starts on another node, so during this time the service is down. What can I do to make the pod start on another node before it is killed on the first?</p>
Stas Fanin
<p>Run two of them and use a pod disruption budget so it won’t do a soft evict of both at the same time (or use affinity settings so they run on different nodes, or both).</p>
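<p>A minimal sketch of both pieces, assuming a hypothetical label <code>app: myapp</code> (on older clusters the PDB API group is <code>policy/v1beta1</code>):</p> <pre><code>apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb
spec:
  minAvailable: 1        # keep at least one replica available during disruptions
  selector:
    matchLabels:
      app: myapp
</code></pre> <p>And in the Deployment's pod template, to force the replicas onto different nodes:</p> <pre><code>affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: myapp
        topologyKey: kubernetes.io/hostname
</code></pre>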
coderanger
<p>I am running a very basic blogging app using Flask. It runs fine when I run it using Docker, i.e. <code>docker run -it -d -p 5000:5000 app</code>.</p> <pre class="lang-py prettyprint-override"><code> * Serving Flask app 'app' (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: on
 * Running on all addresses.
   WARNING: This is a development server. Do not use it in a production deployment.
 * Running on http://10.138.0.96:5000/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 144-234-816
</code></pre> <p>This runs on my localhost:5000 just fine.</p> <p>But when I deploy this in Minikube, it says</p> <p><code>This site can’t be reached 34.105.79.215 refused to connect.</code></p> <p>I use this workflow in Kubernetes</p> <pre><code>$ eval $(minikube docker-env)
$ docker build -t app:latest .
$ kubectl apply -f deployment.yaml (contains deployment &amp; service)
</code></pre> <p><code>kubectl logs app-7bf8f865cc-gb9fl</code> returns</p> <pre><code> * Serving Flask app &quot;app&quot; (lazy loading)
 * Environment: production
   WARNING: Do not use the development server in a production environment.
   Use a production WSGI server instead.
 * Debug mode: on
 * Running on all addresses.
   WARNING: This is a development server. Do not use it in a production deployment.
 * Running on http://172.17.0.3:5000/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 713-503-298
</code></pre> <p><strong>Dockerfile</strong></p> <pre><code>FROM ubuntu:18.04

WORKDIR /app

COPY . /app

RUN apt-get update &amp;&amp; apt-get -y upgrade
RUN apt-get -y install python3 &amp;&amp; apt-get -y install python3-pip
RUN pip3 install -r requirements.txt

EXPOSE 5000
ENTRYPOINT [&quot;python3&quot;]
CMD [&quot;app.py&quot;]
</code></pre> <p><strong>deployment.yaml</strong></p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    app: app
  ports:
    - protocol: &quot;TCP&quot;
      port: 5000
      targetPort: 5000
  type: LoadBalancer

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      app: app
  replicas: 1
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: app:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 5000
</code></pre> <p>Also I noticed that when running from the Docker container, <code>docker ps</code> shows PORTS as <code>0.0.0.0:5000-&gt;5000/tcp</code>, but the Kubernetes ports show <code>127.0.0.1:32792-&gt;22/tcp, 127.0.0.1:32791-&gt;2376/tcp, 127.0.0.1:32790-&gt;5000/tcp, 127.0.0.1:32789-&gt;8443/tcp, 127.0.0.1:32788-&gt;32443/tcp</code></p>
Tanuj Dutta
<p>The <code>port:</code> on a Service only controls the internal port, the one that's part of the ClusterIP service. By default the node port is randomly assigned from the available range. This is because while the <code>port</code> value only has to be unique within the Service itself (couldn't have the same port go to two places, would make no sense), node ports are a global resource and have to be globally unique. You can override it via <code>nodePort: whatever</code> in the Service definition but I wouldn't recommend it.</p> <p>Minikube includes a helper to manage this for you, run <code>minikube service app-service</code> and it will load the URL in your browser mapped through the correct node port.</p>
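<p>If you did want a fixed port, a sketch of the override (the value must fall in the node port range, 30000-32767 by default):</p> <pre><code>ports:
  - protocol: &quot;TCP&quot;
    port: 5000        # ClusterIP port, inside the cluster
    targetPort: 5000  # container port
    nodePort: 30500   # fixed external port on every node
</code></pre>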
coderanger
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: portal-ingress-home
  namespace: portal
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    #nginx.ingress.kubernetes.io/rewrite-target: /$2
    ingress.kubernetes.io/whitelist-source-range: "213.#####9/20"
spec:
  tls:
  - hosts:
    - portal
    secretName: portal-tls
  rules:
  - host: portal
  - http:
      paths:
      - path: /
        backend:
          serviceName: customer
          servicePort: 80
      - path: /cust(/|$)(.*)
        backend:
          serviceName: customer
          servicePort: 80
</code></pre> <p>The <code>/</code> path is not going to the backend, whereas <code>/cust/</code> is. I also tried every regex pattern to make the default <code>/</code> go to the customer service; nothing works. I'm sure I'm missing something. Pls help....</p>
Naren Karanam
<p>You put the two bits under <code>rules:</code> in two different list items. Remove the second <code>-</code> (the one in front of <code>http:</code>) so the <code>http:</code> block belongs to the <code>host: portal</code> rule.</p>
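<p>Merged, the rule would look roughly like this:</p> <pre><code>  rules:
  - host: portal
    http:
      paths:
      - path: /
        backend:
          serviceName: customer
          servicePort: 80
      - path: /cust(/|$)(.*)
        backend:
          serviceName: customer
          servicePort: 80
</code></pre>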
coderanger
<p>I would like to understand one thing about OpenID Connect and GKE to better manage IAM and RBAC. I cannot find any info on this:</p> <ol> <li>I'm Project Owner in my GCP project</li> <li>After applying <code>gcloud container clusters get-credentials $CLUSTER --zone $ZONE</code> my <em>.kube.conf</em> is populated.</li> <li>I can get my <em>id_token</em> with <code>gcloud config config-helper --format=json</code></li> <li>Using, for example, jwt.io I decode the id_token, where my payload part is:</li> </ol> <pre><code>{
  &quot;iss&quot;: &quot;https://accounts.google.com&quot;,
  &quot;azp&quot;: &quot;3[redacted]9.apps.googleusercontent.com&quot;,
  &quot;aud&quot;: &quot;3[redacted]9.apps.googleusercontent.com&quot;,
  &quot;sub&quot;: &quot;1[redacted]9&quot;,
  &quot;hd&quot;: &quot;[redacted].sh&quot;,
  &quot;email&quot;: &quot;maciej.leks@[redacted].sh&quot;,
  &quot;email_verified&quot;: true,
  &quot;at_hash&quot;: &quot;e[redacted]g&quot;,
  &quot;iat&quot;: 1[redacted]6,
  &quot;exp&quot;: 1[redacted]6
}
</code></pre> <ol start="5"> <li>According to <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens" rel="nofollow noreferrer">OpenID Connect Tokens</a> the only thing passed to the k8s API Server when I use kubectl is my <em>id_token</em></li> </ol> <p>So, the question is: what really happens so that my JWT <code>sub</code> or <code>email</code> is mapped into the right Group inside the cluster, while there is no [Cluster]RoleBinding with me (e.g. my email) as a subject? The GKE <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control#default_discovery_roles" rel="nofollow noreferrer">Default discovery roles</a> doc says something about it, but it's so enigmatic that I still do not understand what happens between sending my <em>id_token</em> as a Bearer token and finally being properly authenticated as me and authorized as a Group (and which Group: system:basic-user, system:masters, ...?)</p>
Maciek Leks
<p>I don't think they have ever said publicly. Kubernetes does natively support group claims in OIDC tokens however as you said, there are none in Google tokens so we know it isn't using that. Given the deep integration between GCP IAM and GKE it's generally assumed the Google internal fork has some custom admission control plugins, either tweaking or replacing the core code with GCP-specific bits. It's also possible they use the webhook token authentication mode with a custom receiver. Either way, Google doesn't generally discuss such things so we will probably never know in a confirmable way.</p>
coderanger
<p>I am new to kubernetes. Basically, I am trying to add a windows node to a cluster (which contains linux nodes). My host machine is linux. For now, I am trying to add only 1 windows node, but in the future it should work for multiple windows nodes. <strong>While joining the windows node to the kubernetes cluster using kubeadm</strong>, it throws an error message.</p> <p>As it is trying to execute "kubeadm join.." on the windows node, I am trying to install kubeadm on the windows machine, but with no luck.</p> <p>It throws the error:</p> <pre><code>"fatal: [windows]: FAILED! =&gt; {
    "changed": true,
    "cmd": "kubeadm join &lt;IP&gt;:&lt;port&gt; --token &lt;jdhsjhsjdhsd&gt; --discovery-token-ca-cert-hash sha256:&lt;somekey&gt; --node-name &lt;kubernetes_node_hostname&gt;",
    "delta": "0:00:00.732545",
    "end": "2018-12-27 07:39:26.496097",
    "msg": "non-zero return code",
    "rc": 1,
    "start": "2018-12-27 07:39:25.763552",
    "stderr": "kubeadm : The term 'kubeadm' is not recognized as the name of a cmdlet, function, script file, or operable program. \r\nCheck the spelling of the name, or if a path was included, verify that the path is correct and try again.\r\nAt line:1 char:65\r\n+ ... :InputEncoding = New-Object Text.UTF8Encoding $false;"
</code></pre>
anuja tol
<p>You can download all the various binaries from links in the Changelog for each release. <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md#downloads-for-v1131" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md#downloads-for-v1131</a> is the latest 1.13 as of this writing.</p> <p><a href="https://dl.k8s.io/v1.13.1/kubernetes-node-windows-amd64.tar.gz" rel="nofollow noreferrer">https://dl.k8s.io/v1.13.1/kubernetes-node-windows-amd64.tar.gz</a> is the node binaries bundle in particular, which includes kubeadm as well as the other things needed to run a node.</p>
coderanger
<p>I currently have a single-node Kubernetes instance running on a VM. The disk attached to this VM is 100GB, but 'df -h' shows that the / partition only has 88GB available (other stuff is used for OS overhead etc...)</p> <p>I have a kubernetes manifest that creates a 100GB local Persistent Volume. I also have a pod creating a 100GB Persistent Volume Claim.</p> <p>Both of these deploy and come up normally even though the entire VM does not even have 100GB available.</p> <p>To make things even more complicated, the VM is thin provisioned, and only using 20 GB on the actual disk right now...</p> <p>HOW IS THIS WORKING !?!?</p>
Ian Campbell
<p>The local provisioner does no size checks, nor is the size enforced anyway. The final &quot;volume&quot; is just a bind mount like with hostPath. The main reason <code>local</code> PVs exist is because hostPath isn't something the scheduler understands so in a multi-node scenario it won't restrict topology.</p>
coderanger
<p>Here is my use case: We have a customer, where each of their services has to be available on a dedicated subdomain. The naming convention should be <code>service-name.customerdomain.com</code>, where <code>service-name</code> is the deployed service and <code>customerdomain.com</code> is the customer domain. When a new service is created, it should be available <strong>automatically</strong>, i.e. once the <code>service-name</code> service is deployed into the cluster, it has to be available on <code>service-name.customerdomain.com</code>.</p> <p>I know this can be achieved <strong>manually</strong> by the following steps:</p> <ol> <li><p>Add an Ingress controller to the cluster</p></li> <li><p>Create a wildcard DNS record <code>*.customerdomain.com</code> and point it to the Ingress controller</p></li> <li>Map a subdomain for each running service. For every existing service in the cluster, create a separate section in the Ingress resource file <code>ingress.yaml</code>, e.g.</li> </ol> <blockquote> <pre><code>Spec:
  rules:
  - host: helloworld.awesome-customer.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: helloworld
          servicePort: 8080
  - host: nextfineapp.awesome-customer.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: nextfineapp
          servicePort: 8080
  - [...]
</code></pre> </blockquote> <ol start="4"> <li>Add a new <code>-host</code> section to the Ingress resource file for each newly deployed service</li> <li>Remove the <code>-host</code> section from the Ingress resource file for each removed service</li> </ol> <p>Basically - I would like to automate steps 4 &amp; 5. I am aware Ingress cannot handle this by itself; however, googling around, it appears that updating the <code>ingress.yaml</code> file each time a new service is deployed / an existing one is removed can be achieved via <a href="https://helm.sh/" rel="nofollow noreferrer">Helm</a> and its values files.</p> <p>I would appreciate it if a sample solution could be pointed out / described below.</p>
Angel Todorov
<p>You would generally do this by having a template for the Ingress resource as a part of your base application chart. You can have more than one Ingress object and they will all get muxed at run time to build the routing table for your controller.</p>
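<p>A sketch of what such a template might look like inside the chart, reusing the question's domain and the old <code>extensions/v1beta1</code> style; <code>.Values.serviceName</code> is a hypothetical value you would set per release:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Values.serviceName }}-ingress
spec:
  rules:
  - host: {{ .Values.serviceName }}.awesome-customer.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: {{ .Values.serviceName }}
          servicePort: 8080
</code></pre> <p>Installing or deleting a service's release then adds or removes its host rule automatically, which covers steps 4 and 5.</p>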
coderanger
<p>I have an Ansible script to deploy my microservices to the customer cluster... I have a lot of k8s deployment files, one for each microservice. Is it a good idea to deploy PVCs and PVs to the customer cluster?</p>
Clarencio
<p>The feature does what it says in the docs. Whether that is a good idea depends entirely on your needs and how you use it.</p>
coderanger