Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>We have a GKE cluster with 4 deployments/pods that need to be updated when we deploy new code. I know it's best practice to deploy the image with the latest digest for the images we are deploying but I'm wondering if anyone knows of a better way of updating the yaml file with that digest other than manually updating it. I can get the fully_qualified_digest using:</p>
<p><code>gcloud container images describe gcr.io/xxxx/uwsgi</code></p>
<p>It really sucks to have to manually update yaml files with the latest digest hash each time we deploy. If someone knows a better way I'd love to hear it.</p>
<p>Side note: It's 2019 and Kubernetes should be able to grab the digest hash from /latest without having to explicitly define it.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
labels:
io.kompose.service: uwsgi
name: uwsgi
spec:
replicas: 3
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
minReadySeconds: 5
template:
metadata:
labels:
io.kompose.service: uwsgi
spec:
containers:
- env:
- name: GOOGLE_APPLICATION_CREDENTIALS
value: certs/gcp.json
- name: ENV
value: prod
image: gcr.io/xxxx/uwsgi:latest <------ needs to be fully_qualified_digest
name: uwsgi
ports:
- containerPort: 9040
readinessProbe:
httpGet:
path: /health/
port: 9040
initialDelaySeconds: 5
timeoutSeconds: 1
periodSeconds: 15
livenessProbe:
httpGet:
path: /health/
port: 9040
initialDelaySeconds: 60
timeoutSeconds: 1
periodSeconds: 15
resources:
requests:
memory: "1000Mi"
cpu: "1800m"
limits:
memory: "1200Mi"
cpu: "2000m"
hostname: uwsgi
restartPolicy: Always
terminationGracePeriodSeconds: 60
status: {}
</code></pre>
| Jason Girdner | <p>There are a number of tools that will watch your Docker repository and update things when a new image is available. The most commonly used is probably <a href="https://github.com/weaveworks/flux/" rel="nofollow noreferrer">https://github.com/weaveworks/flux/</a>. Kubernetes itself does not provide this feature as it would potentially be non-convergent.</p>
<p>That said, you <em>can</em> use <code>:latest</code> in a pod spec just fine. The reason to avoid it is Kubernetes won't know to restart your pods when the image changes (also cache issues but you can avoid those with an image pull policy in spec). If you don't actually want automatic deployment of new images, then what you have is fine.</p>
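<p>For illustration, pinning by digest in the container spec looks roughly like this (the digest value below is a placeholder, not a real one):</p>
<pre><code>containers:
- name: uwsgi
  # Pinned to an immutable digest instead of the mutable :latest tag
  image: gcr.io/xxxx/uwsgi@sha256:0000000000000000000000000000000000000000000000000000000000000000
  # If you do keep a mutable tag instead, force a fresh pull on every pod start
  imagePullPolicy: Always
</code></pre>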
| coderanger |
<p>I am trying to port a monolithic app to k8s pods. In theory, pods are considered ephemeral and it is suggested to use the service concept to provide a static IP. But in my tests so far, I have not seen the POD IP being changed. So now the question is: when will k8s assign a new IP to my POD?</p>
<p>I have created PODs (without using any controller) with fixed hostnames and they are bound to a single node. So the node and the hostname will never change and the POD will never be deleted. So in this unique case, when can the POD IP change? I looked at the documentation and this is not clear to me.</p>
| Jay Rajput | <p>The IP won't change as long as the pod is running, but there are no promises that your pod will stay running. The closest there is to a stable network name is with a StatefulSet. That will create a consistent pod name, which means a consistent DNS name in kubedns/coredns. There is no generic way in Kubernetes to get long-term static IP on a pod (or on a service for that matter), though it's technically up to your CNI networking plugin so maybe some of those have special cases?</p>
| coderanger |
<p>When I run <code>sudo minikube start --vm-driver=none</code> it gives me this error. I am using Ubuntu 16.04.</p>
<pre><code>Error starting cluster: cmd failed: sudo env PATH=/var/lib/minikube/binaries/v1.16.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
: running command: sudo env PATH=/var/lib/minikube/binaries/v1.16.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
output: [init] Using Kubernetes version: v1.16.2
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING FileExisting-ebtables]: ebtables not found in system path
[WARNING FileExisting-socat]: socat not found in system path
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.4. Latest validated version: 18.09
[WARNING Hostname]: hostname "minikube" could not be reached
[WARNING Hostname]: hostname "minikube": lookup minikube on 127.0.1.1:53: no such host
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[WARNING Port-10250]: Port 10250 is in use
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-10251]: Port 10251 is in use
[ERROR Port-10252]: Port 10252 is in use
[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
: running command: sudo env PATH=/var/lib/minikube/binaries/v1.16.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
.: exit status 1
</code></pre>
| Adarsha Jha | <p>The <code>none</code> driver makes a lot of assumptions that would normally be handled by the VM setup process used by all other drivers. In this case you can see that some of the ports it expects to use are already in use so it won't continue. You would need to remove whatever is using those ports. The <code>none</code> driver is generally used for very niche situations, almost always in an ephemeral CI environment, though maybe also check out KinD as a newer tool that might address that use case better. If you just want to run a local dev environment on Linux without an intermediary VM, maybe try k3s or microk8s instead.</p>
| coderanger |
<p>I am trying to get a simple progress bar working in my Python container on Kubernetes. However, it doesn't output anything until the job is done and the progress bar reaches 100%. The same exact code works perfectly in a local Docker container. So what is it that Kubernetes has in place that prevents me from seeing the progress bar updating in real time in its logs?</p>
<p>Progress bar code:</p>
<pre><code>import sys
import time
class Color:
BOLD = '\033[1m'
ENDC = '\033[0m'
ERASE_LINE = '\x1b[2K\r'
def percentage(current, total):
percent = 100 * float(current) / float(total)
return percent
class ProgressBar:
def __init__(self, total):
self._total = total
self._current = 0
self.print()
def update(self):
self._current += 1
self.print()
def print(self):
percent = percentage(self._current, self._total)
sys.stdout.write("\r")
sys.stdout.write(Color.BOLD + "[%-50s] %d%%" % ('=' * int(percent / 2), percent) + Color.ENDC)
sys.stdout.flush()
if __name__=="__main__":
print("Ready to run soon...")
time.sleep(10)
print("Running!")
pbar = ProgressBar(255)
for i in range(255):
time.sleep(0.03)
pbar.update()
</code></pre>
| altskop | <p>When logging, rather than displaying things in a TTY to a human, you generally need to log in complete lines ending in <code>\n</code>. Rather than a progress bar, I would usually recommend something like printing out <code>10%...\n20%...\n etc</code>. Up to you how often you print the current state.</p>
<p><strong>Update:</strong></p>
<p>You can update your script to detect if the terminal is a TTY, and change the behaviour accordingly.</p>
<p>Use this:</p>
<pre class="lang-py prettyprint-override"><code>import os, sys
# True when stdout is a real terminal; False when it is captured (e.g. by the Kubernetes log collector)
if os.isatty(sys.stdout.fileno()):
    ...  # draw the in-place progress bar only in this case
</code></pre>
| coderanger |
<p>I am trying to install Traefik as an Ingress Controller for my self-installed Kubernetes cluster. For convenience I try to install the <a href="https://github.com/helm/charts/tree/master/stable/traefik" rel="nofollow noreferrer">helm chart of Traefik</a> and this works well without the acme part; this is my variables yml now:</p>
<pre><code>externalIP: xxx.xxx.xx.xxx
dashboard:
enabled: true
domain: traefik-ui.example.com
ssl:
enabled: true
enforced: true
acme:
enabled: true
challengeType: http-01
email: [email protected]
staging: true
persistence.enabled: true
logging: true
</code></pre>
<p>Installed with:</p>
<pre><code>helm install --name traefik --namespace kube-traefik --values traefik-variables.yml stable/traefik
</code></pre>
<p>But with <code>helm status traefik</code> I can see the <code>v1/PersistentVolumeClaim</code> named <code>traefik-acme</code> stays pending and the certificate is never assigned.</p>
| Joost Döbken | <p>It is highly recommended you use <a href="https://github.com/jetstack/cert-manager" rel="nofollow noreferrer"><code>cert-manager</code></a> instead of the built-in ACME support in Traefik, at least at this time. It is much better at dealing with multiple copies of Traefik, which you probably want. Ingress-shim (which is a default part of cert-manager) will handle Traefik-backed Ingresses just fine.</p>
| coderanger |
<p>I need to disable interactive session/ssh access to a Kubernetes pod.</p>
| Souvik Dey | <p>It’s controlled via the RBAC system, via the pods/exec subresource. You can set up your policies however you want.</p>
| coderanger |
<p>Below is my Nodejs microservices pattern:</p>
<pre class="lang-js prettyprint-override"><code>
// api.ts
import { Router } from 'express';
const router = Router();
router.get(':id', ...doSomething);
router.post(':id', ...doSomething);
export default router;
// index.ts
import * as Express from 'express';
import API from './api.js';
basePath = process.env.basePath; // usually is project name
const app = Express();
// handle external call
app.use(basePath, API) // wish to remove this line
// handle internal call from microservices
app.use(API) // preferred to be like this
...continue
</code></pre>
<p>Below is my kubeDeploy file inherit from colleague</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: $CI_PROJECT_NAME
namespace: $KUBE_NAMESPACE
spec:
replicas: 1
selector:
matchLabels:
app: $CI_PROJECT_NAME
template:
metadata:
labels:
app: $CI_PROJECT_NAME
spec:
imagePullSecrets:
- name: gitlabcred
containers:
- image: registry.gitlab.com/$GROUP_NAME/$CI_PROJECT_NAME:$CI_COMMIT_REF_NAME
imagePullPolicy: Always
name: $CI_PROJECT_NAME
ports:
- containerPort: $PORT
env:
- name: basePath
value: "$URL_PATH"
resources: $KUBE_RESOURCES
livenessProbe: $KUBE_LIVENESS
---
apiVersion: v1
kind: Service
metadata:
name: $CI_PROJECT_NAME
namespace: $KUBE_NAMESPACE
spec:
ports:
- port: $PORT
protocol: TCP
name: http
selector:
app: $CI_PROJECT_NAME
sessionAffinity: ClientIP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: $CI_PROJECT_NAME
namespace: $KUBE_NAMESPACE
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- host: $KUBE_DNS_NAME
http:
paths:
- path: /$URL_PATH
backend:
serviceName: $CI_PROJECT_NAME
servicePort: $PORT
- http:
paths:
- backend:
serviceName: $CI_PROJECT_NAME
servicePort: $PORT
</code></pre>
<p>Above code and settings work fine in both internal and external call like below:</p>
<p><a href="http://publicUrl.com/projectA/someId" rel="nofollow noreferrer">http://publicUrl.com/projectA/someId</a> // external call, microservice receive request.path as "/projectA/someId"</p>
<p><a href="http://publicUrl.com/projectB/someId" rel="nofollow noreferrer">http://publicUrl.com/projectB/someId</a> // external call, microservice receive request.path as "/projectB/someId"</p>
<p>http://projectA/someId // internal call, microservice receive request.path as "/someId"</p>
<p>http://projectB/someId // internal call, microservice receive request.path as "/someId"</p>
<p>I wish to remove "app.use(basePath, API)" from my microservice to make it environment independent.</p>
<p>Is there any way I can change my kubeDeploy so that the path I receive inside my microservice from an external call becomes "/someId"?</p>
<p>Update: Below is latest kubeDeploy updated by devops</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: $CI_PROJECT_NAME
namespace: $KUBE_NAMESPACE
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
rules:
- host: $KUBE_DNS_NAME
http:
paths:
- path: /$URL_PATH(/|$)(.*)
backend:
serviceName: $CI_PROJECT_NAME
servicePort: $PORT
- http:
paths:
- backend:
serviceName: $CI_PROJECT_NAME
servicePort: $PORT
</code></pre>
<p>I tried the above but I don't understand why it becomes a redirect on the browser side.
Example: when I open <a href="http://publicUrl.com/projectA/help" rel="nofollow noreferrer">http://publicUrl.com/projectA/help</a> in my browser, somehow the URL turns into <a href="http://publicUrl.com/help" rel="nofollow noreferrer">http://publicUrl.com/help</a> in the browser address bar, which shows "default backend - 404" because the Ingress is unable to find a matching path.</p>
| Simon | <p>You can use the <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">rewrite annotations</a>, but keep in mind these are a custom extension of the nginx controller and are not portable to all other implementations.</p>
<p>From their example:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
name: rewrite
namespace: default
spec:
rules:
- host: rewrite.bar.com
http:
paths:
- backend:
serviceName: http-svc
servicePort: 80
path: /something(/|$)(.*)
</code></pre>
| coderanger |
<p>I have an application that has 14 different services. Some of the services are dependent on other services. I am trying to find a good way to deploy these in the right sequence without using thread sleeps. </p>
<ol>
<li>Is there a way to tell kubernetes about a service deployment tree, like don't deploy service B or service C until Service A is in a container and the status is running?</li>
<li>Is there a good way to use kubectl to poll service A so I can do a while loop until I know it's up and running, then run the scripts to deploy service B and C?</li>
</ol>
| Hizzy | <p>This is not how Kubernetes works. You can kind of shim it with an initContainer that blocks until dependencies are available (usually via <code>kubectl</code> in a while loop, but if you get fancy you can try using <code>--wait</code>).</p>
<p>But the expectation is that you set up your applications to be "eventually consistent" when it comes to inter-service dependencies. In practical terms, this usually means just crashing if a dependent service isn't available, and it will just be restarted until things succeed.</p>
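<p>A minimal sketch of that shim (the service name and images are assumptions, and the pod's service account needs RBAC permission to read Endpoints):</p>
<pre><code>spec:
  initContainers:
  - name: wait-for-service-a
    image: bitnami/kubectl:latest
    command:
    - sh
    - -c
    # Block until service-a has at least one ready endpoint address
    - >
      until kubectl get endpoints service-a
      -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q .;
      do echo waiting for service-a; sleep 5; done
  containers:
  - name: service-b
    image: example/service-b:latest
</code></pre>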
| coderanger |
<p>How can I duplicate a namespace with all content with a new name in the same kubernetes cluster?</p>
<p>e.g. Duplicate default to my-namespace which will have the same content.</p>
<p>I'm interested just in services and deployments, so when I try the method with <code>kubectl get all</code> and with api-resources I get an error with the services' IPs like:</p>
<pre><code>Error from server (Invalid): Service "my-service" is invalid: spec.clusterIP: Invalid value: "10.108.14.29": provided IP is already allocated
</code></pre>
| Inforedaster | <p>There is no specific way to do this. You could probably get close with something like <code>kubectl get all -n sourcens -o yaml | sed -e 's/namespace: sourcens/namespace: destns/' | kubectl apply -f -</code> but <code>get all</code> is always a bit wonky and this could easily miss weird edge cases.</p>
| coderanger |
<p>I am running a Python app in production but my pod is restarting frequently in the production environment, while in the staging environment it's not happening.</p>
<p>So I thought it could be a CPU & memory limit issue. I have updated that as well.</p>
<p>On further debugging I got a <code>137</code> exit code.</p>
<p>To debug further I went inside the Kubernetes node and checked the container.</p>
<p>Command used: <code>docker inspect < container id ></code></p>
<p>Here is output:</p>
<pre><code> {
"Id": "a0f18cd48fb4bba66ef128581992e919c4ddba5e13d8b6a535a9cff6e1494fa6",
"Created": "2019-11-04T12:47:14.929891668Z",
"Path": "/bin/sh",
"Args": [
"-c",
"python3 run.py"
],
"State": {
"Status": "exited",
"Running": false,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 0,
"ExitCode": 137,
"Error": "",
"StartedAt": "2019-11-04T12:47:21.108670992Z",
"FinishedAt": "2019-11-05T00:01:30.184225387Z"
},
</code></pre>
<p>OOMKilled is false so I think that is not the issue.</p>
<p>Using GKE master version: <code>1.13.10-gke.0</code> </p>
| Harsh Manvar | <p>Technically all the 137 means is your process was terminated as a result of a SIGKILL. Unfortunately this doesn't have enough info to know where it came from. Tools like auditd or Falco on top of that can help gather that data by recording those kinds of system calls, or at least get you closer. </p>
| coderanger |
<p>I am building containers that are designed to build and publish things, so I need to configure the <code>.pypirc</code>, etc. files.</p>
<p>I am trying to do it with a configmap. Creating a configmap with each of the dot files is easy enough, my problem is mapping it into the pod. </p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
generateName: jnlp-
labels:
name: jnlp
label: jnlp
spec:
containers:
- name: jnlp
image: '(redacted)/agent/cbuild-okd311-cmake3-py36:0.0.2.7'
tty: true
securityContext:
runAsUser: 1001
allowPrivilegeEscalation: false
volumeMounts:
- name: dotfiles
mountPath: "/home/jenkins"
volumes:
- name: dotfiles
configMap:
name: jenkins.user.dotfiles
</code></pre>
<p>heres my map (redacted)</p>
<pre><code>apiVersion: v1
data:
.pypirc: |-
[distutils]
index-servers = local
[local]
repository: https://(redacted)/api/pypi/pypi
username: (redacted)
password: (redacted)
.p4config: |-
P4CLIENT=perf_pan
P4USER=(redacted)
P4PASSWD=(redacted)
P4PORT=p4proxy-pa.(redacted):1666
P4HOST=(redacted).com%
kind: ConfigMap
metadata:
name: jenkins.user.dotfiles
namespace: jenkins
</code></pre>
<p>I'm pretty sure that the mount command is blowing away everything else that's in the <code>/home/jenkins</code> folder, but I'm trying to come up with a mount that will create a dot file for each entry in my configmap.</p>
<p>Thanks</p>
| scphantm | <p>Your suspicion is correct. What you can use to fix that is <code>subPath</code> <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath</a></p>
<p>But the downside is you do need a volumeMount entry for each of the dotfiles.</p>
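<p>A sketch of what that looks like for the two dotfiles in the question:</p>
<pre><code>volumeMounts:
# Each entry mounts a single key from the ConfigMap as one file,
# leaving the rest of /home/jenkins untouched
- name: dotfiles
  mountPath: /home/jenkins/.pypirc
  subPath: .pypirc
- name: dotfiles
  mountPath: /home/jenkins/.p4config
  subPath: .p4config
</code></pre>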
| coderanger |
<p>I'm writing Kustomize configs for several apps and using overlays to overwrite a base configuration for staging and production environments. The base config creates a secret from a file called dev.json and names that secret -dev-config. The staging environment also runs in dev mode and uses the same secret. Production creates a secret from a file named production.json and names that secret -prod-config.</p>
<p>When I spin up an app in the production environment, the dev secret and the prod secret are being created. What do I need to add to the prod kustomization.yaml to tell it to ignore the base secretGenerator? It doesn't seem like that much of a security hole to have the dev config on the prod servers, but I'd like to avoid it anyway.</p>
| Kyle Baran | <p>I don’t think you can. You would move the dev config to a dev overlay instead. If you really don’t want to, you can use a jsonpatch to delete the content.</p>
| coderanger |
<p>I'm running a Kubernetes cluster on Google cloud. My cluster has a deployment that exposing the health-check interface (over HTTP). In my deployment <code>yaml</code> file, I configured:</p>
<pre><code>livenessProbe:
# an http probe
httpGet:
path: /hc
port: 80
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 60
periodSeconds: 90
</code></pre>
<p>If my health check endpoint return anything but 200, the pod will be killed and restarted.</p>
<p>Currently, after pod restart, it just counts it on the "restart" counter but not notify anyone. I want to notify the sysadmin that this event has happened. I thought to notify with a webhook.</p>
<p>Is this possible? If not, what is my other notification alternative?</p>
| No1Lives4Ever | <p>The somewhat convoluted standard answer to this is Kubernetes -> kube-state-metrics -> Prometheus -> alertmanager -> webhook. This might sound like a lot for a simple task, but Prometheus and its related tools are used much more broadly for metrics and alerting. If you wanted a more narrow answer, you could check out Brigade perhaps? But probably just use kube-prometheus (which is Prom with a bunch of related components all setup for you).</p>
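<p>To make the last hop concrete, a minimal Alertmanager receiver for a webhook could look like this (the URL is a placeholder):</p>
<pre><code>route:
  receiver: sysadmin-webhook
receivers:
- name: sysadmin-webhook
  webhook_configs:
  # Alertmanager POSTs a JSON payload of firing alerts to this endpoint
  - url: http://example.internal/k8s-alerts
</code></pre>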
| coderanger |
<p>I would like to know, what are the actual memory and CPU capacity in mi and m in the following results:</p>
<pre><code>Capacity:
alpha.kubernetes.io/nvidia-gpu: 0
cpu: 2
memory: 2050168Ki
pods: 20
Allocatable:
alpha.kubernetes.io/nvidia-gpu: 0
cpu: 2
memory: 2050168Ki
pods: 20
</code></pre>
| HamiBU | <p>2 CPUs (2 cores) and 2050168Kb of RAM (more simply, 2GB). Which also happens to be the Minikube defaults.</p>
| coderanger |
<p>I have tried to deploy one of the local container images I created but I always keep getting the error below</p>
<blockquote>
<p>Failed to pull image "webrole1:dev": rpc error: code = Unknown desc =
Error response from daemon: pull access denied for webrole1,
repository does not exist or may require 'docker login': denied:
requested access to</p>
</blockquote>
<p>I have followed the below article to containerize my application and I was able to successfully complete this but when I try to deploy it to k8s pod I don't succeed</p>
<p>My pod.yaml looks like below </p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: learnk8s
spec:
containers:
- name: webrole1dev
image: 'webrole1:dev'
ports:
- containerPort: 8080
</code></pre>
<p>and below are some images from my PowerShell </p>
<p><a href="https://i.stack.imgur.com/BErEC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BErEC.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/TsNAw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TsNAw.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/XmKmj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XmKmj.png" alt="enter image description here"></a></p>
<p>I am new to dockers and k8s so thanks for the help in advance and would appreciate if I get some detailed response.</p>
| Mo Haidar | <p>When you're working locally, you can use an image name like <code>webrole</code>, however that doesn't tell Docker where the image came from (because it didn't come from anywhere, you built it locally). When you start working with multiple hosts, you need to push things to a Docker registry. For local Kubernetes experiments you can also change your config so you build your image in the same Docker environment as Kubernetes is using, though the specifics of that depend on how you set up both Docker and Kubernetes.</p>
| coderanger |
<p>I download a pod's log, it has size bigger then 1G, how to shrike/truncate the pod's log? so I can download it faster?</p>
<p>PS: I need erase the old part of logs so I can free the disk space</p>
| Yu Jiaao | <p><code>kubectl logs --tail=1000</code> to show only the most recent 1000 lines. Adjust as needed.</p>
<blockquote>
<p>--tail=-1: Lines of recent log file to display. Defaults to -1 with no selector, showing all log lines otherwise
10, if a selector is provided.</p>
</blockquote>
| coderanger |
<p>Currently I have a Kubernetes cluster which is used to analyze video feeds and send particular results based on the video. I wish to send an HTTP request from my Kubernetes pod from time to time if the requested video needs to be retrieved over the internet. However all of these requests seem to fail. When I issue a CURL command in the shell of the container I receive an error message saying</p>
<pre><code>could not resolve host
</code></pre>
<p>I have looked into a few answers and many of them involve exposing a port in the container running a server to the Kubernetes node and making it publicly available through an external IP, which I have already implemented.</p>
<p>I am still very much a Novice in this area. So any guidance is appreciated</p>
| Ranika Nisal | <p>Answered in comments, kube-dns was unavailable due to resource constraints.</p>
| coderanger |
<p>Is it possible to modify a Kubernetes pod descriptor postponing the restart to a later event? </p>
<p>Basically I want to defer the changes to scheduled restart event.</p>
| pditommaso | <p>You cannot. That is not a feature of Kubernetes.</p>
| coderanger |
<p>Let's say I have four separate microservice applications which I want to deploy in K8s clusters. These applications interact with each other a lot. So is it recommended to have them in the same pod, or an individual pod for each microservice?</p>
| Sam | <p>Different pods. You only combine containers into one pod when they are very specifically for the same thing. Generally this means sidecars supporting the primary container in the pod for things like metrics, logging, or networking.</p>
| coderanger |
<p>The goal:
Using a <strong>Helm Chart</strong> <strong>pre-install</strong> <strong>hook</strong>, take a file on the filesystem, encode it, and place it as a resource file (referenced by a configMap).</p>
<p>Questions:</p>
<ul>
<li>Can a Helm Chart pre-install hook access files that are not under the root chart?</li>
<li>Can a Helm Chart pre-install hook modify or add a file under the Chart root?</li>
<li>Other than implicitly writing the bash script inside the chart resource yaml, can the pre-install hook execute a bash script if it is placed in the chart?</li>
</ul>
| user1015767 | <p>No, Hooks run as Jobs inside the Kubernetes cluster, so they have no access to your workstation. What you want is the Events system (<a href="https://github.com/helm/community/blob/master/helm-v3/002-events.md" rel="nofollow noreferrer">https://github.com/helm/community/blob/master/helm-v3/002-events.md</a>) which is still a WIP I think.</p>
| coderanger |
<p>I would like to deploy a RESTService in kubernetes behind a gateway and a service discovery. There is a moment where I will have my RestService version 1 and my RestService version 2. </p>
<p>Both will have the exact same URLs, but I might deploy them in pods where I label the version. When I make a call to the RESTService I would like to add something in the HTTP header indicating that I want to use my V2.</p>
<p>Is there any way I can route the traffic properly to the set of pods? (I'm not sure if using label is the right way). I also have to keep in mind that in the future I will have a V3 with new services and my urls will change, it cannot be something configured statically. I will also have serviceA with v1
and serviceB with v3. Both are behind the same service discovery, and both must be routed properly using the header parameter (or similar).</p>
<p>I'm not sure if Envoy is the right component for this, or if there is anything else, and I'm not sure at which point I should place this component.
I'm missing something; I'm still quite confused with kubernetes. Does anybody have an example of something similar?</p>
| Elena | <p>Yes, a Service takes a label selector, which you can use if you set labels based on your versions. Most Ingress Controllers or other proxies then use a Service (or rather the Endpoints it manages) to pick the backend instances.</p>
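<p>As a sketch, a Service scoped to only the v2 pods via labels (the label names here are just an example):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: rest-service-v2
spec:
  selector:
    # Only pods carrying both labels receive traffic from this Service
    app: rest-service
    version: v2
  ports:
  - port: 80
    targetPort: 8080
</code></pre>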
| coderanger |
<p>I have a kubernetes cluster with a few different pod types. </p>
<ul>
<li>An Nginx frontend, </li>
<li>A flask backend on gunicorn, </li>
<li>Redis, and </li>
<li>A Redis queue (RQ).</li>
</ul>
<p>Based on what I can tell, the default liveness probing for the frontend, and flask backend are sufficient (200 OK returning, as I have created a '/' backend that returns 200 and all my tasks should run quickly). Crash detection works well. </p>
<p>Additionally, I have set up a liveness monitor that pings Redis with the redis-cli. That is also working well.</p>
<p>However, I am not sure if the default configuration for the RQ is good enough. The pod has restarted itself a few times and is generally well behaved, but since I don't know the mechanism that is used, I'm worried.</p>
<p>My questions are: what is the liveness probe used by something like an RQ worker and what might I do to make sure it's robust?</p>
<p>Should I be using something like Supervisor or systemd? Any recommendations on which one?</p>
| JonathanC | <p>It would appear that RQ sets a heartbeat key in Redis: <a href="https://github.com/rq/rq/blob/e43bce4467c3e1800d75d9cedf75ab6e7e01fe8c/rq/worker.py#L545-L561" rel="nofollow noreferrer">https://github.com/rq/rq/blob/e43bce4467c3e1800d75d9cedf75ab6e7e01fe8c/rq/worker.py#L545-L561</a></p>
<p>You could check if this exists somehow. This would probably require an exec probe though, and at this time I wouldn't recommend that as exec probes have several open bugs that cause zombie processes leading to escalating resource usage over time.</p>
| coderanger |
<p>I deployed a eureka pod in a Kubernetes cluster (v1.15.2). Today the pod turned to a pending state although its actual state is running. Other services could not access eureka, and the eureka icon indicating pod status shows: <code>this pod is in a pending state</code>. This is my stateful deploy yaml:</p>
<pre><code>{
"kind": "StatefulSet",
"apiVersion": "apps/v1beta2",
"metadata": {
"name": "eureka",
"namespace": "dabai-fat",
"selfLink": "/apis/apps/v1beta2/namespaces/dabai-fat/statefulsets/eureka",
"uid": "92eefc3d-4601-4ebc-9414-8437f9934461",
"resourceVersion": "20195760",
"generation": 21,
"creationTimestamp": "2020-02-01T16:55:54Z",
"labels": {
"app": "eureka"
}
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"app": "eureka"
}
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "eureka"
}
},
"spec": {
"containers": [
{
"name": "eureka",
"image": "registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai_fat/soa-eureka:v1.0.0",
"ports": [
{
"name": "server",
"containerPort": 8761,
"protocol": "TCP"
},
{
"name": "management",
"containerPort": 8081,
"protocol": "TCP"
}
],
"env": [
{
"name": "APP_NAME",
"value": "eureka"
},
{
"name": "POD_NAME",
"valueFrom": {
"fieldRef": {
"apiVersion": "v1",
"fieldPath": "metadata.name"
}
}
},
{
"name": "APP_OPTS",
"value": " --spring.application.name=${APP_NAME} --eureka.instance.hostname=${POD_NAME}.${APP_NAME} --registerWithEureka=true --fetchRegistry=true --eureka.instance.preferIpAddress=false --eureka.client.serviceUrl.defaultZone=http://eureka-0.${APP_NAME}:8761/eureka/"
},
{
"name": "APOLLO_META",
"valueFrom": {
"configMapKeyRef": {
"name": "fat-config",
"key": "apollo.meta"
}
}
},
{
"name": "ENV",
"valueFrom": {
"configMapKeyRef": {
"name": "fat-config",
"key": "env"
}
}
}
],
"resources": {
"limits": {
"cpu": "2",
"memory": "1Gi"
},
"requests": {
"cpu": "2",
"memory": "1Gi"
}
},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "IfNotPresent"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 10,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"imagePullSecrets": [
{
"name": "regcred"
}
],
"schedulerName": "default-scheduler"
}
},
"serviceName": "eureka-service",
"podManagementPolicy": "Parallel",
"updateStrategy": {
"type": "RollingUpdate",
"rollingUpdate": {
"partition": 0
}
},
"revisionHistoryLimit": 10
},
"status": {
"observedGeneration": 21,
"replicas": 1,
"readyReplicas": 1,
"currentReplicas": 1,
"updatedReplicas": 1,
"currentRevision": "eureka-5976977b7d",
"updateRevision": "eureka-5976977b7d",
"collisionCount": 0
}
}
</code></pre>
<p>this is the describe output of the pending state pod:</p>
<pre><code>$ kubectl describe pod eureka-0
Name: eureka-0
Namespace: dabai-fat
Priority: 0
Node: uat-k8s-01/172.19.104.233
Start Time: Mon, 23 Mar 2020 18:40:11 +0800
Labels: app=eureka
controller-revision-hash=eureka-5976977b7d
statefulset.kubernetes.io/pod-name=eureka-0
Annotations: <none>
Status: Running
IP: 172.30.248.8
IPs: <none>
Controlled By: StatefulSet/eureka
Containers:
eureka:
Container ID: docker://5e5eea624e1facc9437fef739669ffeaaa5a7ab655a1297c4acb1e4fd00701ea
Image: registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai_fat/soa-eureka:v1.0.0
Image ID: docker-pullable://registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai_fat/soa-eureka@sha256:7cd4878ae8efec32984a2b9eec623484c66ae11b9449f8306017cadefbf626ca
Ports: 8761/TCP, 8081/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Mon, 23 Mar 2020 18:40:18 +0800
Ready: True
Restart Count: 0
Limits:
cpu: 2
memory: 1Gi
Requests:
cpu: 2
memory: 1Gi
Environment:
APP_NAME: eureka
POD_NAME: eureka-0 (v1:metadata.name)
APP_OPTS: --spring.application.name=${APP_NAME} --eureka.instance.hostname=${POD_NAME}.${APP_NAME} --registerWithEureka=true --fetchRegistry=true --eureka.instance.preferIpAddress=false --eureka.client.serviceUrl.defaultZone=http://eureka-0.${APP_NAME}:8761/eureka/
APOLLO_META: <set to the key 'apollo.meta' of config map 'fat-config'> Optional: false
ENV: <set to the key 'env' of config map 'fat-config'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-xnrwt (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady True
PodScheduled True
Volumes:
default-token-xnrwt:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-xnrwt
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 360s
node.kubernetes.io/unreachable:NoExecute for 360s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 16h default-scheduler Successfully assigned dabai-fat/eureka-0 to uat-k8s-01
Normal Pulling 16h kubelet, uat-k8s-01 Pulling image "registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai_fat/soa-eureka:v1.0.0"
Normal Pulled 16h kubelet, uat-k8s-01 Successfully pulled image "registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai_fat/soa-eureka:v1.0.0"
Normal Created 16h kubelet, uat-k8s-01 Created container eureka
Normal Started 16h kubelet, uat-k8s-01 Started container eureka
</code></pre>
<p>How could this happen? What should I do to avoid this situation? After I restarted the eureka pod this problem disappeared, but I still want to know what caused it.</p>
| Dolphin | <p>Sounds like a Kubernetes bug? Try to reproduce it on the current version of Kubernetes. You can also dive into the kubelet logs to see if there is anything useful on those.</p>
| coderanger |
<p>Have a pretty simple <code>test.sql</code>:</p>
<pre><code>SELECT 'CREATE DATABASE test_dev'
WHERE NOT EXISTS (SELECT FROM pg_database WHERE datname = 'test_dev')\gexec
\c test_dev
CREATE TABLE IF NOT EXISTS test_table (
username varchar(255)
);
INSERT INTO test_table(username)
VALUES ('test name');
</code></pre>
<p>Doing the following does what I expected it to do: </p>
<p><strong>Dockerfile.dev</strong></p>
<pre><code>FROM postgres:11-alpine
EXPOSE 5432
COPY ./db/*.sql /docker-entrypoint-initdb.d/
</code></pre>
<pre><code>docker build -t testproj/postgres -f db/Dockerfile.dev .
</code></pre>
<pre><code>docker run -p 5432:5432 testproj/postgres
</code></pre>
<p>This creates the database, switches to it, creates a table, and inserts the values.</p>
<p>Now I'm trying to do the same in Kubernetes with Skaffold, but nothing really seems to happen: no error messages, but nothing changed in postgres</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: init-script
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 500Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgres-storage
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10Gi
</code></pre>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres-deployment
spec:
replicas: 1
selector:
matchLabels:
component: postgres
template:
metadata:
labels:
component: postgres
spec:
volumes:
- name: postgres-storage
persistentVolumeClaim:
claimName: postgres-storage
- name: init-script
persistentVolumeClaim:
claimName: init-script
containers:
- name: postgres
image: postgres
ports:
- containerPort: 5432
volumeMounts:
- name: postgres-storage
mountPath: /var/lib/postgresql/data
subPath: postgres
- name: init-script
mountPath: /docker-entrypoint-initdb.d
---
apiVersion: v1
kind: Service
metadata:
name: postgres-cluster-ip-service
spec:
type: ClusterIP
selector:
component: postgres
ports:
- port: 5432
targetPort: 5432
</code></pre>
<p>What am I doing wrong here?</p>
<p>Basically I tried to follow the answers here, but it isn't panning out. It sounded like I needed to move the <code>.sql</code> to a persistent volume.</p>
<p><a href="https://stackoverflow.com/a/53069399/3123109">https://stackoverflow.com/a/53069399/3123109</a></p>
| cjones | <p>You don’t want to mount a volume over the entry point folder. You are basically masking the script in your image with an empty folder. Also you aren’t using your modified image, so it wouldn’t have your script in the first place.</p>
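<p>A rough sketch of what that means for the container spec, assuming the custom image built from the question's Dockerfile has been pushed somewhere the cluster can pull it from:</p>
<pre><code>containers:
- name: postgres
  # Use the image that already contains the init script, not the stock postgres image
  image: testproj/postgres
  ports:
  - containerPort: 5432
  volumeMounts:
  # Keep the data volume, but do NOT mount anything over /docker-entrypoint-initdb.d
  - name: postgres-storage
    mountPath: /var/lib/postgresql/data
    subPath: postgres
</code></pre>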
| coderanger |
<p>According to "<a href="https://cloud.google.com/docs/authentication/production#finding_credentials_automatically" rel="nofollow noreferrer">Finding credentials automatically</a>" from Google Cloud:</p>
<blockquote>
<p>...ADC (Application Default Credentials) is able to implicitly find the credentials as long as the GOOGLE_APPLICATION_CREDENTIALS environment variable is set, <strong>or as long as the application is running on</strong> Compute Engine, <strong>GKE</strong>, App Engine, or Cloud Functions.</p>
</blockquote>
<p>Do I understand correctly that <code>GOOGLE_APPLICATION_CREDENTIALS</code> does not need to be present, if I want to call Google Cloud APIs in current Google Cloud project?</p>
<p>Let's say I'm in a container in a pod. What can I do from within a container to test that calling Google Cloud APIs just works™?</p>
| oldhomemovie | <p>Check out <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity</a> for how to set up permissions for your pods. You have to do some mapping so Google knows which pods get which perks, but after that it's auto-magic as you mentioned. Otherwise calls will use the node-level google permissions which are generally minimal.</p>
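<p>The mapping is done with an annotation on the Kubernetes service account, roughly like this (the names are placeholders):</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  annotations:
    # Binds this KSA to the Google service account that holds the actual IAM roles
    iam.gke.io/gcp-service-account: my-app@my-project.iam.gserviceaccount.com
</code></pre>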
| coderanger |
<p>My Kubernetes cluster is failing to deploy new applications due to insufficient CPU on the cluster.</p>
<p>After digging around rancher and kubectl I have found that I am using 5% of CPU, but have reserved 96% CPU.</p>
<p>This is due to wrongly configured values in my micro-services' values.yaml.</p>
<p>Is there a way to find out how much the micro-services are using when idle and when under load?</p>
<pre><code>resources:
requests:
memory: {{ .Values.resources.requests.memory | quote}}
cpu: {{ .Values.resources.requests.cpu | quote}}
limits:
memory: {{ .Values.resources.limits.memory | quote}}
cpu: {{ .Values.resources.requests.cpu | quote}}
</code></pre>
<p>I have tried using kubectl to describe the node.
I am monitoring with netdata, but that is real time and it is hard to gauge limits from it.</p>
<p>If anyone has suggestions, that would be great.</p>
| user3292394 | <p>The built-in tool is <code>kubectl top</code>, but this requires you to have metrics-server running, which you probably do if you are using a hosted kube option but might not if running it yourself. Beyond that, Prometheus and tools like node-exporter and cadvisor can get you the data.</p>
| coderanger |
<p>I am trying to set up a complete GitLab routine to set up my Kubernetes cluster with all installations and configurations automatically, incl. decommissioning at the end.</p>
<p>However, the creation and decommissioning process is one of the most time-consuming parts because I am basically waiting for the provisioning before I can execute further commands.</p>
<p>As I sometimes have trouble with the basics of the Kubernetes setup, I currently decommission my cluster and create a new one. But this is pretty uneconomical and time consuming.</p>
<p>Question:
Is there a command or a series of commands to completely reset a Kubernetes cluster to its state right after creation?</p>
| Bliv_Dev | <p>The closest is probably to do all your work in a new namespace and then delete that namespace when you are done. That automatically deletes all objects inside it.</p>
| coderanger |
<p>By mistake I created a service account to give admin permission for dashboard. But now I am unable to delete it.</p>
<p>The reason I want to get rid of that service account is if I follow the steps here <a href="https://github.com/kubernetes/dashboard" rel="noreferrer">https://github.com/kubernetes/dashboard</a>. When I jump to the URL it doesn't ask for config/token anymore.</p>
<pre><code>$ kubectl get serviceaccount --all-namespaces | grep dashboard
NAMESPACE NAME SECRETS AGE
kube-system kubernetes-dashboard 1 44m
$ kubectl delete serviceaccount kubernetes-dashboard
Error from server (NotFound): serviceaccounts "kubernetes-dashboard" not found
</code></pre>
| rgaut | <p>You have to specify the namespace when deleting it:</p>
<pre><code>kubectl delete serviceaccount -n kube-system kubernetes-dashboard
</code></pre>
| coderanger |
<p>I've raised the pod replicas to something like 50 in a cluster, watched it scale out, and then dropped the replicas back to 1. As it turns out I've disabled scale-down for one node. I've noticed that k8s will leave the remaining replica on that node. However, I've seen it remove that node when the annotation to prevent scale-down is not present. So somehow k8s makes decisions based on some kind of knowledge of nodes, or at least that the oldest POD is the one on the given node. Or something else altogether.</p>
<p>After a scale down of k8s pod replicas how does k8s choose which to terminate?</p>
| lucidquiet | <p>Roughly speaking it tries to keep things spread out over the nodes evenly. You can find the code in <a href="https://github.com/kubernetes/kubernetes/blob/edbbb6a89f9583f18051218b1adef1def1b777ae/pkg/controller/replicaset/replica_set.go#L801-L827" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/edbbb6a89f9583f18051218b1adef1def1b777ae/pkg/controller/replicaset/replica_set.go#L801-L827</a> If the counts are the same, it's effectively random though.</p>
| coderanger |
<p>I'm installing Prometheus on GKE with Helm using the standard chart as in</p>
<p><code>helm install -n prom stable/prometheus --namespace hal</code></p>
<p>but I need to be able to pull up the Prometheus UI in the browser. I know that I can do it with port forwarding, as in</p>
<p><code>kubectl port-forward -n hal svc/prom-prometheus-server 8000:80</code></p>
<p>but I'm being told "No, just expose it." Of course, there's already a service so just doing</p>
<p><code>kubectl expose deploy -n hal prom-prometheus-server</code> </p>
<p>isn't going to work. I assume there's some value I can set in values.yaml that will give me an external IP, but I can't figure out what it is. </p>
<p>Or am I misunderstanding when they tell me "Just expose it"?</p>
| NickChase | <p>It is generally a very bad idea to expose Prometheus itself as it has no authentication mechanism, but you can absolutely set up a LoadBalancer service or Ingress aimed at the HTTP port if you want.</p>
<p>More commonly (and supported by the chart) you'll use Grafana for the public view and only connect to Prom itself via port-forward when needed for debugging.</p>
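<p>If you do decide to expose it anyway, the chart lets you switch the server Service type, roughly like this in <code>values.yaml</code> (assuming the stable/prometheus chart layout):</p>
<pre><code>server:
  service:
    # Creates a cloud load balancer with an external IP for the Prometheus UI
    type: LoadBalancer
</code></pre>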
| coderanger |
<p>I have been using saltstack for a few years with bare metal servers. Now we need to set up a whole new environment on AWS. I'd prefer to use saltstack to set everything up because I like the orchestration of salt and the event based stuff, like beacons & reactors. Plus it's easy to write your own customised python module. We will also be running kubernetes clusters on EC2 instances. Can someone provide some best practices for using salt with AWS and k8s?</p>
| laocius | <p>There’s a few reusable setups floating around, last I remember <a href="https://github.com/valentin2105/Kubernetes-Saltstack" rel="nofollow noreferrer">https://github.com/valentin2105/Kubernetes-Saltstack</a> was the most complete of them. But all of them are less solid than tools closer to the community mainstream (kops, kubespray) so beware of weird problems. I would recommend going through Kubernetes The Hard Way just so you have some familiarity with the underlying components that make up Kubernetes so you’ll have a better chance of debugging them :)</p>
| coderanger |
<p>I want to deploy a distributed system with one master and N followers. All the followers run the same image but with different arguments. Since they all terminate after a successful run, <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer"><code>Jobs</code></a> seems to be a good fit.</p>
<p>However, the examples I can find, e.g., <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">this</a> and <a href="https://codeblog.dotsandbrackets.com/one-off-kubernetes-jobs/" rel="nofollow noreferrer">this</a>, are all limited to homogeneous pods (also with the same args):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: primes
spec:
template:
metadata:
name: primes
spec:
containers:
- name: primes
image: ubuntu
command: ["bash"]
args: ["-c", "current=0; max=70; echo 1; echo 2; for((i=3;i<=max;)); do for((j=i-1;j>=2;)); do if [ `expr $i % $j` -ne 0 ] ; then current=1; else current=0; break; fi; j=`expr $j - 1`; done; if [ $current -eq 1 ] ; then echo $i; fi; i=`expr $i + 1`; done"]
restartPolicy: Never
</code></pre>
<p>I'm new to Kubernetes. Is what I need possible with Jobs? Can somebody point me to some examples for heterogeneous pods?</p>
| qweruiop | <p>There is no support for a single job with multiple types of pods. You would want to make either two different jobs or a single job where the scripting inside the container detects on its own if it should be a leader or follower. If the leader/follower config is part of the command-line arguments, the former is probably easier. Make one job, wait for it to start, get the hostname of the pod, then start the followers job with that hostname in the podspec.</p>
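<p>A rough sketch of the followers half of that flow (the image, counts, and the injected leader hostname are assumptions):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: followers
spec:
  # N follower pods, all running the same image with the same args
  completions: 4
  parallelism: 4
  template:
    spec:
      containers:
      - name: follower
        image: example/worker:latest
        args: ["--role=follower", "--leader-host=$(LEADER_HOST)"]
        env:
        # Filled in by the deploy script once the leader pod is up
        - name: LEADER_HOST
          value: "leader-pod-hostname-goes-here"
      restartPolicy: Never
</code></pre>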
| coderanger |
<p>Is there a way to do active and passive load balancing between 2 PODs of a micro-service? Say I have 2 instances (PODs) of a micro-service running, exposed using a K8s service object. Is there a way to configure the load balancing in such a way that one pod will always get the request, and when that pod is down, the other pod will start receiving the requests?</p>
<p>I also have an ingress object on top of that service.</p>
| P.Das | <p>This is what the Kubernetes Service object does, which you already mentioned you are using. Make sure you set up a readiness probe in your pod template so that the system can tell when your app is healthy.</p>
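<p>A minimal readiness probe sketch for the pod template (the path and port are assumptions):</p>
<pre><code>readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  # The pod is removed from the Service's endpoints whenever this check fails
  periodSeconds: 10
  failureThreshold: 3
</code></pre>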
| coderanger |
<p>I have two pods in a cluster. Let's call them A and B. I've installed kubectl inside pod A and I am trying to run a command inside pod B from pod A using <code>kubectl exec -it podB -- bash</code>.
I am getting the following error </p>
<p><code>Error from server (Forbidden): pods "B" is forbidden: User "system:serviceaccount:default:default" cannot create pods/exec in the namespace "default"</code></p>
<p>I've created the following Role and RoleBinding to get access.
Role yaml</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: api-role
namespace: default
labels:
app: tools-rbac
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
</code></pre>
<p>RoleBinding yaml</p>
<pre><code>kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: global-rolebinding
namespace: default
labels:
app: tools-rbac
subjects:
- kind: Group
name: system:serviceaccounts
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>Any help is greatly appreciated. Thank you</p>
| samsri | <p>You would need to give access to the <code>pods/exec</code> subresource in addition to <code>pods</code> like you have there. That said, this is a very weird thing to do and probably think very hard as to if this is the best solution.</p>
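<p>Concretely, that would mean adding a rule along these lines to the Role in the question:</p>
<pre><code>- apiGroups: [""]
  # The exec subresource is addressed as pods/exec
  resources: ["pods/exec"]
  verbs: ["create"]
</code></pre>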
| coderanger |
<p>Here we have a <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/jobs" rel="nofollow noreferrer">sample of the job</a> </p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
# Unique key of the Job instance
name: example-job
spec:
template:
metadata:
name: example-job
spec:
containers:
- name: pi
image: perl
command: ["perl"]
args: ["-Mbignum=bpi", "-wle", "print bpi(2000)"]
# Do not restart containers after they exit
restartPolicy: Never
</code></pre>
<p>I want to run a MySQL script as a command:
<code>mysql -hlocalhost -u1234 -p1234 --database=customer < script.sql</code></p>
<p>But Kubernetes documentation is silent about piping a file to stdin. How can I specify that in Kubernetes job config?</p>
| egor10_4 | <p>Would set your command to something like <code>[bash, -c, "mysql -hlocalhost -u1234 -p1234 --database=customer < script.sql"]</code>, since input redirection like that is actually a feature of your shell.</p>
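<p>In the Job from the question that would look roughly like this; it assumes <code>script.sql</code> is baked into the image (or mounted) at the given path and that the MySQL host/credentials from the question are reachable:</p>
<pre><code>containers:
- name: mysql-script
  image: mysql:5.7
  # The shell performs the stdin redirection; mysql itself never sees the < operator
  command: ["bash", "-c", "mysql -hlocalhost -u1234 -p1234 --database=customer < /script.sql"]
restartPolicy: Never
</code></pre>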
| coderanger |
<p>If I run a minikube instance on ubuntu do I need a VM like virtualbox?</p>
<p>If I run <code>minikube start</code> it complains that I need to install a VM, seems weird to have to do that on linux tho.</p>
| Alexander Mills | <p>While it is possible to run without a VM via <code>--vm-driver=none</code> it really isn't recommended outside of ephemeral CI instances. Minikube expects to be able to take over the system pretty fully to do its thang. I would recommend checking out one of the other tools to manage a local environment like microk8s (available as a Snap too), Kind, or possibly k3s.</p>
| coderanger |
<p>I am relatively new to <code>kubernetes</code> and I am trying to grasp how things work. In my understanding, a <code>pod</code> can have a <code>single</code> container as well as <code>multiple</code> containers. Let's say I have a <code>pod</code> with <code>5</code> tightly coupled containers: is it possible to <code>autoscale</code> only <code>2</code> of them based on <code>usage</code>, or will <code>autoscale</code> only happen <code>pod</code>-wise?</p>
| doc_noob | <p>No, the definition of a pod is co-located containers. You can vertically scale them differently but there will always be exactly one of each per pod.</p>
| coderanger |
<p>I got a VPS server running ubuntu, which is technically simply a VM. Now I thought about hosting a cluster on that VM instead of using AWS. So I would need to run the cluster directly on ubuntu.</p>
<p>One solution would be to simply use minikube which initializes another VM on top. But I'd prefer to run the kubernetes cluster directly on the VM.</p>
<p>As I am pretty new to all these topics I have no idea how to start at all. I'd guess I need a cluster management tool such as kops (which is used for AWS). </p>
<p><strong>Is it possible to run a kubernetes cluster directly on ubuntu?</strong>
Would be glad if you can help me getting started.</p>
| elp | <p>Microk8s from Ubuntu makes this easy, or check out Kind from sig-testing.</p>
| coderanger |
<p>I have an angular(6) application that is running on Nginx and deployed to Kubernetes. Here are my configs:</p>
<p>Here is my docker file:</p>
<pre><code>FROM node:10-alpine as builder
COPY package.json ./
RUN yarn install && mkdir /myproject && mv ./node_modules ./myproject
WORKDIR /myproject
COPY . .
RUN yarn ng build
FROM nginx:1.15-alpine
COPY ./server.conf /etc/nginx/conf.d/default.conf
## Remove default nginx website
RUN rm -rf /usr/share/nginx/html/*
COPY --from=builder /myproject/dist /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]
</code></pre>
<p>And my nginx configs are as following:</p>
<pre><code>server {
listen 80;
server_name mywebiste.com www.mywebiste.com;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
server_name mywebiste.com www.mywebiste.com;
ssl_certificate /etc/letsencrypt/live/mywebiste.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mywebiste.com/privkey.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
add_header Strict-Transport-Security max-age=15768000;
root /usr/share/nginx/html/myproject;
index.html;
server_name localhost;
location / {
try_files $uri $uri/ =404;
}
}
</code></pre>
<p>In this approach I sort of have to generate the certificates on my local machine and then copy them to the kubernetes cluster.</p>
<p>I am not sure if there is a better way to handle the SSL certificates here.
I did some research; there is something called the nginx ingress controller, but I'm not sure how to set it up, as that creates an nginx server too.</p>
| Software Ninja | <p>The most Kubernetes-native way to handle this is <a href="https://github.com/jetstack/cert-manager" rel="nofollow noreferrer">using <code>cert-manager</code></a>, which can handle creating the LetsEncrypt certs for you. As you noted, some Ingress controllers also have their own integrations with LetsEncrypt which you can use. If using cert-manager, you would create a <code>Certificate</code> object with the required hostnames, which will issue the cert and put it in a <code>Secret</code> for you, which you can then mount into the pod as a volume. Handling this at the Ingress layer is often easier if you're going to be doing a lot of them though, since then you can set up all your backend services without worrying about TLS as much.</p>
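<p>A sketch of such a <code>Certificate</code> (the issuer name is a placeholder and the API version depends on your cert-manager release):</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mywebiste-tls
spec:
  # cert-manager stores the issued certificate and key in this Secret
  secretName: mywebiste-tls
  dnsNames:
  - mywebiste.com
  - www.mywebiste.com
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
</code></pre>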
| coderanger |
<p>I'm new to Kubernetes and I'm trying to understand the commands.
Basically what I'm trying to do is create a Tomcat deployment, add an NFS, and after that copy the war file to the tomcat webapps.<br/>
But it's failing</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: webapp11-deployment
spec:
replicas: 2
template:
metadata:
labels:
app: webapp11
spec:
volumes:
- name: www-persistent-storage
persistentVolumeClaim:
claimName: claim-webapp11
containers:
- name: webapp11-pod
image: tomcat:8.0
volumeMounts:
- name: www-persistent-storage
mountPath: /apps/build
command: ["sh","-c","cp /apps/build/v1/sample.war /usr/local/tomcat/webapps"]
ports:
- containerPort: 8080
</code></pre>
<p>As far as I understand, when an image has a command, like catalina.sh run on the Tomcat image, it will conflict with a command from Kubernetes.<br/>
Is that correct?<br/>
Is there any way to run a command after the pod starts?<br/><br/>
thanks</p>
| radicaled | <p>No, what you want is probably something like this:</p>
<pre><code>command: ["sh","-c","cp /apps/build/v1/sample.war /usr/local/tomcat/webapps && exec /whatever/catalina.sh"]
</code></pre>
<p>Or you could move the <code>cp</code> into an <code>initContainer</code> so you don't have to override the default command for your Tomcat container.</p>
| coderanger |
<p>Considering <code>Kubernetes 1.13</code>, how do the features of the <code>Job</code> object compare to the features available around the <code>units</code> of <code>systemd</code>? And vice versa?</p>
| piroux | <p>They do not meaningfully compare. The closest you could say is that a Job object is vaguely similar to a <code>oneshot</code>-mode service unit in systemd, in that both run a process until completion and that's it. But a service unit can't run multiple copies and systemd's containment features are different from K8s/Docker.</p>
| coderanger |
<p>my Kubernetes setup:</p>
<ul>
<li>v1.16.2 on bare metal</li>
<li>1 master node: used for Jenkins Master + Docker registry</li>
<li>5 slave nodes: used for Jenkins JNLP slaves</li>
</ul>
<p>I use the kubernetes-plugin to run slave docker agents. All slave k8s nodes are labeled "jenkins=slave". When I use a nodeSelector ("jenkins=slave") for the podTemplate, Kubernetes always schedules new pods on the same node regardless of the number of running Jenkins jobs.</p>
<p>Please advise how I can configure Kubernetes or the kubernetes-plugin to schedule each build round-robin (across all labeled nodes in the cluster).</p>
<p>Thank you.</p>
| Yaroslav Berezhinskiy | <p>This is generally handled by the inter-pod anti affinity configuration <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity</a>. You would set this in the pod template for your builder deployment. That said, it's more common to use the Kubernetes plugin for Jenkins which runs each build as a temporary pod, rather than having long-lived JNLP builders.</p>
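<p>If you do keep long-lived agents, a preferred pod anti-affinity in the pod template is the usual way to spread them out; a sketch, where the <code>app: jenkins-agent</code> label is an assumption about how your agent pods are labelled:</p>
<pre><code>affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          topologyKey: kubernetes.io/hostname
          labelSelector:
            matchLabels:
              app: jenkins-agent   # whatever label your agent pods carry
</code></pre>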
| coderanger |
<p>I have a query, What happens to traefik if Kubernetes goes down? Will it keep working or will it crash/stop serving traffic?</p>
<p>Currently, I am using EKS for Kubernetes, with ALB ingress controller. My understanding is that, if Kubernetes master goes down, there won't be any upscaling-downscaling of PODs, no new deployments. But the existing applications deployed will at least keep serving traffic, as ALB is in place, nodes are there with PODs present and application is running. </p>
<p>But in case of traefik, traefik uses Kubernetes ingress resource to check for routing rules, and if Kubernetes itself is down, I don't think, traefik will get any response/data. In which case, it will either crash or clear its own config or it keeps serving traffic based on the last config it has stored. I am not sure about what happens to traefik if Kubernetes goes down.</p>
<p>Correct me if I am missing something, or I am wrong somewhere.</p>
| kadamb | <p>As long as Traefik keeps running, it will be fine. But if it crashed and was restarted, it would be unable to load the endpoints API as you noted. Fortunately that is exceedingly rare.</p>
| coderanger |
<p>My kubernetes cluster has 3 pods for postgres. I have configured persistent volume outside of the cluster on a separate virtual machine. Now as per kubernetes design, multiple pods will be responding to read/write requests of clients. Is their any dead lock or multiple writes issues that can occur between multiple postgres pods?</p>
| Meta-Coder | <p>You would need a leader election system between them. There can be only one active primary in a Postgres cluster at a time (give or take very very niche cases). I would recommend <a href="https://github.com/zalando-incubator/postgres-operator" rel="nofollow noreferrer">https://github.com/zalando-incubator/postgres-operator</a> instead.</p>
| coderanger |
<p>I have a cronjob that runs and does things regularly. I want to send a slack message with the technosophos/slack-notify container when that cronjob fails.</p>
<p>Is it possible to have a container run when a pod fails?</p>
| Red 5 | <p>There is nothing built in for this that I am aware of. You could use a webhook to get notified when a pod changes and look for the state information in there. But you would have to build the plumbing yourself or look for an existing third-party tool.</p>
| coderanger |
<p>Does the bolt protocol which is used by Neo4j works with Traefik? </p>
<p>TCP is not supported yet by Traefik, but according to the Traefik documention, it supports WebSocket(which operates over a TCP connection), and this confuses me! </p>
<p>Is it possible to run Neo4j databases behind Traeffik and access them using something like <code>bolt://myhost/path/that/traefik/routes/to/my/db</code>?</p>
| UrmLmn | <p>This appears to be up to each client library, and from what I can see it looks like only a few browser-based clients actually use the WebSocket mode. So overall, probably no, pending 2.0 coming out.</p>
| coderanger |
<p>I am new to kubernetes. I have kubernetes and kubelet installed on my linux (RHEL7) system. I want to get kubeadm on my system, but due to the organization's policy, I can't install it via yum or apt-get, etc.
Now, I am trying to find the <code>kubeadm rpm</code> file which is compatible with my Red Hat linux system, so I can install it on the system. I found the rpm files <a href="https://www.rpmfind.net/linux/rpm2html/search.php?query=kubernetes-kubeadm" rel="nofollow noreferrer">here</a>, but after running them the following error shows:</p>
<blockquote>
<p>"error: kubernetes-kubeadm-1.10.3-1.fc29.ppc64le.rpm: not an rpm package" for every rpm file.</p>
</blockquote>
<p>How do I solve this? Or are these files compatible with Fedora instead?</p>
| anuja tol | <p>You can find links to the official packages for all OSes included RHEL 7 on the docs page: <a href="https://kubernetes.io/docs/setup/independent/install-kubeadm/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/independent/install-kubeadm/</a></p>
<pre><code>cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
# Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable kubelet && systemctl start kubelet
</code></pre>
| coderanger |
<p>We want to deploy an application that utilizes memory cache using docker and kubernetes with horizontal pod auto-scale, but we have no idea if the containerized application inside the pods would use the same cache since it won't be guaranteed that the pods would be in the same node when scaled by the auto-scaler.</p>
<p>I've tried searching for information regarding cache memory on kubernetes clusters, and all I found is a statement in a <a href="https://medium.com/google-cloud/kubernetes-101-pods-nodes-containers-and-clusters-c1509e409e16" rel="nofollow noreferrer">Medium article</a> that states </p>
<blockquote>
<p>the CPU and RAM resources of all nodes are effectively pooled and managed by the cluster</p>
</blockquote>
<p>and a sentence in a <a href="https://www.mirantis.com/blog/multi-container-pods-and-container-communication-in-kubernetes/" rel="nofollow noreferrer">Mirantis blog</a></p>
<blockquote>
<p>Containers in a Pod share the same IPC namespace, which means they can also communicate with each other using standard inter-process communications such as SystemV semaphores or POSIX shared memory.</p>
</blockquote>
<p>But I can't find anything regarding pods in different nodes having access to the same cache. And these are all on 3rd party sites and not in the official kubernetes site.</p>
<p>I'm expecting the cache to be shared between all pods in all nodes, but I just want confirmation regarding the matter.</p>
| Ken Gopert Gerada | <p>No, separate pods do not generally share anything even if running on the same physical node. There are ways around this if you are very very careful and fancy but the idea is for pods to be independent anyway. Within a single pod it's easier, you can use normal shmem, but this is pretty rare since there isn't much reason to do that usually.</p>
| coderanger |
<p>If I have 1 Django project and multiple Django apps. Every Django app has it's own <code>requirements.txt</code> and settings. Hence every app has its own docker image. My doubt is, can I execute code from one Django app to other Django app while both apps have a different container?</p>
| Vimox Shah | <p>No, in the context of Django an “app” is a code level abstraction so they all run in one process, which means one image. You can, sometimes, break each app into its own project and have them communicate via the network rather than locally, this is commonly called “microservices” and smaller images are indeed one of the benefits.</p>
| coderanger |
<p>I'm trying to add a new node pool into an existing GKE cluster. Failing with the below error.</p>
<pre><code>Node pool version cannot be set to 1.14.6-gke.1 when releaseChannel REGULAR is set.
</code></pre>
<p>Any advice on how i can get around this?</p>
<p><strong>EDIT:</strong> I finally managed to create a new pool, but only after my master was auto-updated. It looks like this is a limitation for auto-updated clusters. The new node being created seems to default to the version of the master, and if the master is on a deprecated version and is pending auto upgrade, all one can do is wait.</p>
| Denounce'IN | <p>That version was removed from GKE yesterday: <a href="https://cloud.google.com/kubernetes-engine/docs/release-notes#version_updates" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/release-notes#version_updates</a></p>
<pre><code>The following versions are no longer available for new clusters or upgrades.
1.13.7-gke.24
1.13.9-gke.3
1.13.9-gke.11
1.13.10-gke.0
1.13.10-gke.7
1.14.6-gke.1
1.14.6-gke.2
1.14.6-gke.13
</code></pre>
| coderanger |
<p>My first thought was using the <a href="https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#capabilities-of-the-downward-api" rel="nofollow noreferrer">downward API</a>, but that doesn't seem to expose the scale of a deployment / statefulset. I was hoping to be able to avoid adding it in as a separate environment variable manually, or having to rely on pods all discovering each other to determine the scale if possible. </p>
<p>Use-case: Deploying many pods for an application that connects to an external service. Said service does some form of consistent hashing (I believe is the right term?) for sending the data to clients, so clients that connect send an id number from 0 - N-1 and a total number of clients N. In this case, the deployment/statefulset scale would be N.</p>
| user7876637 | <p>You would definitely have to use a StatefulSet for this, and I don't think you can pull it from the DownwardAPI because the replica count isn't part of the pod spec (it's part of the statefulset spec). You could get the parent object name and then set up a service account to be able to query the API to get the replica count, but that seems like more work than putting the value in a label or env var.</p>
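<p>A minimal sketch of the "put it in an env var" approach for a StatefulSet; the image and names are placeholders, and <code>TOTAL_CLIENTS</code> simply has to be kept in sync with <code>replicas</code> by whatever tooling renders your manifests:</p>
<pre><code>spec:
  replicas: 5
  template:
    spec:
      containers:
        - name: client
          image: myapp:latest        # placeholder image
          env:
            - name: TOTAL_CLIENTS    # N, keep in sync with .spec.replicas
              value: "5"
            - name: POD_NAME         # e.g. myapp-3, so the client id is 3
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
</code></pre>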
| coderanger |
<p>Please, help me to deal with accessibility of my simple application of k8s, via traefik in AWS.</p>
<p>I tried to expose ports 30000-32767 on the master node in the security group and the app is accessible from the world; it is only port 80 of traefik that doesn't work! When I exposed port 80 in the security group of the master, I got <strong>CONNECTION REFUSED</strong> when trying to access my app in the browser, and when I delete the exposed port I get <strong>CONNECTION TIMEOUT</strong> in the browser. What is the problem? All k8s services are up and there are no errors in traefik.</p>
<p>KOPS:</p>
<pre><code>kops create cluster \
--node-count = 2 \
--networking calico \
--node-size = t2.micro \
--master-size = t2.micro \
--master-count = 1 \
--zones = us-east-1a \
--name = ${KOPS_CLUSTER_NAME}
</code></pre>
<p>K8S app.yml and traefik.yml:</p>
<ol>
<li>app</li>
</ol>
<p><a href="https://pastebin.com/WtEe633x" rel="nofollow noreferrer">https://pastebin.com/WtEe633x</a></p>
<ol start="2">
<li>traefik</li>
</ol>
<p><a href="https://pastebin.com/pnPJVPBP" rel="nofollow noreferrer">https://pastebin.com/pnPJVPBP</a></p>
<p>When I will type myapp.com, want to get an output of echoserver app on 80 port.</p>
| Stefan | <p>You've set things up using a NodePort service:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: traefik-ingress-service
# namespace: kube-system
spec:
selector:
k8s-app: traefik-ingress-lb
ports:
- protocol: TCP
port: 80
name: web
- protocol: TCP
port: 8080
name: admin
type: NodePort
</code></pre>
<p>This doesn't mean that the service proxy will listen on port 80 from the PoV of the outside world. By default NodePort services automatically allocate their port at random. What you probably want to do is to use a LoadBalancer service instead. Check out <a href="https://github.com/Ridecell/kubernetes/blob/9e034f4d0fb38e49f808ae0852af74366f630d48/manifests/traefik.yml#L152-L171" rel="nofollow noreferrer">https://github.com/Ridecell/kubernetes/blob/9e034f4d0fb38e49f808ae0852af74366f630d48/manifests/traefik.yml#L152-L171</a> for an example.</p>
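<p>Roughly, that means changing the service to something like this (a sketch based on your existing selector; the ELB is provisioned for you by the AWS cloud provider integration):</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
spec:
  type: LoadBalancer
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin
</code></pre>
<p>Then point your DNS at the ELB hostname instead of opening NodePort ranges in the master's security group.</p>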
| coderanger |
<p>I have a node.js application that sends out gmail notifications for certain events. We recently switched from being directly hosted on DO to a Kubernetes cluster. After we switched, we started getting invalid login exceptions in the Node.js app and critical security alerts from Google. After researching the issue, we turned the "Less secure app access" setting on. Now, we are getting the error message that says "Please log in via your web browser and then try again."</p>
<p>I'm not sure where to go from here since I can't log in with a web browser from my Kubernetes cluster.</p>
<p>My code looks like this.</p>
<pre><code>const nodemailer = require('nodemailer');
const mailer = nodemailer.createTransport(config.email);
...
req.app.locals.mailer.sendMail({
from: '[email protected]',
to: emails,
subject: subject + " " + serverName,
text: message,
});
</code></pre>
<p>Note that the code was working before the move to kubernetes.</p>
<p>Thanks in advance for your help.</p>
| mikeb | <p>Answered in comments, user did need to log into Google to acknowledge a blocker message.</p>
| coderanger |
<p>I am using Kubernetes HPA to scale up my cluster. I have set the target CPU utilization to 50%. It is scaling up properly, but when load decreases it scales down too fast. I want to set a cooling period. As an example, even if the CPU util is below 50%, it should wait for 60 sec before terminating a node.</p>
<p>I have checked this article, but it is not saying that I can change the default value in HPA, <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/index.html#termination-of-pods" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod/index.html#termination-of-pods</a></p>
<p>Kops version :- 1.9.1</p>
| Dinesh Ahuja | <p>This is configured at the HPA level: <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-cooldown-delay" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-cooldown-delay</a></p>
<blockquote>
<p>--horizontal-pod-autoscaler-downscale-delay: The value for this option is a duration that specifies how long the autoscaler has to wait before another downscale operation can be performed after the current one has completed. The default value is 5 minutes (5m0s).</p>
</blockquote>
| coderanger |
<p>Try to append some new entries to /etc/hosts in pods, but failed:</p>
<pre><code>$ ips=$(cat ips.txt); kubectl exec -u root myspark-master-5d6656bd84-5zf2h echo "$ips" >> /etc/hosts
-sh: /etc/hosts: Permission denied
</code></pre>
<p>How to fix this?</p>
<p>Thanks</p>
<p><strong>UPDATE</strong></p>
<pre><code>$ ips=$(cat ips.txt); kubectl exec myspark-worker-5976b685b4-8bcbl -- sh -c "echo $ips >> /etc/hosts"
sh: 2: 10.233.88.5: not found
sh: 3: 10.233.96.2: not found
sh: 4: 10.233.86.5: not found
10.233.88.4 myspark-master-5d6656bd84-dxhxc
command terminated with exit code 127
</code></pre>
| BAE | <p>I think you mean to write to the file inside the container, but bash is parsing that on your workstation and trying to apply the redirect locally. Use <code>kubectl exec ... -- sh -c "..."</code> instead.</p>
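<p>Based on your update, the remaining problem is quoting: the file contents get word-split before they reach the remote shell. Something along these lines should behave (untested sketch, using your pod name):</p>
<pre><code>ips=$(cat ips.txt)
kubectl exec myspark-worker-5976b685b4-8bcbl -- sh -c "echo \"$ips\" >> /etc/hosts"
</code></pre>
<p>The inner escaped quotes keep the multi-line value as a single argument to <code>echo</code> inside the container.</p>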
| coderanger |
<p>When we scrape the etcd exposing end point (i. e. "/metrics"), we get a flat text. Is there any way we can structure the whole data to work on it instead of working on string comparison on the required metric?</p>
<p>Note: I don't want to use prometheus for monitoring. Instead I want to create my own framework to monitor etcd.</p>
| Sai Sushanth | <p><a href="https://github.com/prometheus/prom2json" rel="nofollow noreferrer">https://github.com/prometheus/prom2json</a> is probably what you are looking for.</p>
| coderanger |
<p>I understand that Kubernetes make great language-agnostic distributed computing clusters, easy to deploy, etc.</p>
<p>However, it seems that each platform has his own set of tools to deploy and manage Kubernetes.</p>
<p>So for example, If I use Amazon Elastic Container Service for Kubernetes (Amazon EKS), Google Kubernetes engine or Oracle Container Engine for Kubernetes, how easy (or hard) is to switch between them ?</p>
| Daniel Benedykt | <p>"It depends". The core APIs of Kubernetes like pods and services work pretty much the same everywhere, or at least if you are getting into provider specific behavior you would know it since the provider name would be in the annotation. But each vendor does have their own extensions. For example, GKE offers integration with GCP IAM permissions as an alternative to Kuberenetes' internal RBAC system. If you use that, then switching is that much harder. The more provider-specific annotations and extensions you use, the more work it will be to switch.</p>
| coderanger |
<p>I noticed my job status is marked as Running when its pod is still pending for being scheduled. Is there a way to get the actual status from the job resource itself without looking at the pod resource?</p>
<p>Job:</p>
<pre><code>$ kubectl describe jobs sample-job
Name: sample-job
...
Start Time: Sat, 28 Sep 2019 13:19:43 -0700
Pods Statuses: 1 Running / 0 Succeeded / 0 Failed
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 2m16s job-controller Created pod: sample-job-ppcpl
</code></pre>
<p>Pod:</p>
<pre><code>$ kubectl describe pods sample-job-ppcpl
Name: sample-job-ppcpl
Status: Pending
Controlled By: Job/sample-job
Conditions:
Type Status
PodScheduled False
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 29s (x7 over 6m25s) default-scheduler 0/1 nodes are available: 1 node(s) didn't match node selector.
</code></pre>
| Dagang | <p>Yes, the Job system understands "done and succeeded", "done and failed", and "still going" as statuses. Running means that it has requested that the job run, not that it's literally executing.</p>
| coderanger |
<p>I am new to kubernetes, have installed KOPS, and now I want to use traefik as an ingress controller but can't get it working. I have a domain "xyz.com" with nameservers on cloudflare. I tried to create a "CNAME" record pointing my domain at the AWS ELB which I get after deploying all the files, but it didn't work, and I don't want to use route53. I can't find proper guidance on this; I have gone through the documentation and got a rough idea, but it is not working. Can anyone explain this step by step? I will be very grateful.</p>
| Himanshu Sharma | <p>Given your comment, that is correct. You would set your DNS for all relevant domains to CNAME to the ELB generated for the Traefik Service.</p>
| coderanger |
<p>I am using docker and trying to enable kubernetes and set CPU and Memory via command line.</p>
<p>I have looked at this answer but unfortunately cannot find this file.</p>
<p><a href="https://stackoverflow.com/questions/54740934/is-there-any-way-to-enable-kubernetes-on-docker-for-mac-via-terminal">Is there any way to enable Kubernetes on Docker for Mac via terminal?</a></p>
| J. Doe | <p>Docker does not have an app-ified version for Linux that I know of, so there is no relation to the Docker for Mac/Windows app. There are many tools to locally install Kubernetes on Linux so they probably didn't see much reason to make something new. Minikube is the traditional one, but you can also check out microk8s, k3s, KinD, and many others.</p>
| coderanger |
<p>Assume deployment like that:</p>
<ol>
<li>Deployment contains two types of pods Config and App</li>
<li>Each App pod to start needs to have access to Config pod</li>
<li>There is always only one Config pod</li>
<li>Already launched App pods can work without access to Config pod service</li>
</ol>
<p>Situation I would like to manage:</p>
<ol>
<li>Node containing some of App pods and Config pod going down for any reason</li>
<li>On another Node first starts Config pod</li>
<li>After Config pod is successfully started App pods are launched</li>
</ol>
<p>Already read about:</p>
<ol>
<li>InitContainers - couldn't find an information if Config pod would be of type Init if in above situation it would rerun - I think not</li>
<li>StatueFullSet - I cannot find a way how this could help me in that situation</li>
</ol>
<p>From my perspective I was thinking about a loop in the App pods before running the target application that would wait for the Config pod to come up, and in case of unavailability after a timeout force them to fail. But I'm not sure if that is best practice; I would prefer to handle this with Kubernetes configuration rather than with such a script.</p>
| kamracik | <p>You would use either code in your app or an initContainer to block until a config pod is available. Combine this with a readinessProbe that checks if the app is up. Doing the block-and-retry loop in your own code is a bit more work but recommended since you can more carefully control the behavior. This means that app pods can launch whenever, but they won't be marked as ready for traffic until they initialize.</p>
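<p>A minimal sketch of the initContainer variant; the service name and health URL are placeholders for whatever your Config pod actually exposes:</p>
<pre><code>initContainers:
  - name: wait-for-config
    image: busybox
    command:
      - sh
      - -c
      - |
        # block until the config service answers, then let the app container start
        until wget -q -O /dev/null http://config-service/healthz; do
          echo "waiting for config service"; sleep 2
        done
</code></pre>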
| coderanger |
<p>I am perfectly aware of the Kubernetes API and that a manifest can be broken down in several K8S API calls.</p>
<p>I am wondering if there is a way to apply a whole manifest at once in a single API call. A REST equivalent to <code>kubectl apply</code>.</p>
| znat | <p>The feature is still in alpha but yes, it's called "server-side apply". Given the in-flux nature of alpha APIs, you should definitely check the KEPs before using it, but it's a new mode for the PATCH method on objects.</p>
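<p>The request shape is roughly the following; this is a sketch against an alpha API so treat the details loosely, and the object, token, and <code>fieldManager</code> name are all made up:</p>
<pre><code>curl -X PATCH \
  -H "Content-Type: application/apply-patch+yaml" \
  -H "Authorization: Bearer $TOKEN" \
  --data-binary @my-configmap.yaml \
  "https://<apiserver>/api/v1/namespaces/default/configmaps/example?fieldManager=my-tool"
</code></pre>
<p>i.e. you send the desired state of the object as the PATCH body and the server merges it, which is roughly what <code>kubectl apply</code> does client-side today.</p>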
| coderanger |
<p>Right now I have a Docker file and a .gitlab-ci.yml , and SHELL runner</p>
<pre><code>FROM node:latest
RUN cd /
RUN mkdir Brain
COPY . /Brain/
WORKDIR /Brain/
RUN npm install
ENV CASSANDRA_HOST_5="10.1.1.58:9042"
ENV IP="0.0.0.0"
ENV PORT=6282
EXPOSE 6282
CMD npm start
</code></pre>
<p>and ci file</p>
<pre><code>before_script:
- export newver="0.1.0.117"
build:
image: node:latest
stage: build
script:
- docker build -t Brain .
- docker tag pro 10.1.1.134:5000/Brain:$newver
- docker push 10.1.1.134:5000/Brain:$newver
deploy:
stage: deploy
script:
- kubectl create -f brain-dep.yml
- kubectl create -f brain-service.yml
</code></pre>
<p>I don't want to create an image for every small change; I only want to keep stable images in the local registry. Now I have multiple versions of the Brain image. Also, how can I have other services besides Brain (elasticsearch and so on)?</p>
<p>Any suggestions?</p>
| hesaum saboury | <p>Kubernetes has to be able to pull the image from somewhere. You can use an alternate repo for non-release builds or use some kind of naming scheme, and then clear out non-release builds more frequently.</p>
| coderanger |
<p>As per prometheus storage.md , the recommendation is not to use nfs storage as persistent volume for prometheus.</p>
<p>But solutions like prometheus operator and openshift shows examples which uses nfs as persistent volumes for prometheus.</p>
<p>So what am I missing here? If nfs is not recommended then why do these tools share examples to use nfs as the storage options for prometheus?</p>
<p>Does anyone know what could be the nfs alternative for NetApp/ Trident for prometheus?</p>
| swetad90 | <p>The example in the prom-operator docs is just a hypothetical to show how to manually control the storage provisioning. NFS is generally an option of last resort in all cases :) Check out <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/</a> for more general information on how to use each of the various PV plugins (or if none of those work, look up CSI stuffs), but for NetApp you would probably use the iSCSI interface.</p>
| coderanger |
<p>I created kubernetes cluster on aws ec2 using kubeadm. Now I need to autoscale the K8's cluster when there are not enough resources on nodes to schedule new pods, How can I achieve autoscaling feature for my cluster?</p>
| satya | <p>Unfortunately there isn't a great answer if you mean you manually ran <code>kubeadm</code> on some EC2 instance. <code>cluster-autoscaler</code> is the thing to use, but it requires you deploy your nodes using Autoscaling Groups. It's possible to use ASGs and <code>kubeadm</code> but I don't know of anything off-the-shelf for it.</p>
| coderanger |
<p>I'm new to Kubernetes. I'm confused about how CustomResourceDefinition changes get applied :-)
Ex: If I have a CustomResourceDefinition "Prometheus", it creates a StatefulSet which creates one pod. After the CRD changes, I need to use the latest CRD to create my pod again. What is the correct way? Should I completely remove the StatefulSet and pod then recreate them, or simply do "kubectl delete pod" so the change will auto apply when the new pod gets created? Thanks much!</p>
| Whispererli | <p>The operator, or more specifically the custom controller at the heart of the operator, takes care of this. It watches for changes in the Kubernetes API and updates things as needed to respond.</p>
| coderanger |
<p>I am trying to understand security implications of running containers with <code>--cap-add=NET_ADMIN</code>. The containers will be running in k8s cluster and they will execute user provided code (which we can't control and can be malicious).</p>
<p>It is my understanding that unless I add <code>--network host</code> flag, the containers will be able to only change their own networking stack. Therefore, they can break their own networking but can't affect the host or other containers in any way. </p>
<p>Is this correct? Are there any considerations when deciding if this is safe to do?</p>
| Jan Matas | <p>At a minimum, they would be able to turn on promiscuous mode on the pod's network adapter which could then potentially see traffic bound for other containers. Overall this seems like a very very very bad idea.</p>
| coderanger |
<p>In defining a service, can I somehow hook up a value, e.g. TargetPort, so that if I change the configmap, it will automatically update my service? I don't think so, but maybe I am unclear on whether I can fully parameterize my application port.
I can do this with a manual script but was wondering what other solutions there are.</p>
| ergonaut | <p>This is not something you do directly in Kubernetes. You would use a higher-level system like Kustomize or Helm to put the correct value in both places. That said, why would you? It's not like you ever need things to coexist so just pick a port and roll with it.</p>
| coderanger |
<p><code>error: unable to recognize "xxxxx-pod.yml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused</code>, </p>
<p>I tried the solutions available online but none of them really worked.</p>
| Venkateshreddy Pala | <p>This means your kubeconfig is not correct. It is using the default server URL which is probably not what you intended.</p>
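<p>A quick way to check what kubectl is actually using (the path here is just the default location):</p>
<pre><code>kubectl config view    # the server: field should be your cluster, not localhost:8080
export KUBECONFIG=$HOME/.kube/config
kubectl config use-context <your-context>
</code></pre>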
| coderanger |
<p>I want to edit a configmap from <code>aws-auth</code> during a vagrant deployment to give my vagrant user access to the EKS cluster. I need to add a snippet into the existing <code>aws-auth</code> configmap. How do i do this programmatically?</p>
<p>If you do a <code>kubectl edit -n kube-system configmap/aws-auth</code> you get</p>
<pre><code>apiVersion: v1
data:
mapRoles: |
- groups:
- system:bootstrappers
- system:nodes
rolearn: arn:aws:iam::123:role/nodegroup-abc123
username: system:node:{{EC2PrivateDNSName}}
kind: ConfigMap
metadata:
creationTimestamp: "2019-05-30T03:00:18Z"
name: aws-auth
namespace: kube-system
resourceVersion: "19055217"
selfLink: /api/v1/namespaces/kube-system/configmaps/aws-auth
uid: 0000-0000-0000
</code></pre>
<p>i need to enter this bit in there somehow.</p>
<pre><code> mapUsers: |
- userarn: arn:aws:iam::123:user/sergeant-poopie-pants
username: sergeant-poopie-pants
groups:
- system:masters
</code></pre>
<p>I've tried to do a <code>cat <<EOF > {file} EOF</code> then patch from file. But that option doesn't exist in <code>patch</code> only in the <code>create</code> context.</p>
<p>I also found this: <a href="https://stackoverflow.com/q/54571185/267490">How to patch a ConfigMap in Kubernetes</a></p>
<p>but it didn't seem to work, or perhaps I didn't really understand the proposed solutions.</p>
| Eli | <p>There are a few ways to automate things. The direct way would be <code>kubectl get configmap -o yaml ... > cm.yml && patch ... < cm.yml > cm2.yml && kubectl apply -f cm2.yml</code> or something like that. You might want to use a script that parses and modifies the YAML data rather than a literal patch to make it less brittle. You could also do something like <code>EDITOR="myeditscript" kubectl edit configmap ...</code> but that's more clever than I would want to do.</p>
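<p>One way to script the append without hand-editing is a strategic merge patch, since top-level keys in <code>data</code> merge; note this replaces the whole <code>mapUsers</code> key, which is fine here because it does not exist yet (the ARN is your own, obviously):</p>
<pre><code>cat > aws-auth-patch.yml <<'EOF'
data:
  mapUsers: |
    - userarn: arn:aws:iam::123:user/sergeant-poopie-pants
      username: sergeant-poopie-pants
      groups:
        - system:masters
EOF
kubectl patch configmap/aws-auth -n kube-system --patch "$(cat aws-auth-patch.yml)"
</code></pre>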
| coderanger |
<p>How to have the input data ready before I deploy a POD on K8S?
As I understand, persistent volume is dynamically created using PVC (persistent volume claim), so in a POD yaml file, we can set the PVC and mount path like <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#claims-as-volumes" rel="nofollow noreferrer">this</a>: </p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: myfrontend
image: nginx
volumeMounts:
- mountPath: "/var/www/html"
name: mypd
volumes:
- name: mypd
persistentVolumeClaim:
claimName: myclaim
</code></pre>
<p>The problem is, how can I upload the data before I deploy a POD? What I want is to have the data ready and persistent somewhere on K8S, and then when I deploy the POD, and expose it as service, the service can immediately access the data. </p>
| Jaylin | <p>Mount it on another pod somewhere that does the pre-load. Alternatively you could do some fancy stuff with an initContainer.</p>
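<p>A sketch of the "another pod does the pre-load" approach: a throwaway Job that claims the same PVC and puts the data on it before you create your real pod. The image and the download URL are placeholders; how you actually source the data (bake it into an image, download it, etc.) is up to you.</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: preload-data
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: preload
          image: busybox
          # fetch the content onto the persistent volume
          command: ["sh", "-c", "wget -q -O /var/www/html/index.html http://example.com/seed/index.html"]
          volumeMounts:
            - name: mypd
              mountPath: /var/www/html
      volumes:
        - name: mypd
          persistentVolumeClaim:
            claimName: myclaim
</code></pre>
<p>With a ReadWriteOnce claim, just make sure the Job has finished before the app pod starts so they don't fight over the attachment.</p>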
| coderanger |
<p>Does the following configuration for configMap create the test.json file of type <code>empty array</code> or <code>string []</code></p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
name: myconfigmap
data:
test.json: |-
[]
</code></pre>
<p>The convertor to JSON suggests string:</p>
<pre><code>{
"kind": "ConfigMap",
"apiVersion": "v1",
"metadata": {
"name": "myconfigmap"
},
"data": {
"test.json": "[]"
}
}
</code></pre>
<p>My goal is to create configMap file with empty array.</p>
| maopuppets | <p>Sure, you can make it whatever string you want, it just has to be a string. The thing you <em>can't</em> do is <code>test.json: []</code> since that's an array. The fact that your string happens to be valid JSON is not something K8s knows or cares about.</p>
| coderanger |
<p>I followed the commands mentioned on this page...</p>
<p><a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/index.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/cloud-on-k8s/current/index.html</a></p>
<p>elastic service is stared successfully. But I do not see external-ip</p>
<pre><code># /usr/local/bin/kubectl --kubeconfig="wzone2.yaml" get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.245.0.1 <none> 443/TCP 10m
quickstart-es ClusterIP 10.245.97.209 <none> 9200/TCP 3m11s
quickstart-es-discovery ClusterIP None <none> 9300/TCP 3m11s
</code></pre>
<p>I tried port forwarding command but that did not help.</p>
<blockquote>
<p>kubectl port-forward service/quickstart-es 9200</p>
</blockquote>
<p>How do I connect to this elastic server?</p>
| shantanuo | <p>ClusterIP services are only available from inside the cluster. To make it visible from the outside you would need to change it to LoadBalancer type, and have an implementation of that available (read: be running on a cloud provider or use MetalLB).</p>
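<p>For example, on a cloud provider you can flip the type in place (service name taken from your output):</p>
<pre><code>kubectl patch service quickstart-es -p '{"spec": {"type": "LoadBalancer"}}'
kubectl get service quickstart-es   # EXTERNAL-IP fills in once the LB is provisioned
</code></pre>
<p>Otherwise keep the <code>kubectl port-forward</code> running in one terminal and talk to <code>https://localhost:9200</code> from another; note that the ECK quickstart typically serves TLS and requires the generated <code>elastic</code> user password from its Secret, so a plain http request can look like it "did not help".</p>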
| coderanger |
<p>I deployed a kubernetes cluster and using 3 replicas for my backend NodeJS.</p>
<p>I am now applying some socket function and want to make sure my redis pub and sub function is working from different pod.</p>
<p>Therefore, I want to display the NodeJS pod name on the client side to test whether it is working.</p>
<p>*and I am using ReactJS as my frontend (client side)</p>
| potato | <p>The server pod would have to put that in a header or otherwise purposefully send it down to the client.</p>
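<p>The usual trick is to inject the pod name with the downward API and have the Node process echo it back; a sketch of the deployment side (image name is a placeholder):</p>
<pre><code>containers:
  - name: backend
    image: my-node-backend:latest   # placeholder
    env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
</code></pre>
<p>Then in your handler return <code>process.env.POD_NAME</code> in a response header or in the JSON body, and read it from the React side.</p>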
| coderanger |
<p>We've multiple k8s clusters which are used by many teams for their microservices. We've restricted the <code>kubectl</code> access to the limited members only. But many times we get a request for allowing <code>readonly kubectl</code> access. </p>
<p>Many of the k8s clusters are running on <code>ec2</code> & provisioned via <code>kops</code>.
Version details:</p>
<pre><code>$ kubectl version --short
Client Version: v1.13.0
Server Version: v1.11.6
---
$ kops version
Version 1.12.2
</code></pre>
<p>I tried to create a <code>test-pod</code> with <code>kubectl</code> installed in it and a readonly <code>clusterrole</code> & <code>clusterrolebinding</code> attached. I can see that <code>kubectl</code> from within the pod has readonly access, but it requires me to <code>kubectl exec</code> into the pod. So I don't know how I can restrict this access.</p>
<p>I have tried <a href="https://medium.com/@rschoening/read-only-access-to-kubernetes-cluster-fcf84670b698" rel="nofollow noreferrer">this</a> but still don't know how to restrict access.</p>
| K.Thanvi | <p>You need to make users in whatever authentication system you are using and then set the role binding to be aimed at those users, not a service account. Service accounts are for services, not humans.</p>
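<p>Once the user exists in your authentication system (for kops on AWS that is typically an IAM user or role mapped in, elsewhere a client cert or OIDC identity), the read-only grant itself is just a binding to the built-in <code>view</code> ClusterRole; the username below is a placeholder and must match whatever name your authn layer presents:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: readonly-users
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view            # built-in read-only role
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: jane          # placeholder username
</code></pre>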
| coderanger |
<p>I am trying to implement pgbouncer on k8s. Using a helm chart I created a deployment and service… now how do I expose the service to the outside world? I'm not very familiar with k8s networking; I tried to create an ingress resource and it created an ELB in AWS… how do I map this ELB to the service and expose it?
The service is created with type ClusterIP… the service is a TCP service, i.e. not an http/https application. (edited)</p>
<p>The helm chart used is - <a href="https://github.com/futuretechindustriesllc/charts/tree/master/charts/pgbouncer" rel="nofollow noreferrer">https://github.com/futuretechindustriesllc/charts/tree/master/charts/pgbouncer</a></p>
| nmakb | <p>Ingresses are only used for HTTP and friends. In this case what you want is probably a LoadBalancer type service. That will make a balancer fabric and then expose it via an ELB.</p>
| coderanger |
<p>I want to add some annotations to the metadata block of a service within an existing helm chart (I have to add an annotation for Prometheus so that the service is auto discovered). The chart (it is the neo4j chart) does not offer me a configuration that I can use to set annotations. I also looked into the yaml files and noticed that there is no variable I can use to insert something in the metadata block. The only solution I can see is that I have to fork the chart, insert the annotation data to the correct place and create my own chart out of it. Is that really the only solution or is there some trick I am missing that allows me to modify the helm chart without creating a new one?</p>
| Matthew Darton | <p>In Helm 2, you are correct. Either you would have to fork the chart or pass it through another tool after rendering like Kustomize. Helm 3 has some planned features to improve this in the future.</p>
| coderanger |
<p>on </p>
<pre><code> kops edit ig nodes
</code></pre>
<p>I am getting </p>
<pre><code>error determining default DNS zone: Found multiple hosted zones matching cluster ".domain"; please specify the ID of the zone to use
</code></pre>
<p>cluster looks like this</p>
<pre><code>$ kops get ig
Using cluster from kubectl context: dev3.onchain.live
NAME ROLE MACHINETYPE MIN MAX ZONES
master-us-east-1b Master m4.large 1 1 us-east-1b
nodes Node m4.large 3 3 us-east-1b
</code></pre>
<p>adding </p>
<pre><code> --state=$KOPS_STATE_STORE
</code></pre>
<p>did not help. </p>
| Nabil Sham | <p>It lives in the ClusterSpec YAML file:</p>
<pre><code>// DNSZone is the DNS zone we should use when configuring DNS
// This is because some clouds let us define a managed zone foo.bar, and then have
// kubernetes.dev.foo.bar, without needing to define dev.foo.bar as a hosted zone.
// DNSZone will probably be a suffix of the MasterPublicName and MasterInternalName
// Note that DNSZone can either by the host name of the zone (containing dots),
// or can be an identifier for the zone.
DNSZone string `json:"dnsZone,omitempty"`
</code></pre>
<p>Though having more than one is usually a configuration issue in Route53. Or at least it's not normal to have multiple matching zones.</p>
| coderanger |
<p><strong>I'm stuck at creating a <code>user</code> for my <code>serivice_account</code> which I can use in my kubeconfig</strong></p>
<p><strong>Background</strong>:
I have a cluser-A, which I have created using the <a href="https://github.com/googleapis/google-cloud-python" rel="nofollow noreferrer">google-cloud-python</a> library. I can see the cluster created in the console. Now I want to deploy some manifests to this cluster so i'm trying to use the <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">kubernetes-python</a> client. To create a <code>Client()</code> object, I need to have a KUBECONFIG so I can:</p>
<pre><code>client = kubernetes.client.load_kube_config(<MY_KUBE_CONFIG>)
</code></pre>
<p>I'm stuck at generating a <code>user</code> for this service_account in my kubeconfig. I don't know what kind of authentication certificate/key I should use for my <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#define-clusters-users-and-contexts" rel="nofollow noreferrer">user</a>.</p>
<p>Searched everywhere but still can't figure out how to use my service_account to access my GKE cluster through the kubernetes-python library.</p>
<p><strong>Additional Info</strong>:
I already have a <code>Credentials()</code> object (<a href="https://google-auth.readthedocs.io/en/latest/reference/google.auth.credentials.html#google.auth.credentials.Credentials" rel="nofollow noreferrer">source</a>) created using the <code>service_accounts.Credentails()</code> class (<a href="https://google-auth.readthedocs.io/en/latest/reference/google.oauth2.service_account.html#google.oauth2.service_account.Credentials" rel="nofollow noreferrer">source</a>). </p>
| Panda | <p>A Kubernetes ServiceAccount is generally used by a service within the cluster itself, and most clients offer a version of <a href="https://github.com/kubernetes/client-go/blob/master/rest/config.go#L433-L439" rel="nofollow noreferrer"><code>rest.InClusterConfig()</code></a>. If you mean a GCP-level service account, it is activated as below:</p>
<pre><code>gcloud auth activate-service-account --key-file gcp-key.json
</code></pre>
<p>and then probably you would set a project and use <code>gcloud container clusters get-credentials</code> as per normal with GKE to get Kubernetes config and credentials.</p>
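<p>i.e. roughly (project, cluster, and zone are placeholders):</p>
<pre><code>gcloud config set project my-project
gcloud container clusters get-credentials my-cluster --zone us-central1-a
</code></pre>
<p>After that the Python client can pick up the generated kubeconfig with <code>kubernetes.config.load_kube_config()</code> rather than you building the user entry by hand.</p>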
| coderanger |
<p>I have a kubernetes cluster that has many services on it. How can I make one pod publish a message, and receive it from another siblings pods (pods of the same service), using kubernetes-java-client. </p>
<p>So far, I haven't found a way to get this done.</p>
<p>Example:
1 Service -> 4 pod (4/4 replicaSet)</p>
<p>When an API is invoked on the service, the load balancer sends the request to 1 Pod, so the other pods need to react because a specific API in another pod has been activated.</p>
<p>So, the pod publishes an event, and the other sibling pods consume the event and react. Or the pod communicates directly with its siblings to tell them to react.</p>
<p>Is this possible, and what is the right way to a similar scenario?</p>
| Ali Adel Abed | <p>Other than using the Kubernetes API to discover the peer pods (usually via the endpoints API), it doesn’t provide anything in particular for the actual comms, that would be up to your code.</p>
| coderanger |
<p>I am writing an exporter and am having trouble with metrics for my collector. From what I can tell, metrics in Prometheus need to be defined beforehand. Is there any way to define them dynamically at runtime instead? I won't know how many metrics I'll have or what metrics I'll need until the code is running. (For example, if k8s finds all volumes connected to the cluster, and I need EACH volume to get its own total_capacity/available_capacity metric).</p>
| Trevor Jordy | <p>You would handle this by using dynamic label values rather than dynamic metric names. That said, you can call <code>prometheus.Register()</code> at runtime just fine, client_golang won't know the difference.</p>
| coderanger |
<p>I have two application, both deployed in same cluster.</p>
<p>Now from the web app there is an ajax request to get data from the api, but it always returns <code>502 Connection refused</code>.</p>
<p>here is my jquery code (web).</p>
<pre><code>$.get("http://10.43.244.118/api/users", function (data) {
console.log(data);
$('#table').bootstrapTable({
data: data
});
});
</code></pre>
<p>Note: when I change the service type to <code>LoadBalancer</code> from <code>ClusterIP</code> then it works fine.</p>
| Programmer | <p>ClusterIP services (usually) only work within the cluster. You can technically make your CNI address space available externally, but that is rare and you probably shouldn't count on it. The correct way to expose something outside the cluster is either a NodePort service or a LoadBalancer (which is a NodePort plus a cloud load balancer).</p>
| coderanger |
<p>Trying to see if there are any recommended or better approaches since <code>docker login my.registry.com</code> creates config.json with user id and password and it's not encrypted. Anyone logged into the node or jumpbox where the images are pushed/pulled from a private registry can easily see the registry credentials. Coming to using those credentials for Kubernetes deployment, I believe the only option is to convert that into <code>regcred</code> and refer to that as <code>imagePullSecrets</code> in YAML files. The secret can be namespace scoped but still has the risk of exposing the data to other users who may have access to that namespace since k8s secrets are simply base64 encoded, not really encrypted.</p>
<p>Are there any recommended tools/plugins to secure and/or encrypt these credentials without involving external API calls?</p>
<p>I have heard about Bitnami sealed secrets but haven't explored that yet, would like to hear from others since this is a very common issue for any team/application that are starting containers journey.</p>
| cnu | <p>There is no direct solution for this. For some specific hosts like AWS and GCP you can use their native IAM system. However Kubernetes has no provisions beyond this (SealedSecrets won't help at all).</p>
| coderanger |
<p>The first and most minimal <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noreferrer">example of a Deployment in the Kubernetes documentation</a> has the <code>app: nginx</code> line that repeats itself three times. I understand it's a tag, but I haven't found anything that explains why this needs to be specified for all of:</p>
<ol>
<li><code>metadata.labels</code>,</li>
<li><code>spec.selector.matchLabels</code>, and</li>
<li><code>spec.template.metadata.labels</code></li>
</ol>
<p>The example deployment file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
</code></pre>
| spro | <p>So 1 and 3 are technically unrelated. 1 is the labels for the deployment object itself and they only matter for your own organizational purposes. 3 are the labels that will be put on the generated pods. As for why Deployments rely on manually specifying a selector against the pod labels, it is to ensure things stay stateless. The deployment controller can restart at any time and things will be safe. It could be improved in the future though, if someone has a solid proposal that takes care of all the edge cases.</p>
| coderanger |
<hr />
<h1>Background</h1>
<p>I have a large Python service that runs on a desktop PC, and I need to have it run as part of a K8S deployment. I expect that I will have to make several small changes to make the service run in a deployment/pod before it will work.</p>
<hr />
<h1>Problem</h1>
<p>So far, if I encounter an issue in the Python code, it takes a while to update the code, and get it deployed for another round of testing. For example, I have to:</p>
<ul>
<li>Modify my Python code.</li>
<li>Rebuild the Docker container (which includes my Python service).</li>
<li><code>scp</code> the Docker container over to the Docker Registry server.</li>
<li><code>docker load</code> the image, update tags, and push it to the Registry back-end DB.</li>
<li>Manually kill off currently-running pods so the deployment restarts all pods with the new Docker image.</li>
</ul>
<p>This involves a lot of lead time each time I need to debug a minor issue. Ideally, I'd prefer being able to just modify the copy of my Python code already running on a pod, but I can't kill it (since the Python service is the default app that is launched, with <code>PID=1</code>), and K8S doesn't support restarting a pod (to my knowledge). Alternately, if I kill/start another pod, it won't have my local changes from the pod I was previously working on (which is by design, of course; but doesn't help with my debug efforts).</p>
<hr />
<h1>Question</h1>
<p>Is there a better/faster way to rapidly deploy (experimental/debug) changes to the container I'm testing, without having to spend several minutes recreating container images, re-deploying/tagging/pushing them, etc? If I could find and mount (read-write) the Docker image, that might help, as I could edit the data within it directly (i.e. new Python changes), and just kill pods so the deployment re-creates them.</p>
<hr />
| Cloud | <p>There are two main options: one is to use a tool that reduces or automates that flow, the other is to develop locally with something like Minikube.</p>
<p>For the first, there are a million and a half tools but Skaffold is probably the most common one.</p>
<p>For the second, you do something like <code>( eval $(minikube docker-env) && docker build -t myimagename . )</code> which will build the image directly in the Minikube docker environment so you skip steps 3 and 4 in your list entirely. You can combine this with a tool which detects the image change and either restarts your pods or updates the deployment (which restarts the pods).</p>
<p>Also FWIW using <code>scp</code> and <code>docker load</code> is not very standard; generally that would be combined into <code>docker push</code>.</p>
| coderanger |
<p>I want to modify a particular config file in a running Kubernetes pod at runtime.
How can I get the pod name at runtime, modify the file in the running pod, and restart it to reflect the changes? I am trying this in python 3.6.</p>
<p>Suppose,
I have two running pods.
In one pod I have config.json file. In that I have </p>
<blockquote>
<p>{
"server_url" : "<a href="http://127.0.0.1:8080" rel="nofollow noreferrer">http://127.0.0.1:8080</a>"
}</p>
</blockquote>
<p>So I want to replace 127.0.0.1 to other kubernetes service's loadbalancer IP in it. </p>
| ImPurshu | <p>Generally you would do this with an initContainer and a templating tool like envsubst or confd or Consul Templates.</p>
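<p>A sketch of that pattern using plain <code>sed</code> so busybox is enough (the same idea as envsubst); the template ConfigMap, token, and service URL are all made up for illustration:</p>
<pre><code>volumes:
  - name: config-template
    configMap:
      name: myapp-config-template   # holds config.json.tpl containing __SERVER_URL__
  - name: rendered-config
    emptyDir: {}
initContainers:
  - name: render-config
    image: busybox
    command:
      - sh
      - -c
      - sed "s|__SERVER_URL__|$SERVER_URL|g" /tpl/config.json.tpl > /config/config.json
    env:
      - name: SERVER_URL
        value: "http://my-other-service:8080"   # or however you look up the LB IP
    volumeMounts:
      - name: config-template
        mountPath: /tpl
      - name: rendered-config
        mountPath: /config
containers:
  - name: app
    image: myapp:latest               # placeholder
    volumeMounts:
      - name: rendered-config
        mountPath: /etc/myapp         # the app reads /etc/myapp/config.json
</code></pre>
<p>This way nothing edits files inside a running pod; changing the value means rolling the pods, which is the normal Kubernetes flow.</p>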
| coderanger |
<p>Is there a way to have the same label key but different values for a pod. For example, can a pod have labels as "app=db" and "app=web". I tried to use kubectl label command but it picks only one label.</p>
| Vikram | <p>Labels are a <code>map[string]string</code> so you are correct, this is not possible.</p>
| coderanger |
<p>I've had my services set with type <code>NodePort</code> however in reality external access it not required - they only need to be able to talk to each other.</p>
<p>Therefore I presume I should change these to the default <code>ClusterIP</code> however the question is - how can I continue to access these services during my local development?</p>
<p>So when i make the change from <code>NodePort</code> to <code>ClusterIP</code> then go to <code>minikube service list</code> it naturally shows <code>no node port</code> however how can I now access - is there some special endpoint address I can get from somewhere?</p>
<p>Thanks.</p>
| userMod2 | <p>You would need to access it like any other out-of-cluster case. Generally this means either <code>kubectl port-forward</code> or <code>kubectl proxy</code>, I favor the former though. In general, ClusterIP services are only accessible from inside the cluster, accessing through forwarders is only used for debugging or infrequent access.</p>
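<p>e.g. something like this, with the service name and ports being placeholders:</p>
<pre><code>kubectl port-forward svc/my-service 8080:80
# then, while that is running, in another terminal:
curl http://localhost:8080/
</code></pre>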
| coderanger |
<p>I am learning rbac to do access control, here is my role definition:</p>
<pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: foobar-role
labels:
# Add these permissions to the "view" default role.
rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "patch"]
</code></pre>
<p>the role allow a subject to access secret, but I wonder is it possible to limit access to a specific path: </p>
<pre><code>apiVersion: v1
data:
keycloak.clientSecret: ...
keycloak.url: ...
user.password: ...
kind: Secret
metadata:
creationTimestamp: "2019-04-21T08:07:21Z"
labels:
app: foobar-ce
heritage: Tiller
release: foobar
name: foobar-secret
namespace: default
resourceVersion: "12348"
selfLink: /api/v1/namespaces/default/secrets/foobar-secret
uid: 7d8775d9-640c-11e9-8327-0242b821d21a
type: Opaque
</code></pre>
<p>For example, is it possible to change the role only is able to</p>
<ul>
<li>view <code>{.data.keycloak\.url}</code> (read-only)</li>
<li>update <code>{.data.keycloak\.clientSecret}</code> (write-only)</li>
</ul>
| qrtt1 | <p>You can limit to a single resource (<code>resourceNames</code> in the policy), but not beyond that. I don't think the API even supports partial access.</p>
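<p>For completeness, scoping the rule down to just that one Secret looks like this (same rule as yours plus <code>resourceNames</code>), but there is no way to express "this key read-only, that key writable" inside the object:</p>
<pre><code>rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["foobar-secret"]
    verbs: ["get", "patch"]
</code></pre>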
| coderanger |
<p>I'm trying to automate the process of simultaneously deploying an app onto multiple machines with kubernetes clusters. I'm new to kubernetes.</p>
<p>Which tool/technology should I use for this?</p>
| Queilyd | <p>Kubernetes doesn't really think in terms of machines (nodes in k8s jargon). When you set up a Deployment object, you specify a number of replicas to create, that's how many copies of the Pod to run. So rather than deploying on multiple machines, you create/update the Deployment once and set it to have multiple replicas which then get run on your cluster.</p>
| coderanger |
<p><a href="https://i.stack.imgur.com/eqwQh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eqwQh.png" alt="enter image description here" /></a></p>
<p>Using Kubernetes, I am deploying nginx in several pods. Each pod mounts its access.log file on a hostPath so that Filebeat can read it and ship it to another output.</p>
<p>If I do log rotation at the same cron time in every pod, while they share a common access.log file, it works.</p>
<p>I tested with a small amount of data in a simple cluster. With large volumes in production, is this a good plan, or will something go wrong with logrotate's design?</p>
| iooi | <p>This will not usually work well since logrotate can't see the other nginx processes to sighup them. If nginx in particular can detect a rotation without a hup or other external poke then maybe, but most software cannot.</p>
<p>In general container logs should go to stdout or stderr and be handled by your container layer, which generally handles rotation itself or will include logrotate at the system level.</p>
| coderanger |
<p>I got the following architecture:</p>
<pre><code> [Service]
/ | \
[Pod1][Pod2][Pod3]
</code></pre>
<p>We assert the following Pod IPs:</p>
<ul>
<li>Pod 1: 192.168.0.1</li>
<li>Pod 2: 192.168.0.2</li>
<li>Pod 3: 192.168.0.3</li>
</ul>
<p>I am executing a loop like this:</p>
<pre><code>for ((i=0;i<10000;i++)); do curl http://someUrlWhichRespondsWithPodIP >> curl.txt; done;
</code></pre>
<p>This writes the pods IP 10000 times. I expected it to be round robin schemed but it was not. File looks similar to this:</p>
<pre><code>192.168.0.1
192.168.0.1
192.168.0.3
192.168.0.2
192.168.0.3
192.168.0.1
</code></pre>
<p>The service config looks like this:</p>
<pre><code>kind: Service
metadata:
name: service
spec:
type: NodePort
selector:
app: service
ports:
- name: web
protocol: TCP
port: 31001
nodePort: 31001
targetPort: 8080
</code></pre>
<p>Anyone has an idea what kind of internal load balancing Kubernetes is using?</p>
| elp | <p>You are probably using the default <code>iptables</code> mode of kube-proxy, which uses iptables NAT in random mode to implement load balancing. Check out the <code>ipvs</code> support (<a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-ipvs" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-ipvs</a>) for a big pile of other modes including round robin.</p>
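<p>If you want to try it, the relevant bits of the kube-proxy configuration look roughly like this; how you apply it depends on how kube-proxy is deployed (often a ConfigMap in kube-system), and your nodes need the ipvs kernel modules available:</p>
<pre><code>apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "rr"   # round robin
</code></pre>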
| coderanger |
<p>In kubernetes, you can listen for events using <code>kubectl get events</code>. This works for other resources, but I would like to know when a Service is created and destroyed.
When I run <code>kubectl describe my-service</code> I get <code>Events: <none></code>. </p>
<p>How can I know when a service was created?</p>
| Nick | <p>Every api object has a creation timestamp in the metadata section. Though that doesn’t tell when it is edited. For that you might want an audit webhook or something like Brigade.</p>
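<p>To pull that out directly (service name is a placeholder):</p>
<pre><code>kubectl get service my-service -o jsonpath='{.metadata.creationTimestamp}'
</code></pre>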
| coderanger |
<p>I'm having issues with kubelet removing docker images because it believes the disk is full:</p>
<pre><code>Dec 29 18:00:14 hostname kubelet: I1229 18:00:14.548513 13836 image_gc_manager.go:300] [imageGCManager]: Disk usage on image filesystem is at 85% which is over the high threshold (85%). Trying to free 2160300032 bytes down to the low threshold (80%).
</code></pre>
<p>However, the partition that docker uses is 1TB and has plenty of space:</p>
<pre><code>$ docker info
...
Docker Root Dir: /scratch/docker
$ df -k /scratch
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 976283900 234476968 741806932 25% /scratch
</code></pre>
<p>It seems kubelet is finding the disk usage for my main partition on <code>/</code>, which also happens to be the partition kubelet itself is installed on:</p>
<pre><code>$ df -k /
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/nvme0n1p2 52403200 44032936 8370264 85% /
</code></pre>
<p>So where does kubelet get information about available disk space? I assumed it was using the docker daemon, but based on what I'm seeing the two apps are looking at different partitions. Is there a configuration I can set, or is does it just default to its own partition when doing the disk space check?</p>
<p>This is using Kubernetes 1.17.4 on RedHat 7 and docker 18.06.</p>
| Mike | <p>There's a lot of weird edge case bugs in there, see <a href="https://github.com/kubernetes/kubernetes/issues/66961" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/66961</a> as a starting point.</p>
| coderanger |
<p>Back-off restarting the failed container; the description is: Container image <code>mongo:3.4.20</code> already present on the machine.</p>
<p>I have removed all containers named mongo on that system, removed all pods, svc, deployments, and rc, but I am getting the same error. I also tried to label another node with a different name and used that label in the <code>yaml</code>, but I got the same error.</p>
<p>I used below <code>yaml</code> for creating Deployment, in this case, I used to map system with name <code>app=mongodb</code>, also attached one 8 GB disk in AWS as <code>persistentVolumeClaim</code>.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mongo-deployment
labels:
app: mongo
spec:
replicas: 1
selector:
matchLabels:
app: mongodb
template:
metadata:
labels:
app: mongodb
spec:
containers:
- image: mongo:3.4.20
name: mongo
ports:
- name: mongo
containerPort: 27017
hostPort: 27017
volumeMounts:
- mountPath: "/data/db"
name: db-storage
volumes:
- name: db-storage
persistentVolumeClaim:
claimName: db-storage
</code></pre>
<p>Why does it keep failing and saying the container image is already present on the machine? Is some cache involved?</p>
| shufilkhan | <p>As addressed in the comments, "already present on the machine" is not an error message. It's a pod event, there only for debugging and tracing, to give you an idea of what steps the kubelet is taking during the pod setup process.</p>
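<p>To find the real failure, look at the container's last state and its logs rather than the image events, for example (field values below are hypothetical):</p>
<pre><code># kubectl get pod <mongo-pod-name> -o yaml   (output trimmed)
status:
  containerStatuses:
  - name: mongo
    state:
      waiting:
        reason: CrashLoopBackOff   # the container keeps exiting and being restarted
    lastState:
      terminated:
        exitCode: 1                # hypothetical; pair this with kubectl logs --previous
        reason: Error
</code></pre>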
| coderanger |
<p>Here's my environment: I have a k8s cluster and some physical machines outside the k8s cluster. Now I create a pod in k8s, and this pod will act like a master that creates some processes on these external physical machines. I need to establish RPC connections between the k8s pod and these external processes. I don't want to use a k8s service here. So what other approach can I use to connect to a pod in k8s from the external world?</p>
| zjffdu | <p>You would need to set up your CNI networking in such a way that pod IPs are routable from outside the cluster. How you do this depends on your CNI plugin and your existing network design. You could also use a VPN into the cluster network in some cases.</p>
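<p>As a rough sketch of the first option, and assuming Calico (your plugin may differ), you would BGP-peer the cluster with your physical routers so they learn the pod CIDR, and keep pod traffic un-NATed on the way out. The addresses and AS number below are placeholders:</p>
<pre><code>apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: physical-router
spec:
  peerIP: 10.0.0.1        # a router on the physical network
  asNumber: 64512
---
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 192.168.0.0/16    # must match your cluster's pod CIDR
  natOutgoing: false      # keep the real pod IP on traffic leaving the cluster
</code></pre>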
| coderanger |
<p>I am new to Kubernetes and I am facing a problem that I do not understand. I created a 4-node cluster in aws, 1 manager node (t2.medium) and 3 normal nodes (c4.xlarge) and they were successfully joined together using Kubeadm.</p>
<p>Then I tried to deploy three Cassandra replicas using <a href="https://github.com/kubernetes/examples/blob/master/cassandra/cassandra-statefulset.yaml" rel="nofollow noreferrer">this yaml</a> but the pod state does not leave the pending state; when I do:</p>
<pre><code>kubectl describe pods cassandra-0
</code></pre>
<p>I get the message </p>
<pre><code>0/4 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 3 Insufficient memory.
</code></pre>
<p>And I do not understand why, as the machines should be powerful enough to cope with these pods and I haven't deployed any other pods. I am not sure if this means anything but when I execute:</p>
<pre><code>kubectl describe nodes
</code></pre>
<p>I see this message:</p>
<pre><code>Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
</code></pre>
<p>Therefore my question is why this is happening and how I can fix it.</p>
<p>Thank you for your attention</p>
| João Matos | <p>Each node tracks the total amount of requested RAM (<code>resources.requests.memory</code>) for all pods assigned to it. That cannot exceed the total capacity of the machine. I would triple check that you have no other pods. You should see them on <code>kubectl describe node</code>.</p>
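<p>Concretely, compare what the StatefulSet requests with each node's <code>Allocatable</code> memory shown by <code>kubectl describe nodes</code>; if the requests don't fit, lower them or use bigger nodes. An illustrative <code>resources</code> block (the numbers are examples, not Cassandra sizing advice):</p>
<pre><code># excerpt of the cassandra container in the StatefulSet
resources:
  requests:
    cpu: "500m"
    memory: 1Gi    # must fit in the node's allocatable memory alongside everything else scheduled there
  limits:
    cpu: "1"
    memory: 2Gi
</code></pre>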
| coderanger |
<p>I have an existing configMap with JSON data. The data could be anything that is allowed in JSON format - arrays, objects, strings, integers, etc.
For example:</p>
<pre><code>{
"channels": ["10", "20", "30"],
"settings": "off",
"expiry": 100,
"metadata": {
"name": "test",
"action": "update"
}
}
</code></pre>
<p>Now I want to update the configMap with newer data.
The catch is that I don't want to update any of the values, but just to add or remove any fields that have been added or removed in the new data.
The reason for this is that the values are defaults and might have been already updated in the configMap by other pods/services.
So for example, if the new data contains the below JSON data (expiry field removed and some values changed):</p>
<pre><code>{
"channels": ["10", "20", "30", "100", "10000"],
"settings": "on",
"metadata": {
"name": "test",
"action": "delete"
}
}
</code></pre>
<p>Then I expect the configMap to be updated to look like this:</p>
<pre><code>{
"channels": ["10", "20", "30"],
"settings": "off",
"metadata": {
"name": "test",
"action": "update"
}
}
</code></pre>
<p>So the values stayed as they were, but the 'expiry' field was removed.</p>
<p>I am using ansible to deploy the kubernetes resources, but I am open to other tools/scripts that could help me achieve what I need.</p>
<p>Thanks in advance</p>
| nsteiner | <p>This is not supported by Kubernetes. As you said, the data is JSON-encoded, so to the API it's just a string. ConfigMaps (and Secrets) only understand strings, not nested data of any kind; that's why you have to encode it before storage. You'll need to fetch the data, decode it, make your changes, and then encode it and update/patch it in the API.</p>
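<p>Since you are already deploying with Ansible, a rough sketch of that read-merge-write cycle using the <code>kubernetes.core</code> collection could look like the tasks below. The ConfigMap name, namespace, key and defaults file are assumptions, and the merge is single-level, so nested objects such as <code>metadata</code> are kept wholesale:</p>
<pre><code>- name: Read the current ConfigMap
  kubernetes.core.k8s_info:
    kind: ConfigMap
    name: my-config            # hypothetical name
    namespace: default
  register: cm_lookup

- name: Merge - added/removed keys follow the defaults, existing keys keep their current values
  set_fact:
    merged_config: >-
      {{ new_defaults | combine(
           current | dict2items
                   | selectattr('key', 'in', new_defaults.keys() | list)
                   | items2dict) }}
  vars:
    current: "{{ cm_lookup.resources[0].data['config.json'] | from_json }}"
    new_defaults: "{{ lookup('file', 'defaults.json') | from_json }}"

- name: Write the merged JSON back
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: my-config
        namespace: default
      data:
        config.json: "{{ merged_config | to_nice_json }}"
</code></pre>
<p>Note the read-then-write is not atomic, so if other pods also update the ConfigMap you can still race with them.</p>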
| coderanger |
<p>I am attempting to set up ThingsBoard on a google k8s cluster following the documentation <a href="https://github.com/thingsboard/thingsboard/blob/release-2.3/k8s/README.md" rel="nofollow noreferrer">here</a>.</p>
<p>Everything is set up and running, but I can't seem to figure out which IP I should use to connect to the login page. None of the external IPs I can find appear to be working.</p>
| ThomasVdBerge | <p>Public access is set up using an Ingress here <a href="https://github.com/thingsboard/thingsboard/blob/release-2.3/k8s/thingsboard.yml#L571-L607" rel="nofollow noreferrer">https://github.com/thingsboard/thingsboard/blob/release-2.3/k8s/thingsboard.yml#L571-L607</a></p>
<p>By default I think GKE sets up ingress-gce, which uses Google Cloud Load Balancer rules to implement the ingress system, so you would need to find the IP of your load balancer. That said, the Ingress doesn't specify a hostname-based routing rule, so it might not work well if you have other ingresses in play.</p>
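<p>Once the controller has programmed the load balancer, the address shows up in the Ingress status, for example (the IP below is made up):</p>
<pre><code># kubectl get ingress --all-namespaces -o yaml   (output trimmed)
status:
  loadBalancer:
    ingress:
    - ip: 203.0.113.10    # this is the address to open in the browser
</code></pre>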
| coderanger |
<p>Right now I'm deploying applications on k8s using yaml files.</p>
<p>Like the one below:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: serviceA
namespace: flow
spec:
ports:
- port: 8080
targetPort: 8080
selector:
app: serviceA
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: serviceA-ingress
namespace: flow
annotations:
nginx.ingress.kubernetes.io/use-regex: "true"
kubernetes.io/ingress.class: nginx
certmanager.k8s.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
tls:
- hosts:
- serviceA.xyz.com
secretName: letsencrypt-prod
rules:
- host: serviceA.xyz.com
http:
paths:
- path: /
backend:
serviceName: serviceA
servicePort: 8080
---
apiVersion: v1
kind: ConfigMap
metadata:
name: serviceA-config
namespace: flow
data:
application-dev.properties: |
spring.application.name=serviceA-main
server.port=8080
logging.level.org.springframework.jdbc.core=debug
lead.pg.url=serviceB.flow.svc:8080/lead
task.pg.url=serviceB.flow.svc:8080/task
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: serviceA-deployment
namespace: flow
spec:
selector:
matchLabels:
app: serviceA
  replicas: 1 # tells deployment to run 1 pod matching the template
template:
metadata:
labels:
app: serviceA
spec:
containers:
- name: serviceA
image: xyzaccount.dkr.ecr.eu-west-1.amazonaws.com/flow/test:serviceA-v1
command: [ "java", "-jar", "-agentlib:jdwp=transport=dt_socket,address=9098,server=y,suspend=n", "serviceA-service.jar", "--spring.config.additional-location=/config/application-dev.properties" ]
ports:
- containerPort: 8080
volumeMounts:
- name: serviceA-application-config
mountPath: "/config"
readOnly: true
volumes:
- name: serviceA-application-config
configMap:
name: serviceA-config
items:
- key: application-dev.properties
path: application-dev.properties
restartPolicy: Always
</code></pre>
<p>Is there any automated way to convert this yaml into <code>helm charts</code>?</p>
<p>Or any other workaround or sample template that I can use to achieve this.</p>
<p>Even if there is no generic way, I would like to know how to convert this specific yaml into a helm chart.</p>
<p>I also want to know which things I should keep configurable (I mean convert into variables), since I can't just put these yaml resources into a separate templates folder and call it a helm chart.</p>
| mchawre | <p>At heart a Helm chart is still just YAML, so to make that a chart, just drop that file under <code>templates/</code> and add a <code>Chart.yaml</code>.</p>
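<p>A minimal sketch of that layout (the chart name and which values to template are just examples): keep a <code>Chart.yaml</code> next to <code>values.yaml</code> and a <code>templates/</code> directory, move the manifest into <code>templates/</code>, and parameterize the bits that change per deploy:</p>
<pre><code># Chart.yaml
apiVersion: v1
name: servicea
version: 0.1.0

# values.yaml
image:
  repository: xyzaccount.dkr.ecr.eu-west-1.amazonaws.com/flow/test
  tag: serviceA-v1
replicas: 1
host: serviceA.xyz.com

# templates/deployment.yaml (excerpt)
spec:
  replicas: {{ .Values.replicas }}
  template:
    spec:
      containers:
        - name: serviceA
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
</code></pre>
<p>Then <code>helm install</code> the chart directory and override per environment with <code>--set image.tag=...</code> or an extra values file. There is no fully automatic converter built into Helm itself, so which fields become variables is a judgment call; the image reference, replica count, hostname and the contents of <code>application-dev.properties</code> are the obvious first candidates here.</p>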
| coderanger |