# TransIP

This tutorial describes how to set up ExternalDNS for usage within a Kubernetes cluster using TransIP.

Make sure to use version **>=0.5.14** of ExternalDNS for this tutorial, to have at least one domain registered at TransIP, and to have enabled the TransIP API.

## Enable TransIP API and prepare your API key

To use the TransIP API you need an account at TransIP and to enable API usage as described in the [knowledge base](https://www.transip.eu/knowledgebase/entry/77-want-use-the-transip-api/). With the private key generated by the API, we create a Kubernetes secret:

```console
$ kubectl create secret generic transip-api-key --from-file=transip-api-key=/path/to/private.key
```

## Deploy ExternalDNS

Below are example manifests, for clusters both with and without RBAC enabled. Don't forget to replace `YOUR_TRANSIP_ACCOUNT_NAME` with your TransIP account name.

In these examples, an example domain-filter is defined. Such a filter can be used to prevent ExternalDNS from touching any domain not listed in the filter. Refer to the docs for any other command-line parameters you might want to use.

### Manifest (for clusters without RBAC enabled)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      containers:
      - name: external-dns
        image: registry.k8s.io/external-dns/external-dns:v0.15.0
        args:
        - --source=service # ingress is also possible
        - --domain-filter=example.com # (optional) limit to only example.com domains
        - --provider=transip
        - --transip-account=YOUR_TRANSIP_ACCOUNT_NAME
        - --transip-keyfile=/transip/transip-api-key
        volumeMounts:
        - mountPath: /transip
          name: transip-api-key
          readOnly: true
      volumes:
      - name: transip-api-key
        secret:
          secretName: transip-api-key
```

### Manifest (for clusters with RBAC enabled)

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
rules:
- apiGroups: [""]
  resources: ["services","endpoints","pods"]
  verbs: ["get","watch","list"]
- apiGroups: ["extensions","networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: registry.k8s.io/external-dns/external-dns:v0.15.0
        args:
        - --source=service # ingress is also possible
        - --domain-filter=example.com # (optional) limit to only example.com domains
        - --provider=transip
        - --transip-account=YOUR_TRANSIP_ACCOUNT_NAME
        - --transip-keyfile=/transip/transip-api-key
        volumeMounts:
        - mountPath: /transip
          name: transip-api-key
          readOnly: true
      volumes:
      - name: transip-api-key
        secret:
          secretName: transip-api-key
```

## Deploying an Nginx Service

Create a service file called 'nginx.yaml' with the following contents:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    external-dns.alpha.kubernetes.io/hostname: my-app.example.com
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```

Note the annotation on the service; this is the name ExternalDNS will create and manage DNS records for. ExternalDNS uses this annotation to determine what services should be registered with DNS. Removing the annotation will cause ExternalDNS to remove the corresponding DNS records.

Create the deployment and service:

```console
$ kubectl create -f nginx.yaml
```

Depending on where you run your service, it can take a little while for your cloud provider to create an external IP for the service. Once the service has an external IP assigned, ExternalDNS will notice the new service IP address and synchronize the TransIP DNS records.

## Verifying TransIP DNS records

Check your [TransIP Control Panel](https://transip.eu/cp) to view the records for your TransIP DNS zone. Click on the zone for the one created above if a different domain was used. This should show the external IP address of the service as the A record for your domain.
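If you prefer to verify from the command line instead of the control panel, a simple lookup of the hostname used in the example Service above (a sketch; propagation can take a few minutes) should eventually return the service's external IP:

```console
$ dig +short my-app.example.com A
```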
# kube-ingress-aws-controller

This tutorial describes how to use ExternalDNS with the [kube-ingress-aws-controller][1].

[1]: https://github.com/zalando-incubator/kube-ingress-aws-controller

## Setting up ExternalDNS and kube-ingress-aws-controller

Follow the [AWS tutorial](aws.md) to set up ExternalDNS for use in Kubernetes clusters running in AWS. Specify the `source=ingress` argument so that ExternalDNS will look for hostnames in Ingress objects. In addition, you may wish to limit which Ingress objects are used as an ExternalDNS source via the `ingress-class` argument, but this is not required.

For help setting up the Kubernetes Ingress AWS Controller, which can create ALBs and NLBs, follow the [Setup Guide][2].

[2]: https://github.com/zalando-incubator/kube-ingress-aws-controller/tree/HEAD/deploy

### Optional RouteGroup

[RouteGroup][3] is a CRD that enables you to do complex routing with [Skipper][4].

First, you have to apply the RouteGroup CRD to your cluster:

```
kubectl apply -f https://github.com/zalando/skipper/blob/HEAD/dataclients/kubernetes/deploy/apply/routegroups_crd.yaml
```

You have to grant all three controllers ([Skipper][4], [kube-ingress-aws-controller][1] and external-dns) permission to read the RouteGroup resource, and kube-ingress-aws-controller additionally needs to update the status field of a RouteGroup. This depends on your RBAC policies; if you use RBAC, you can use this ClusterRole for all three controllers:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-ingress-aws-controller
rules:
- apiGroups:
  - extensions
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - patch
  - update
- apiGroups:
  - zalando.org
  resources:
  - routegroups
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - zalando.org
  resources:
  - routegroups/status
  verbs:
  - patch
  - update
```

See also the current RBAC yaml files:

- [kube-ingress-aws-controller](https://github.com/zalando-incubator/kubernetes-on-aws/blob/dev/cluster/manifests/ingress-controller/01-rbac.yaml)
- [skipper](https://github.com/zalando-incubator/kubernetes-on-aws/blob/dev/cluster/manifests/skipper/rbac.yaml)
- [external-dns](https://github.com/zalando-incubator/kubernetes-on-aws/blob/dev/cluster/manifests/external-dns/01-rbac.yaml)

[3]: https://opensource.zalando.com/skipper/kubernetes/routegroups/#routegroups
[4]: https://opensource.zalando.com/skipper

## Deploy an example application

Create the following sample "echoserver" application to demonstrate how ExternalDNS works with Ingress objects that were created by [kube-ingress-aws-controller][1].

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echoserver
  template:
    metadata:
      labels:
        app: echoserver
    spec:
      containers:
      - image: gcr.io/google_containers/echoserver:1.4
        imagePullPolicy: Always
        name: echoserver
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  type: ClusterIP
  selector:
    app: echoserver
```

Note that the Service object is of type `ClusterIP`, because we will target [Skipper][4] and do the HTTP routing in Skipper. We don't need a Service of type `LoadBalancer` here, since we will be using a shared skipper-ingress for all Ingress resources. Skipper uses `hostNetwork` to be able to receive traffic from the AWS load balancers in the EC2 network. ALBs or NLBs will be created as needed and are shared across all Ingress resources by default.
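Before creating any Ingress, you may want to confirm that the sample application defined above is running. A quick check with `kubectl`, using the resource names from the manifests above, could look like this:

```
kubectl get deployment,service echoserver
kubectl get endpoints echoserver
```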
## Ingress examples

Create the following Ingress to expose the echoserver application to the Internet.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echoserver
spec:
  ingressClassName: skipper
  rules:
  - host: echoserver.mycluster.example.org
    http: &echoserver_root
      paths:
      - path: /
        backend:
          service:
            name: echoserver
            port:
              number: 80
        pathType: Prefix
  - host: echoserver.example.org
    http: *echoserver_root
```

The above should result in the creation of an (IPv4) ALB in AWS that forwards traffic to Skipper, which in turn forwards it to the echoserver application.

If the `--source=ingress` argument is specified, then ExternalDNS will create DNS records based on the hosts specified in Ingress objects. The above example would result in two alias records being created, `echoserver.mycluster.example.org` and `echoserver.example.org`, which both alias the ALB that is associated with the Ingress object.

Note that the above example makes use of the YAML anchor feature to avoid having to repeat the http section for multiple hosts that use the exact same paths. If this Ingress object will only be fronting one backend Service, we might instead create the following:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: echoserver.mycluster.example.org, echoserver.example.org
  name: echoserver
spec:
  ingressClassName: skipper
  rules:
  - http:
      paths:
      - path: /
        backend:
          service:
            name: echoserver
            port:
              number: 80
        pathType: Prefix
```

In the above example we create a default path that works for any hostname, and make use of the `external-dns.alpha.kubernetes.io/hostname` annotation to create multiple aliases for the resulting ALB.

## Dualstack ALBs

AWS [supports](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/application-load-balancers.html#ip-address-type) both IPv4 and "dualstack" (both IPv4 and IPv6) interfaces for ALBs. The Kubernetes Ingress AWS controller supports the `alb.ingress.kubernetes.io/ip-address-type` annotation (which defaults to `ipv4`) to determine this. If this annotation is set to `dualstack` then ExternalDNS will create two alias records (one A record and one AAAA record) for each hostname associated with the Ingress object.

Example:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/ip-address-type: dualstack
  name: echoserver
spec:
  ingressClassName: skipper
  rules:
  - host: echoserver.example.org
    http:
      paths:
      - path: /
        backend:
          service:
            name: echoserver
            port:
              number: 80
        pathType: Prefix
```

The above Ingress object will result in the creation of an ALB with a dualstack interface. ExternalDNS will create both an A `echoserver.example.org` record and an AAAA record of the same name, each of which is an alias for the same ALB.

## NLBs

AWS has [NLBs](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html) and [kube-ingress-aws-controller][1] is able to create NLBs instead of ALBs. The Kubernetes Ingress AWS controller supports the `zalando.org/aws-load-balancer-type` annotation (which defaults to `alb`) to determine this. If this annotation is set to `nlb` then the controller will create an NLB instead of an ALB.

Example:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    zalando.org/aws-load-balancer-type: nlb
  name: echoserver
spec:
  ingressClassName: skipper
  rules:
  - host: echoserver.example.org
    http:
      paths:
      - path: /
        backend:
          service:
            name: echoserver
            port:
              number: 80
        pathType: Prefix
```

The above Ingress object will result in the creation of an NLB. A successful create can be observed in the Ingress `status` field, which is written by [kube-ingress-aws-controller][1]:

```yaml
status:
  loadBalancer:
    ingress:
    - hostname: kube-ing-lb-atedkrlml7iu-1681027139.$region.elb.amazonaws.com
```

ExternalDNS will create an A record `echoserver.example.org` that uses an AWS ALIAS record to automatically maintain the IP addresses of the NLB.

## RouteGroup (optional)

[Kube-ingress-aws-controller][1], [Skipper][4] and external-dns support [RouteGroups][3]. External-dns needs to be started with the `--source=skipper-routegroup` parameter in order to work on RouteGroup objects (see the sketch after the example below).

Here we cannot show [all RouteGroup capabilities](https://opensource.zalando.com/skipper/kubernetes/routegroups/), but we show one simple example with an application and a custom https redirect.

```yaml
apiVersion: zalando.org/v1
kind: RouteGroup
metadata:
  name: my-route-group
spec:
  backends:
  - name: my-backend
    type: service
    serviceName: my-service
    servicePort: 80
  - name: redirectShunt
    type: shunt
  defaultBackends:
  - backendName: my-service
  routes:
  - pathSubtree: /
  - pathSubtree: /
    predicates:
    - Header("X-Forwarded-Proto", "http")
    filters:
    - redirectTo(302, "https:")
    backends:
    - redirectShunt
```
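As noted above, ExternalDNS only picks up RouteGroup objects when it runs with the `skipper-routegroup` source. A minimal sketch of the relevant container args (the image tag and the additional `ingress` source are assumptions; adjust to your own deployment):

```yaml
containers:
- name: external-dns
  image: registry.k8s.io/external-dns/external-dns:v0.15.0 # assumed tag
  args:
  - --source=ingress
  - --source=skipper-routegroup # required for RouteGroup objects
  - --provider=aws
```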
# Cloudflare DNS

This tutorial describes how to set up ExternalDNS for usage within a Kubernetes cluster using Cloudflare DNS.

Make sure to use version **>=0.4.2** of ExternalDNS for this tutorial.

## Creating a Cloudflare DNS zone

We highly recommend reading the following tutorial if you haven't used Cloudflare before: [Create a Cloudflare account and add a website](https://support.cloudflare.com/hc/en-us/articles/201720164-Step-2-Create-a-Cloudflare-account-and-add-a-website)

## Creating Cloudflare Credentials

Snippet from [Cloudflare - Getting Started](https://api.cloudflare.com/#getting-started-endpoints):

>Cloudflare's API exposes the entire Cloudflare infrastructure via a standardized programmatic interface. Using Cloudflare's API, you can do just about anything you can do on cloudflare.com via the customer dashboard.

>The Cloudflare API is a RESTful API based on HTTPS requests and JSON responses. If you are registered with Cloudflare, you can obtain your API key from the bottom of the "My Account" page, found here: [Go to My account](https://dash.cloudflare.com/profile).

API Token will be preferred for authentication if the `CF_API_TOKEN` environment variable is set. Otherwise `CF_API_KEY` and `CF_API_EMAIL` should be set to run ExternalDNS with Cloudflare. You may provide the Cloudflare API token through a file by setting `CF_API_TOKEN="file:/path/to/token"`.

Note: `CF_API_KEY` and `CF_API_EMAIL` should not be present if you are using `CF_API_TOKEN`.

When using API Token authentication, the token should be granted Zone `Read`, DNS `Edit` privileges, and access to `All zones`.

If you would like to further restrict the API permissions to a specific zone (or zones), you also need to use the `--zone-id-filter` option so that the underlying API requests only access the zones that you explicitly specify, as opposed to accessing all zones.

## Throttling

The Cloudflare API has a [global rate limit of 1,200 requests per five minutes](https://developers.cloudflare.com/fundamentals/api/reference/limits/). Running several fast-polling ExternalDNS instances in a given account can easily hit that limit. The AWS provider [docs](./aws.md#throttling) have some recommendations that can be followed here too, but in particular, consider passing `--cloudflare-dns-records-per-page` with a high value (maximum is 5,000).

## Deploy ExternalDNS

Connect your `kubectl` client to the cluster you want to test ExternalDNS with.

Begin by creating a Kubernetes secret to securely store your Cloudflare API key. This key will enable ExternalDNS to authenticate with Cloudflare:

```shell
kubectl create secret generic cloudflare-api-key --from-literal=apiKey=YOUR_API_KEY --from-literal=email=YOUR_CLOUDFLARE_EMAIL
```

For an API Token it should look like:

```shell
kubectl create secret generic cloudflare-api-key --from-literal=apiKey=YOUR_API_TOKEN
```

Make sure to replace YOUR_API_KEY with your actual Cloudflare API key and YOUR_CLOUDFLARE_EMAIL with the email associated with your Cloudflare account.

Then apply one of the following manifests to deploy ExternalDNS.

### Using Helm

Create a values.yaml file to configure ExternalDNS to use Cloudflare as the DNS provider. This file should include the necessary environment variables:

```yaml
provider:
  name: cloudflare
env:
  - name: CF_API_KEY
    valueFrom:
      secretKeyRef:
        name: cloudflare-api-key
        key: apiKey
  - name: CF_API_EMAIL
    valueFrom:
      secretKeyRef:
        name: cloudflare-api-key
        key: email
```

Use this in your values.yaml instead, if you are using an API Token:

```yaml
provider:
  name: cloudflare
env:
  - name: CF_API_TOKEN
    valueFrom:
      secretKeyRef:
        name: cloudflare-api-key
        key: apiKey
```

Finally, install the ExternalDNS chart with Helm using the configuration specified in your values.yaml file:

```shell
helm repo add external-dns https://kubernetes-sigs.github.io/external-dns/
```

```shell
helm repo update
```

```shell
helm upgrade --install external-dns external-dns/external-dns --values values.yaml
```

### Manifest (for clusters without RBAC enabled)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      containers:
      - name: external-dns
        image: registry.k8s.io/external-dns/external-dns:v0.15.0
        args:
        - --source=service # ingress is also possible
        - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.
        - --zone-id-filter=023e105f4ecef8ad9ca31a8372d0c353 # (optional) limit to a specific zone.
        - --provider=cloudflare
        - --cloudflare-proxied # (optional) enable the proxy feature of Cloudflare (DDOS protection, CDN...)
        - --cloudflare-dns-records-per-page=5000 # (optional) configure how many DNS records to fetch per request
        - --cloudflare-region-key="eu" # (optional) configure which region can decrypt HTTPS requests
        env:
        - name: CF_API_KEY
          valueFrom:
            secretKeyRef:
              name: cloudflare-api-key
              key: apiKey
        - name: CF_API_EMAIL
          valueFrom:
            secretKeyRef:
              name: cloudflare-api-key
              key: email
```

### Manifest (for clusters with RBAC enabled)

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
rules:
- apiGroups: [""]
  resources: ["services","endpoints","pods"]
  verbs: ["get","watch","list"]
- apiGroups: ["extensions","networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: registry.k8s.io/external-dns/external-dns:v0.15.0
        args:
        - --source=service # ingress is also possible
        - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.
        - --zone-id-filter=023e105f4ecef8ad9ca31a8372d0c353 # (optional) limit to a specific zone.
        - --provider=cloudflare
        - --cloudflare-proxied # (optional) enable the proxy feature of Cloudflare (DDOS protection, CDN...)
        - --cloudflare-dns-records-per-page=5000 # (optional) configure how many DNS records to fetch per request
        - --cloudflare-region-key="eu" # (optional) configure which region can decrypt HTTPS requests
        env:
        - name: CF_API_KEY
          valueFrom:
            secretKeyRef:
              name: cloudflare-api-key
              key: apiKey
        - name: CF_API_EMAIL
          valueFrom:
            secretKeyRef:
              name: cloudflare-api-key
              key: email
```

## Deploying an Nginx Service

Create a service file called 'nginx.yaml' with the following contents:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    external-dns.alpha.kubernetes.io/hostname: example.com
    external-dns.alpha.kubernetes.io/ttl: "120" #optional
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```

Note the annotation on the service; use the same hostname as the Cloudflare DNS zone created above. The annotation may also be a subdomain of the DNS zone (e.g. 'www.example.com').

When setting the TTL annotation on the service, you have to pass a valid TTL, which must be 120 or above. The annotation is optional; if you don't set it, the TTL will be 1 (automatic), which equals 300 seconds. For Cloudflare proxied entries, set the TTL annotation to 1 (automatic), or do not set it.

ExternalDNS uses this annotation to determine what services should be registered with DNS. Removing the annotation will cause ExternalDNS to remove the corresponding DNS records.

Create the deployment and service:

```shell
$ kubectl create -f nginx.yaml
```

Depending on where you run your service, it can take a little while for your cloud provider to create an external IP for the service. Once the service has an external IP assigned, ExternalDNS will notice the new service IP address and synchronize the Cloudflare DNS records.

## Verifying Cloudflare DNS records

Check your [Cloudflare dashboard](https://www.cloudflare.com/a/dns/example.com) to view the records for your Cloudflare DNS zone. Substitute the zone for the one created above if a different domain was used. This should show the external IP address of the service as the A record for your domain.

## Cleanup

Now that we have verified that ExternalDNS will automatically manage Cloudflare DNS records, we can delete the tutorial's example:

```shell
$ kubectl delete -f nginx.yaml
$ kubectl delete -f externaldns.yaml
```

## Setting cloudflare-proxied on a per-ingress basis

Using the `external-dns.alpha.kubernetes.io/cloudflare-proxied: "true"` annotation on your ingress, you can specify whether the proxy feature of Cloudflare should be enabled for that record. This setting will override the global `--cloudflare-proxied` setting.

## Setting cloudflare-region-key to configure regional services

Using the `external-dns.alpha.kubernetes.io/cloudflare-region-key` annotation on your ingress, you can restrict which data centers can decrypt and serve HTTPS traffic. A list of available options can be seen [here](https://developers.cloudflare.com/data-localization/regional-services/get-started/). If not set, the value will default to `global`.
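For example, an Ingress that forces the Cloudflare proxy on for its record and pins it to the `eu` region might be annotated as sketched below (the hostname and backend service name are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    external-dns.alpha.kubernetes.io/cloudflare-proxied: "true" # overrides the global --cloudflare-proxied flag
    external-dns.alpha.kubernetes.io/cloudflare-region-key: eu  # restrict decryption to EU data centers
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
```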
# DigitalOcean DNS

This tutorial describes how to set up ExternalDNS for usage within a Kubernetes cluster using DigitalOcean DNS.

Make sure to use version **>=0.4.2** of ExternalDNS for this tutorial.

## Creating a DigitalOcean DNS zone

If you want to learn about how to use DigitalOcean's DNS service, read the following tutorial series: [An Introduction to Managing DNS](https://www.digitalocean.com/community/tutorial_series/an-introduction-to-managing-dns), and specifically [How To Set Up a Host Name with DigitalOcean DNS](https://www.digitalocean.com/community/tutorials/how-to-set-up-a-host-name-with-digitalocean).

Create a new DNS zone in which you want to create your records. Let's use `example.com` as an example here.

## Creating DigitalOcean Credentials

Generate a new personal token by going to [the API settings](https://cloud.digitalocean.com/settings/api/tokens) or follow [How To Use the DigitalOcean API v2](https://www.digitalocean.com/community/tutorials/how-to-use-the-digitalocean-api-v2) if you need more information. Give the token a name and choose read and write access. The token needs to be passed to ExternalDNS, so make a note of it for later use.

The environment variable `DO_TOKEN` will be needed to run ExternalDNS with DigitalOcean.

## Deploy ExternalDNS

Connect your `kubectl` client to the cluster you want to test ExternalDNS with.

Begin by creating a Kubernetes secret to securely store your DigitalOcean API key. This key will enable ExternalDNS to authenticate with DigitalOcean:

```shell
kubectl create secret generic DO_TOKEN --from-literal=DO_TOKEN=YOUR_DIGITALOCEAN_API_KEY
```

Make sure to replace YOUR_DIGITALOCEAN_API_KEY with your actual DigitalOcean API key.

Then apply one of the following manifests to deploy ExternalDNS.

## Using Helm

Create a values.yaml file to configure ExternalDNS to use DigitalOcean as the DNS provider. This file should include the necessary environment variables:

```yaml
provider:
  name: digitalocean
env:
  - name: DO_TOKEN
    valueFrom:
      secretKeyRef:
        name: DO_TOKEN
        key: DO_TOKEN
```

### Manifest (for clusters without RBAC enabled)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: external-dns
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      containers:
      - name: external-dns
        image: registry.k8s.io/external-dns/external-dns:v0.15.0
        args:
        - --source=service # ingress is also possible
        - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.
        - --provider=digitalocean
        env:
        - name: DO_TOKEN
          valueFrom:
            secretKeyRef:
              name: DO_TOKEN
              key: DO_TOKEN
```

### Manifest (for clusters with RBAC enabled)

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
rules:
- apiGroups: [""]
  resources: ["services","endpoints","pods"]
  verbs: ["get","watch","list"]
- apiGroups: ["extensions","networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: external-dns
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: registry.k8s.io/external-dns/external-dns:v0.15.0
        args:
        - --source=service # ingress is also possible
        - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.
        - --provider=digitalocean
        env:
        - name: DO_TOKEN
          valueFrom:
            secretKeyRef:
              name: DO_TOKEN
              key: DO_TOKEN
```

## Deploying an Nginx Service

Create a service file called 'nginx.yaml' with the following contents:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    external-dns.alpha.kubernetes.io/hostname: my-app.example.com
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```

Note the annotation on the service; use the same hostname as the DigitalOcean DNS zone created above. ExternalDNS uses this annotation to determine what services should be registered with DNS. Removing the annotation will cause ExternalDNS to remove the corresponding DNS records.

Create the deployment and service:

```console
$ kubectl create -f nginx.yaml
```

Depending on where you run your service, it can take a little while for your cloud provider to create an external IP for the service. Once the service has an external IP assigned, ExternalDNS will notice the new service IP address and synchronize the DigitalOcean DNS records.

## Verifying DigitalOcean DNS records

Check your [DigitalOcean UI](https://cloud.digitalocean.com/networking/domains) to view the records for your DigitalOcean DNS zone. Click on the zone for the one created above if a different domain was used. This should show the external IP address of the service as the A record for your domain.

## Cleanup

Now that we have verified that ExternalDNS will automatically manage DigitalOcean DNS records, we can delete the tutorial's example:

```
$ kubectl delete -f nginx.yaml
$ kubectl delete -f externaldns.yaml
```

## Advanced Usage

### API Page Size

If you have a large number of domains and/or records within a domain, you may encounter API rate limiting because of the number of API calls that external-dns must make to the DigitalOcean API to retrieve the current DNS configuration during every reconciliation loop. If this is the case, use the `--digitalocean-api-page-size` option to increase the size of the pages used when querying the DigitalOcean API. (Note: external-dns uses a default of 50.)
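For illustration, this is roughly how the flag would slot into the container args shown in the manifests above (the value of 100 is only an assumed example):

```yaml
args:
- --source=service
- --provider=digitalocean
- --digitalocean-api-page-size=100 # assumed value; the default is 50
```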
external-dns
DigitalOcean DNS This tutorial describes how to setup ExternalDNS for usage within a Kubernetes cluster using DigitalOcean DNS Make sure to use 0 4 2 version of ExternalDNS for this tutorial Creating a DigitalOcean DNS zone If you want to learn about how to use DigitalOcean s DNS service read the following tutorial series An Introduction to Managing DNS https www digitalocean com community tutorial series an introduction to managing dns and specifically How To Set Up a Host Name with DigitalOcean DNS https www digitalocean com community tutorials how to set up a host name with digitalocean Create a new DNS zone where you want to create your records in Let s use example com as an example here Creating DigitalOcean Credentials Generate a new personal token by going to the API settings https cloud digitalocean com settings api tokens or follow How To Use the DigitalOcean API v2 https www digitalocean com community tutorials how to use the digitalocean api v2 if you need more information Give the token a name and choose read and write access The token needs to be passed to ExternalDNS so make a note of it for later use The environment variable DO TOKEN will be needed to run ExternalDNS with DigitalOcean Deploy ExternalDNS Connect your kubectl client to the cluster you want to test ExternalDNS with Begin by creating a Kubernetes secret to securely store your DigitalOcean API key This key will enable ExternalDNS to authenticate with DigitalOcean shell kubectl create secret generic DO TOKEN from literal DO TOKEN YOUR DIGITALOCEAN API KEY Ensure to replace YOUR DIGITALOCEAN API KEY with your actual DigitalOcean API key Then apply one of the following manifests file to deploy ExternalDNS Using Helm Create a values yaml file to configure ExternalDNS to use DigitalOcean as the DNS provider This file should include the necessary environment variables shell provider name digitalocean env name DO TOKEN valueFrom secretKeyRef name DO TOKEN key DO TOKEN Manifest for clusters without RBAC enabled yaml apiVersion apps v1 kind Deployment metadata name external dns spec replicas 1 selector matchLabels app external dns strategy type Recreate template metadata labels app external dns spec containers name external dns image registry k8s io external dns external dns v0 15 0 args source service ingress is also possible domain filter example com optional limit to only example com domains change to match the zone created above provider digitalocean env name DO TOKEN valueFrom secretKeyRef name DO TOKEN key DO TOKEN Manifest for clusters with RBAC enabled yaml apiVersion v1 kind ServiceAccount metadata name external dns apiVersion rbac authorization k8s io v1 kind ClusterRole metadata name external dns rules apiGroups resources services endpoints pods verbs get watch list apiGroups extensions networking k8s io resources ingresses verbs get watch list apiGroups resources nodes verbs list apiVersion rbac authorization k8s io v1 kind ClusterRoleBinding metadata name external dns viewer roleRef apiGroup rbac authorization k8s io kind ClusterRole name external dns subjects kind ServiceAccount name external dns namespace default apiVersion apps v1 kind Deployment metadata name external dns spec replicas 1 selector matchLabels app external dns strategy type Recreate template metadata labels app external dns spec serviceAccountName external dns containers name external dns image registry k8s io external dns external dns v0 15 0 args source service ingress is also possible domain filter example com optional limit to only 
example com domains change to match the zone created above provider digitalocean env name DO TOKEN valueFrom secretKeyRef name DO TOKEN key DO TOKEN Deploying an Nginx Service Create a service file called nginx yaml with the following contents yaml apiVersion apps v1 kind Deployment metadata name nginx spec replicas 1 selector matchLabels app nginx template metadata labels app nginx spec containers image nginx name nginx ports containerPort 80 apiVersion v1 kind Service metadata name nginx annotations external dns alpha kubernetes io hostname my app example com spec selector app nginx type LoadBalancer ports protocol TCP port 80 targetPort 80 Note the annotation on the service use the same hostname as the DigitalOcean DNS zone created above ExternalDNS uses this annotation to determine what services should be registered with DNS Removing the annotation will cause ExternalDNS to remove the corresponding DNS records Create the deployment and service console kubectl create f nginx yaml Depending where you run your service it can take a little while for your cloud provider to create an external IP for the service Once the service has an external IP assigned ExternalDNS will notice the new service IP address and synchronize the DigitalOcean DNS records Verifying DigitalOcean DNS records Check your DigitalOcean UI https cloud digitalocean com networking domains to view the records for your DigitalOcean DNS zone Click on the zone for the one created above if a different domain was used This should show the external IP address of the service as the A record for your domain Cleanup Now that we have verified that ExternalDNS will automatically manage DigitalOcean DNS records we can delete the tutorial s example kubectl delete service f nginx yaml kubectl delete service f externaldns yaml Advanced Usage API Page Size If you have a large number of domains and or records within a domain you may encounter API rate limiting because of the number of API calls that external dns must make to the DigitalOcean API to retrieve the current DNS configuration during every reconciliation loop If this is the case use the digitalocean api page size option to increase the size of the pages used when querying the DigitalOcean API Note external dns uses a default of 50
# AWS

This tutorial describes how to setup ExternalDNS for usage within a Kubernetes cluster on AWS. Make sure to use **>=0.15.0** version of ExternalDNS for this tutorial.

## IAM Policy

The following IAM Policy document allows ExternalDNS to update Route53 Resource Record Sets and Hosted Zones. You'll want to create this Policy in IAM first. In our example, we'll call the policy `AllowExternalDNSUpdates` (but you can call it whatever you prefer).

If you prefer, you may fine-tune the policy to permit updates only to explicit Hosted Zone IDs.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets"
      ],
      "Resource": [
        "arn:aws:route53:::hostedzone/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:ListResourceRecordSets",
        "route53:ListTagsForResource"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
```

If you are using the AWS CLI, you can run the following to install the above policy (saved as `policy.json`). This can be used in subsequent steps to allow ExternalDNS to access Route53 zones.

```bash
aws iam create-policy --policy-name "AllowExternalDNSUpdates" --policy-document file://policy.json

# example: arn:aws:iam::XXXXXXXXXXXX:policy/AllowExternalDNSUpdates
export POLICY_ARN=$(aws iam list-policies \
 --query 'Policies[?PolicyName==`AllowExternalDNSUpdates`].Arn' --output text)
```
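If you want to follow the note above and restrict record updates to explicit Hosted Zone IDs, a hedged sketch of a scoped variant looks like the following; the zone ID is a placeholder you would replace with your own:

```bash
# Hypothetical zone-scoped variant of policy.json; Z0123456789ABCDEFGHIJ is a placeholder.
cat <<'EOF' > scoped-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["route53:ChangeResourceRecordSets"],
      "Resource": ["arn:aws:route53:::hostedzone/Z0123456789ABCDEFGHIJ"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:ListResourceRecordSets",
        "route53:ListTagsForResource"
      ],
      "Resource": ["*"]
    }
  ]
}
EOF
```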
## Provisioning a Kubernetes cluster

You can use [eksctl](https://eksctl.io) to easily provision an [Amazon Elastic Kubernetes Service](https://aws.amazon.com/eks) ([EKS](https://aws.amazon.com/eks)) cluster that is suitable for this tutorial. See [Getting started with Amazon EKS – eksctl](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html).

```bash
export EKS_CLUSTER_NAME="my-externaldns-cluster"
export EKS_CLUSTER_REGION="us-east-2"
export KUBECONFIG="$HOME/.kube/${EKS_CLUSTER_NAME}-${EKS_CLUSTER_REGION}.yaml"

eksctl create cluster --name $EKS_CLUSTER_NAME --region $EKS_CLUSTER_REGION
```

Feel free to use other provisioning tools or an existing cluster. If [Terraform](https://www.terraform.io/) is used, the [vpc](https://registry.terraform.io/modules/terraform-aws-modules/vpc/aws/) and [eks](https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/) modules are recommended for standing up an EKS cluster. Amazon has a workshop called [Amazon EKS Terraform Workshop](https://catalog.us-east-1.prod.workshops.aws/workshops/afee4679-89af-408b-8108-44f5b1065cc7/) that may be useful for this process.

## Permissions to modify DNS zone

You will need to use the above policy (represented by the `POLICY_ARN` environment variable) to allow ExternalDNS to update records in Route53 DNS zones. Here are three common ways this can be accomplished:

* [Node IAM Role](#node-iam-role)
* [Static credentials](#static-credentials)
* [IAM Roles for Service Accounts](#iam-roles-for-service-accounts)

For this tutorial, ExternalDNS will use the environment variable `EXTERNALDNS_NS` to represent the namespace, defaulted to `default`. Feel free to change this to something else, such as `externaldns` or `kube-addons`. Make sure to edit the `subjects[0].namespace` for the `ClusterRoleBinding` resource when deploying ExternalDNS with RBAC enabled. See [Manifest (for clusters with RBAC enabled)](#manifest-for-clusters-with-rbac-enabled) for more information.

Additionally, throughout this tutorial, the example domain of `example.com` is used. Change this to an appropriate domain under your control. See the [Set up a hosted zone](#set-up-a-hosted-zone) section.

### Node IAM Role

In this method, you can attach a policy to the Node IAM Role. This will allow nodes in the Kubernetes cluster to access Route53 zones, which allows ExternalDNS to update DNS records. Given that this grants Route53 access to every container running on the node, not just ExternalDNS, this method is not recommended and is only suitable for limited test environments.

If you are using eksctl to provision a new cluster, you can add the policy at creation time with:

```bash
eksctl create cluster --external-dns-access \
  --name $EKS_CLUSTER_NAME --region $EKS_CLUSTER_REGION
```

:warning: **WARNING**: This will allow read-write access to all nodes in the cluster, not just ExternalDNS. For this reason, this method is only suitable for limited test environments.

If you already provisioned a cluster or use other provisioning tools like Terraform, you can use the AWS CLI to attach the policy to the Node IAM Role.

#### Get the Node IAM role name

The name of the role associated with the node(s) where ExternalDNS will run is needed. An easy way to get the role name is to use the AWS web console (https://console.aws.amazon.com/eks/): find any instance in the target node group and copy the role name associated with that instance.

##### Get role name with a single managed nodegroup

From the command line, if you have a single managed node group (the default with `eksctl create cluster`), you can find the role name with the following:

```bash
# get managed node group name (assuming there's only one node group)
GROUP_NAME=$(aws eks list-nodegroups --cluster-name $EKS_CLUSTER_NAME \
  --query nodegroups --out text)
# fetch role arn given node group name
NODE_ROLE_ARN=$(aws eks describe-nodegroup --cluster-name $EKS_CLUSTER_NAME \
  --nodegroup-name $GROUP_NAME --query nodegroup.nodeRole --out text)
# extract just the name part of role arn
NODE_ROLE_NAME=${NODE_ROLE_ARN##*/}
```
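Before attaching the policy, it can be worth sanity-checking that the extracted name resolves to a real role; a small, read-only sketch:

```bash
# Should print the node role's ARN; an error means the extraction above did not work.
aws iam get-role --role-name $NODE_ROLE_NAME --query Role.Arn --output text
```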
##### Get role name with other configurations

If you have multiple node groups or any unmanaged node groups, the process gets more complex. The first step is to get the instance host name of the node where ExternalDNS will be deployed or is already deployed:

```bash
# node instance name of one of the external dns pods currently running
INSTANCE_NAME=$(kubectl get pods --all-namespaces \
  --selector app.kubernetes.io/instance=external-dns \
  --output jsonpath='{.items[0].spec.nodeName}')

# instance name of one of the nodes (change if node group is different)
INSTANCE_NAME=$(kubectl get nodes --output name | cut -d'/' -f2 | tail -1)
```

With the instance host name, you can then get the instance id:

```bash
get_instance_id() {
  INSTANCE_NAME=$1 # example: ip-192-168-74-34.us-east-2.compute.internal

  # get list of nodes
  # ip-192-168-74-34.us-east-2.compute.internal   aws:///us-east-2a/i-xxxxxxxxxxxxxxxxx
  # ip-192-168-86-105.us-east-2.compute.internal  aws:///us-east-2a/i-xxxxxxxxxxxxxxxxx
  NODES=$(kubectl get nodes \
    --output jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.providerID}{"\n"}{end}')

  # print instance id from matching node
  grep $INSTANCE_NAME <<< "$NODES" | cut -d'/' -f5
}

INSTANCE_ID=$(get_instance_id $INSTANCE_NAME)
```

With the instance id, you can get the associated role name:

```bash
findRoleName() {
  INSTANCE_ID=$1

  # get all of the roles
  ROLES=($(aws iam list-roles --query Roles[*].RoleName --out text))
  for ROLE in ${ROLES[*]}; do
    # get instance profile arn
    PROFILE_ARN=$(aws iam list-instance-profiles-for-role \
      --role-name $ROLE --query InstanceProfiles[0].Arn --output text)
    # if there is an instance profile
    if [[ "$PROFILE_ARN" != "None" ]]; then
      # get all the instances with this associated instance profile
      INSTANCES=$(aws ec2 describe-instances \
        --filters Name=iam-instance-profile.arn,Values=$PROFILE_ARN \
        --query Reservations[*].Instances[0].InstanceId --out text)
      # find instances that match the instance profile
      for INSTANCE in ${INSTANCES[*]}; do
        # set role name value if there is a match
        if [[ "$INSTANCE_ID" == "$INSTANCE" ]]; then ROLE_NAME=$ROLE; fi
      done
    fi
  done

  echo $ROLE_NAME
}

NODE_ROLE_NAME=$(findRoleName $INSTANCE_ID)
```

Using the role name, you can associate the policy that was created earlier:

```bash
# attach policy arn created earlier to node IAM role
aws iam attach-role-policy --role-name $NODE_ROLE_NAME --policy-arn $POLICY_ARN
```

:warning: **WARNING**: This will allow read-write access to all pods running on the same node pool, not just the ExternalDNS pod(s).

#### Deploy ExternalDNS with attached policy to Node IAM Role

If ExternalDNS is not yet deployed, follow the steps under [Deploy ExternalDNS](#deploy-externaldns) using either RBAC or non-RBAC.

**NOTE**: Before deleting the cluster, be sure to run `aws iam detach-role-policy`. Otherwise, there can be errors as the provisioning system, such as `eksctl` or `terraform`, will not be able to delete the roles with the attached policy.

### Static credentials

In this method, the policy is attached to an IAM user, and the credentials secrets for the IAM user are then made available using a Kubernetes secret.

This method is not preferred, as the secrets in the credential file could be copied and used by an unauthorized threat actor. However, if the Kubernetes cluster is not hosted on AWS, it may be the only method available. Given this situation, it is important to limit the associated privileges to just the minimal required privileges, i.e. read-write access to Route53, and not to use a credentials file that has extra privileges beyond what is required.

#### Create IAM user and attach the policy

```bash
# create IAM user
aws iam create-user --user-name "externaldns"

# attach policy arn created earlier to IAM user
aws iam attach-user-policy --user-name "externaldns" --policy-arn $POLICY_ARN
```
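To double-check that the user carries only the policy you intend, you can list its attached policies; a quick, read-only sketch:

```bash
# Expect to see only AllowExternalDNSUpdates in the output.
aws iam list-attached-user-policies --user-name "externaldns" \
  --query "AttachedPolicies[*].PolicyName" --output text
```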
#### Create the static credentials

```bash
SECRET_ACCESS_KEY=$(aws iam create-access-key --user-name "externaldns")
ACCESS_KEY_ID=$(echo $SECRET_ACCESS_KEY | jq -r '.AccessKey.AccessKeyId')

cat <<-EOF > credentials

[default]
aws_access_key_id = $(echo $ACCESS_KEY_ID)
aws_secret_access_key = $(echo $SECRET_ACCESS_KEY | jq -r '.AccessKey.SecretAccessKey')
EOF
```

#### Create Kubernetes secret from credentials

```bash
kubectl create secret generic external-dns \
  --namespace ${EXTERNALDNS_NS:-"default"} --from-file /local/path/to/credentials
```

#### Deploy ExternalDNS using static credentials

Follow the steps under [Deploy ExternalDNS](#deploy-externaldns) using either RBAC or non-RBAC. Make sure to uncomment the section that mounts volumes, so that the credentials can be mounted.

> [!TIP]
> By default ExternalDNS takes the profile named `default` from the credentials file. If you want to use a different
> profile, you can set the environment variable `EXTERNAL_DNS_AWS_PROFILE` to the desired profile name or use the
> `--aws-profile` command line argument. It is even possible to use more than one profile at once, separated by spaces in
> the environment variable `EXTERNAL_DNS_AWS_PROFILE` or by using `--aws-profile` multiple times. In this case
> ExternalDNS looks for the hosted zones in all profiles and maintains a mapping table between zone and profile
> in order to be able to modify the zones in the correct profile.

### IAM Roles for Service Accounts

[IRSA](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) ([IAM roles for Service Accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html)) allows cluster operators to map AWS IAM Roles to Kubernetes Service Accounts. This essentially allows only ExternalDNS pods to access Route53 without exposing any static credentials. This is the preferred method as it implements [PoLP](https://csrc.nist.gov/glossary/term/principle_of_least_privilege) ([Principle of Least Privilege](https://csrc.nist.gov/glossary/term/principle_of_least_privilege)).

**IMPORTANT**: This method requires using a Kubernetes service account (KSA) and deploying with RBAC. See [Manifest (for clusters with RBAC enabled)](#manifest-for-clusters-with-rbac-enabled) when ready to deploy ExternalDNS.

**NOTE**: Similar methods to IRSA on AWS are [kiam](https://github.com/uswitch/kiam), which is in maintenance mode and has [instructions](https://github.com/uswitch/kiam/blob/HEAD/docs/IAM.md) for creating an IAM role, and also [kube2iam](https://github.com/jtblin/kube2iam). IRSA is the officially supported method for EKS clusters, so for non-EKS clusters on AWS, these other tools could be an option.

#### Verify OIDC is supported

```bash
aws eks describe-cluster --name $EKS_CLUSTER_NAME \
  --query "cluster.identity.oidc.issuer" --output text
```
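The command above prints the cluster's OIDC issuer URL. To check whether an IAM OIDC provider has already been registered for it, you can compare against the providers in IAM; a hedged, read-only sketch:

```bash
# Compare the cluster's OIDC issuer with the providers already registered in IAM.
OIDC_ISSUER=$(aws eks describe-cluster --name $EKS_CLUSTER_NAME \
  --query "cluster.identity.oidc.issuer" --output text)
aws iam list-open-id-connect-providers --output text \
  --query "OpenIDConnectProviderList[*].Arn" | tr '\t' '\n' | \
  grep "${OIDC_ISSUER#https://}" || echo "no OIDC provider registered for this cluster yet"
```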
#### Associate OIDC to cluster

Configure the cluster with an OIDC provider and add support for [IRSA](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) ([IAM roles for Service Accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html)).

If you used `eksctl` to provision the EKS cluster, you can update it with the following command:

```bash
eksctl utils associate-iam-oidc-provider \
  --cluster $EKS_CLUSTER_NAME --approve
```

If the cluster was provisioned with Terraform, you can use the `iam_openid_connect_provider` resource ([ref](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_openid_connect_provider)) to associate to the OIDC provider.

#### Create an IAM role bound to a service account

For the next steps in this process, we will need to associate the `external-dns` service account with a role used to grant access to Route53. This requires the following steps:

1. Create a role with a trust relationship to the cluster's OIDC provider
2. Attach the `AllowExternalDNSUpdates` policy to the role
3. Create the `external-dns` service account
4. Add an annotation to the service account with the role ARN

##### Use eksctl with eksctl created EKS cluster

If `eksctl` was used to provision the EKS cluster, you can perform all of these steps with the following command:

```bash
eksctl create iamserviceaccount \
  --cluster $EKS_CLUSTER_NAME \
  --name "external-dns" \
  --namespace ${EXTERNALDNS_NS:-"default"} \
  --attach-policy-arn $POLICY_ARN \
  --approve
```

##### Use aws cli with any EKS cluster

Otherwise, we can do the following steps using `aws` commands (also see [Creating an IAM role and policy for your service account](https://docs.aws.amazon.com/eks/latest/userguide/create-service-account-iam-policy-and-role.html)):

```bash
ACCOUNT_ID=$(aws sts get-caller-identity \
  --query "Account" --output text)
OIDC_PROVIDER=$(aws eks describe-cluster --name $EKS_CLUSTER_NAME \
  --query "cluster.identity.oidc.issuer" --output text | sed -e 's|^https://||')

cat <<-EOF > trust.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::$ACCOUNT_ID:oidc-provider/$OIDC_PROVIDER"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "$OIDC_PROVIDER:sub": "system:serviceaccount:${EXTERNALDNS_NS:-"default"}:external-dns",
          "$OIDC_PROVIDER:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
EOF

IRSA_ROLE="external-dns-irsa-role"
aws iam create-role --role-name $IRSA_ROLE --assume-role-policy-document file://trust.json
aws iam attach-role-policy --role-name $IRSA_ROLE --policy-arn $POLICY_ARN

ROLE_ARN=$(aws iam get-role --role-name $IRSA_ROLE --query Role.Arn --output text)

# Create service account (skip if already created)
kubectl create serviceaccount "external-dns" --namespace ${EXTERNALDNS_NS:-"default"}

# Add annotation referencing IRSA role
kubectl patch serviceaccount "external-dns" --namespace ${EXTERNALDNS_NS:-"default"} --patch \
 "{\"metadata\": { \"annotations\": { \"eks.amazonaws.com/role-arn\": \"$ROLE_ARN\" }}}"
```

If any part of this step is misconfigured, such as a role with an incorrect namespace configured in the trust relationship, an annotation pointing to the wrong role, etc., you will see errors like `WebIdentityErr: failed to retrieve credentials`. Check the configuration and make corrections.

When the service account annotations are updated, the currently running pods will have to be terminated, so that new pod(s) with the proper configuration (environment variables) will be created automatically.

When the annotation is added to the service account, ExternalDNS pod(s) scheduled afterwards will have the `AWS_ROLE_ARN`, `AWS_STS_REGIONAL_ENDPOINTS`, and `AWS_WEB_IDENTITY_TOKEN_FILE` environment variables injected automatically.
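Once a pod has been (re)created, you can spot-check that the injection worked; a small sketch, where the label selector assumes the labels used elsewhere in this tutorial and may need adjusting for your deployment:

```bash
# Print the AWS_* environment variables injected into the ExternalDNS pod by IRSA.
kubectl describe pod --namespace ${EXTERNALDNS_NS:-"default"} \
  --selector app.kubernetes.io/instance=external-dns | grep AWS_
```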
#### Deploy ExternalDNS using IRSA

Follow the steps under [Manifest (for clusters with RBAC enabled)](#manifest-for-clusters-with-rbac-enabled). Make sure to comment out the service account section if this has been created already.

If you deployed ExternalDNS before adding the service account annotation and the corresponding role, you will likely see errors like `failed to list hosted zones: AccessDenied: User`. You can delete the currently running ExternalDNS pod(s) after updating the annotation, so that newly scheduled pods will have the appropriate configuration to access Route53.

## Set up a hosted zone

*If you prefer to try-out ExternalDNS in one of the existing hosted-zones you can skip this step*

Create a DNS zone which will contain the managed DNS records. This tutorial will use the fictional domain of `example.com`.

```bash
aws route53 create-hosted-zone --name "example.com." \
  --caller-reference "external-dns-test-$(date +%s)"
```

Make a note of the nameservers that were assigned to your new zone.

```bash
ZONE_ID=$(aws route53 list-hosted-zones-by-name --output json \
  --dns-name "example.com." --query HostedZones[0].Id --out text)

aws route53 list-resource-record-sets --output text \
  --hosted-zone-id $ZONE_ID --query \
  "ResourceRecordSets[?Type == 'NS'].ResourceRecords[*].Value | []" | tr '\t' '\n'
```

This should yield something similar to this:

```
ns-695.awsdns-22.net.
ns-1313.awsdns-36.org.
ns-350.awsdns-43.com.
ns-1805.awsdns-33.co.uk.
```

If using your own domain that was registered with a third-party domain registrar, you should point your domain's name servers to the values from the list above. Please consult your registrar's documentation on how to do that.

## Deploy ExternalDNS

Connect your `kubectl` client to the cluster you want to test ExternalDNS with. Then apply one of the following manifest files to deploy ExternalDNS. You can check if your cluster has RBAC by running `kubectl api-versions | grep rbac.authorization.k8s.io`.

For clusters with RBAC enabled, be sure to choose the correct `namespace`. For this tutorial, the environment variable `EXTERNALDNS_NS` will refer to the namespace. You can set this to a value of your choice:

```bash
export EXTERNALDNS_NS="default" # externaldns, kube-addons, etc

# create namespace if it does not yet exist
kubectl get namespaces | grep -q $EXTERNALDNS_NS || \
  kubectl create namespace $EXTERNALDNS_NS
```

## Using Helm (with OIDC)

Create a values.yaml file to configure ExternalDNS:

```yaml
provider:
  name: aws
env:
  - name: AWS_DEFAULT_REGION
    value: us-east-1 # change to region where EKS is installed
```

Finally, install the ExternalDNS chart with Helm using the configuration specified in your values.yaml file:

```shell
helm upgrade --install external-dns external-dns/external-dns --values values.yaml
```
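After the release is installed, a quick way to confirm that ExternalDNS can reach Route53 is to check its logs; the deployment name below assumes the chart's default naming for a release called `external-dns`:

```bash
# Look for the AWS provider starting up and your hosted zones being listed without errors.
kubectl logs deployment/external-dns --tail=50
```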
### When using clusters without RBAC enabled

Save the following below as `externaldns-no-rbac.yaml`.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
  labels:
    app.kubernetes.io/name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app.kubernetes.io/name: external-dns
  template:
    metadata:
      labels:
        app.kubernetes.io/name: external-dns
    spec:
      containers:
        - name: external-dns
          image: registry.k8s.io/external-dns/external-dns:v0.15.0
          args:
            - --source=service
            - --source=ingress
            - --domain-filter=example.com # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones
            - --provider=aws
            - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization
            - --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both)
            - --registry=txt
            - --txt-owner-id=my-hostedzone-identifier
          env:
            - name: AWS_DEFAULT_REGION
              value: us-east-1 # change to region where EKS is installed
            # # Uncomment below if using static credentials
            # - name: AWS_SHARED_CREDENTIALS_FILE
            #   value: /.aws/credentials
          # volumeMounts:
          #   - name: aws-credentials
          #     mountPath: /.aws
          #     readOnly: true
      # volumes:
      #   - name: aws-credentials
      #     secret:
      #       secretName: external-dns
```

When ready you can deploy:

```bash
kubectl create --filename externaldns-no-rbac.yaml \
  --namespace ${EXTERNALDNS_NS:-"default"}
```

### When using clusters with RBAC enabled

If you're using EKS, you can update the `values.yaml` file you created earlier to include the annotations to link the Role ARN you created before.

```yaml
provider:
  name: aws
serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::${ACCOUNT_ID}:role/${EXTERNALDNS_ROLE_NAME:-"external-dns"}
```

If you need to provide credentials directly using a secret (i.e. you're not using EKS), you can change the `values.yaml` file to include volumes and volume mounts.

```yaml
provider:
  name: aws
env:
  - name: AWS_SHARED_CREDENTIALS_FILE
    value: /etc/aws/credentials/my_credentials
extraVolumes:
  - name: aws-credentials
    secret:
      secretName: external-dns
# In this example, the secret will have the data stored in a key named `my_credentials`
extraVolumeMounts:
  - name: aws-credentials
    mountPath: /etc/aws/credentials
    readOnly: true
```

When ready, update your Helm installation:

```shell
helm upgrade --install external-dns external-dns/external-dns --values values.yaml
```

## Arguments

This is not the full list of arguments, but a few that were chosen.

### aws-zone-type

`aws-zone-type` allows filtering for private and public zones.

## Annotations

Annotations which are specific to AWS.

### alias

`external-dns.alpha.kubernetes.io/alias`, if set to `true` on an ingress, will create an ALIAS record when the target is an ALIAS as well. To make the target an alias, the ingress needs to be configured correctly as described in [the docs](./gke-nginx.md#with-a-separate-tcp-load-balancer). In particular, the argument `--publish-service=default/nginx-ingress-controller` has to be set on the `nginx-ingress-controller` container. If one uses the `nginx-ingress` Helm chart, this flag can be set with the `controller.publishService.enabled` configuration option.

### target-hosted-zone

`external-dns.alpha.kubernetes.io/aws-target-hosted-zone` can optionally be set to the ID of a Route53 hosted zone. This will force external-dns to use the specified hosted zone when creating an ALIAS target.
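For example, once an ingress exists (such as the `nginx` ingress created later in this tutorial), a hedged sketch of applying this annotation with `kubectl`; the zone ID is a placeholder:

```bash
# Z0123456789ABCDEFGHIJ is a placeholder canonical hosted zone ID; replace it with your own.
kubectl annotate ingress nginx \
  "external-dns.alpha.kubernetes.io/aws-target-hosted-zone=Z0123456789ABCDEFGHIJ" --overwrite
```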
### aws-zone-match-parent

`aws-zone-match-parent` allows supporting subdomains within the same zone by using their parent domain, i.e. `--domain-filter=x.example.com` would create a DNS entry for x.example.com (and subdomains thereof).

```yaml
## hosted zone domain: example.com
--domain-filter=x.example.com,example.com --aws-zone-match-parent
```

## Verify ExternalDNS works (Service example)

Create the following sample application to test that ExternalDNS works.

> For services ExternalDNS will look for the annotation `external-dns.alpha.kubernetes.io/hostname` on the service and use the corresponding value.
> If you want to give multiple names to a service, you can set them in `external-dns.alpha.kubernetes.io/hostname` with a comma (`,`) separator.

For this verification phase, you can use the default or another namespace for the nginx demo, for example:

```bash
NGINXDEMO_NS="nginx"
kubectl get namespaces | grep -q $NGINXDEMO_NS || kubectl create namespace $NGINXDEMO_NS
```

Save the following manifest below as `nginx.yaml`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    external-dns.alpha.kubernetes.io/hostname: nginx.example.com
spec:
  type: LoadBalancer
  ports:
    - port: 80
      name: http
      targetPort: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx
          name: nginx
          ports:
            - containerPort: 80
              name: http
```

Deploy the nginx deployment and service with:

```bash
kubectl create --filename nginx.yaml --namespace ${NGINXDEMO_NS:-"default"}
```

Verify that the load balancer was allocated with:

```bash
kubectl get service nginx --namespace ${NGINXDEMO_NS:-"default"}
```

This should show something like:

```bash
NAME    TYPE           CLUSTER-IP     EXTERNAL-IP                                                                   PORT(S)        AGE
nginx   LoadBalancer   10.100.47.41   ae11c2360188411e7951602725593fd1-1224345803.eu-central-1.elb.amazonaws.com.   80:32749/TCP   12m
```

After roughly two minutes, check that a corresponding DNS record for your service was created.

```bash
aws route53 list-resource-record-sets --output json --hosted-zone-id $ZONE_ID \
  --query "ResourceRecordSets[?Name == 'nginx.example.com.']|[?Type == 'A']"
```

This should show something like:

```json
[
    {
        "Name": "nginx.example.com.",
        "Type": "A",
        "AliasTarget": {
            "HostedZoneId": "ZEWFWZ4R16P7IB",
            "DNSName": "ae11c2360188411e7951602725593fd1-1224345803.eu-central-1.elb.amazonaws.com.",
            "EvaluateTargetHealth": true
        }
    }
]
```

You can also fetch the corresponding text records:

```bash
aws route53 list-resource-record-sets --output json --hosted-zone-id $ZONE_ID \
  --query "ResourceRecordSets[?Name == 'nginx.example.com.']|[?Type == 'TXT']"
```

This will show something like:

```json
[
    {
        "Name": "nginx.example.com.",
        "Type": "TXT",
        "TTL": 300,
        "ResourceRecords": [
            {
                "Value": "\"heritage=external-dns,external-dns/owner=external-dns,external-dns/resource=service/default/nginx\""
            }
        ]
    }
]
```

Note the TXT record created alongside the ALIAS record. The TXT record signifies that the corresponding ALIAS record is managed by ExternalDNS. This makes ExternalDNS safe for running in environments where there are other records managed via other means.

For more information about ALIAS records, see [Choosing between alias and non-alias records](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-choosing-alias-non-alias.html).

Let's check that we can resolve this DNS name. We'll ask the nameservers assigned to your zone first.

```bash
dig +short @ns-5514.awsdns-53.org. nginx.example.com.
```
This should return 1+ IP addresses that correspond to the ELB FQDN, i.e. `ae11c2360188411e7951602725593fd1-1224345803.eu-central-1.elb.amazonaws.com.`.

Next try the public nameservers configured by the DNS client on your system:

```bash
dig +short nginx.example.com.
```

If you hooked up your DNS zone with its parent zone correctly, you can use `curl` to access your site.

```bash
curl nginx.example.com.
```

This should show something like:

```html
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
</head>
<body>
<h1>Welcome to nginx!</h1>
...
</body>
</html>
```

## Verify ExternalDNS works (Ingress example)

With the previous `deployment` and `service` objects deployed, we can add an `ingress` object and configure a FQDN value for the `host` key. The ingress controller will match incoming HTTP traffic and route it to the appropriate backend service based on the `host` key.

> For ingress objects ExternalDNS will create a DNS record based on the host specified for the ingress object.

For this tutorial, we have two endpoints, the service with `LoadBalancer` type and an ingress. For practical purposes, if an ingress is used, the service type can be changed to `ClusterIP`, as two endpoints are unnecessary in this scenario.

**IMPORTANT**: This requires that an ingress controller has been installed in your Kubernetes cluster. EKS does not come with an ingress controller by default. A popular ingress controller is [ingress-nginx](https://github.com/kubernetes/ingress-nginx/), which can be installed by a [helm chart](https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx) or by [manifests](https://kubernetes.github.io/ingress-nginx/deploy/#aws).

Create an ingress resource manifest file named `ingress.yaml` with the contents below:

```yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
spec:
  ingressClassName: nginx
  rules:
    - host: server.example.com
      http:
        paths:
          - backend:
              service:
                name: nginx
                port:
                  number: 80
            path: /
            pathType: Prefix
```

When ready, you can deploy this with:

```bash
kubectl create --filename ingress.yaml --namespace ${NGINXDEMO_NS:-"default"}
```

Watch the status of the ingress until the ADDRESS field is populated.

```bash
kubectl get ingress --watch --namespace ${NGINXDEMO_NS:-"default"}
```

You should see something like this:

```
NAME    CLASS    HOSTS                ADDRESS                                                                       PORTS   AGE
nginx   <none>   server.example.com                                                                                 80      47s
nginx   <none>   server.example.com   ae11c2360188411e7951602725593fd1-1224345803.eu-central-1.elb.amazonaws.com.   80      54s
```

For the ingress test, run through similar checks, but using the domain name used for the ingress:

```bash
# check records on route53
aws route53 list-resource-record-sets --output json --hosted-zone-id $ZONE_ID \
  --query "ResourceRecordSets[?Name == 'server.example.com.']"

# query using a route53 name server
dig +short @ns-5514.awsdns-53.org. server.example.com.

# query using the default name server
dig +short server.example.com.

# connect to the nginx web server through the ingress
curl server.example.com.
```

## More service annotation options

### Custom TTL

The default DNS record TTL (Time-To-Live) is 300 seconds. You can customize this value by setting the annotation `external-dns.alpha.kubernetes.io/ttl`, e.g. by modifying the service manifest YAML file above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    external-dns.alpha.kubernetes.io/hostname: nginx.example.com
    external-dns.alpha.kubernetes.io/ttl: "60"
spec:
    ...
```

This will set the DNS record's TTL to 60 seconds.
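Alternatively, the same TTL can be applied to the already-deployed service from the verification step without editing the manifest; a small sketch:

```bash
# ExternalDNS will pick up the new TTL on its next synchronization loop.
kubectl annotate service nginx --namespace ${NGINXDEMO_NS:-"default"} \
  "external-dns.alpha.kubernetes.io/ttl=60" --overwrite
```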
### Routing policies

Route53 offers [different routing policies](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html). The routing policy for a record can be controlled with the following annotations:

* `external-dns.alpha.kubernetes.io/set-identifier`: this **needs** to be set to use any of the following routing policies

For any given DNS name, only **one** of the following routing policies can be used:

* Weighted records: `external-dns.alpha.kubernetes.io/aws-weight`
* Latency-based routing: `external-dns.alpha.kubernetes.io/aws-region`
* Failover: `external-dns.alpha.kubernetes.io/aws-failover`
* Geolocation-based routing:
  * `external-dns.alpha.kubernetes.io/aws-geolocation-continent-code`
  * `external-dns.alpha.kubernetes.io/aws-geolocation-country-code`
  * `external-dns.alpha.kubernetes.io/aws-geolocation-subdivision-code`
* Multi-value answer: `external-dns.alpha.kubernetes.io/aws-multi-value-answer`

### Associating DNS records with healthchecks

You can configure Route53 to associate DNS records with healthchecks for automated DNS failover using the `external-dns.alpha.kubernetes.io/aws-health-check-id: <health-check-id>` annotation.

Note: ExternalDNS does not support creating healthchecks, and assumes that `<health-check-id>` already exists.

## Canonical Hosted Zones

When creating ALIAS type records in Route53, it is required that external-dns be aware of the canonical hosted zone in which the specified hostname is created. External-dns is able to automatically identify the canonical hosted zone for many hostnames based upon known hostname suffixes which are defined in [aws.go](https://github.com/kubernetes-sigs/external-dns/blob/master/provider/aws/aws.go#L65). If a hostname does not have a known suffix, then the suffix can be added into `aws.go` or the [target-hosted-zone annotation](#target-hosted-zone) can be used to manually define the ID of the canonical hosted zone.

## Govcloud caveats

Due to the special nature of how Route53 runs in GovCloud, there are a few tweaks to the deployment settings.

* An environment variable with the name `AWS_REGION` set to either `us-gov-west-1` or `us-gov-east-1` is required. Otherwise it tries to look up a region that does not exist in GovCloud and errors out.

```yaml
env:
  - name: AWS_REGION
    value: us-gov-west-1
```

* Route53 in GovCloud does not allow aliases. Therefore, container args must be set so that it uses CNAMEs, and a txt-prefix must be set to something. Otherwise, it will try to create a TXT record with the same value as the CNAME itself, which is not allowed.

```yaml
args:
  - --aws-prefer-cname
  - --txt-prefix=
```

* The first two changes are needed if you use Route53 in GovCloud, which only supports private zones. There is also no cross-account IAM whatsoever between GovCloud and commercial AWS accounts. If services and ingresses need to make Route 53 entries to a public zone in a commercial account, you will have to set the env variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` with a key and secret to the commercial account that has sufficient rights.

```yaml
env:
  - name: AWS_ACCESS_KEY_ID
    value: XXXXXXXXX
  - name: AWS_SECRET_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name:
        key:
```

## DynamoDB Registry

The DynamoDB Registry can be used to store DNS record metadata. See the [DynamoDB Registry Tutorial](../registry/dynamodb.md) for more information.

## Clean up

Make sure to delete all Service objects before terminating the cluster so all load balancers get cleaned up correctly.
```bash
kubectl delete service nginx
```

**IMPORTANT**: If you attached a policy to the Node IAM Role, then you will want to detach this before deleting the EKS cluster. Otherwise, the role resource will be locked, and the cluster cannot be deleted, especially if it was provisioned by automation like `terraform` or `eksctl`.

```bash
aws iam detach-role-policy --role-name $NODE_ROLE_NAME --policy-arn $POLICY_ARN
```

If the cluster was provisioned using `eksctl`, you can delete the cluster with:

```bash
eksctl delete cluster --name $EKS_CLUSTER_NAME --region $EKS_CLUSTER_REGION
```

Give ExternalDNS some time to clean up the DNS records for you. Then delete the hosted zone if you created one for testing purposes.

```bash
aws route53 delete-hosted-zone --id $ZONE_ID # e.g /hostedzone/ZEWFWZ4R16P7IB
```

If IAM user credentials were used, you can remove the user with:

```bash
aws iam detach-user-policy --user-name "externaldns" --policy-arn $POLICY_ARN

# If static credentials were used
aws iam delete-access-key --user-name "externaldns" --access-key-id $ACCESS_KEY_ID
aws iam delete-user --user-name "externaldns"
```

If IRSA was used, you can remove the IRSA role with:

```bash
aws iam detach-role-policy --role-name $IRSA_ROLE --policy-arn $POLICY_ARN
aws iam delete-role --role-name $IRSA_ROLE
```

Delete any unneeded policies:

```bash
aws iam delete-policy --policy-arn $POLICY_ARN
```

## Throttling

Route53 has a [5 API requests per second per account hard quota](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/DNSLimitations.html#limits-api-requests-route-53). Running several fast-polling ExternalDNS instances in a given account can easily hit that limit. Some ways to reduce the request rate include:

* Reduce the polling loop's synchronization interval at the possible cost of slower change propagation (but see `--events` below to reduce the impact).
  * `--interval=5m` (default `1m`)
* Enable a cache to store the zone records list. It comes with a cost: slower propagation when the zone gets modified from other sources such as the AWS console, terraform, cloudformation or anything similar.
  * `--provider-cache-time=15m` (default `0m`)
* Trigger the polling loop on changes to K8s objects, rather than only at `interval`, and ensure a minimum of time between events, to have responsive updates with long poll intervals
  * `--events`
  * `--min-event-sync-interval=5m` (default `5s`)
* Limit the [sources watched](https://github.com/kubernetes-sigs/external-dns/blob/master/pkg/apis/externaldns/types.go#L364) when the `--events` flag is specified to specific types, namespaces, labels, or annotations
  * `--source=ingress --source=service` - specify multiple times for multiple sources
  * `--namespace=my-app`
  * `--label-filter=app in (my-app)`
  * `--ingress-class=nginx-external`
* Limit services watched by type (not applicable to ingress or other types)
  * `--service-type-filter=LoadBalancer` default `all`
* Limit the hosted zones considered
  * `--zone-id-filter=ABCDEF12345678` - specify multiple times if needed
  * `--domain-filter=example.com` by domain suffix - specify multiple times if needed
  * `--regex-domain-filter=example*` by domain suffix but as a regex - overrides domain-filter
  * `--exclude-domains=ignore.this.example.com` to exclude a domain or subdomain
  * `--regex-domain-exclusion=ignore*` subtracts its matches from `regex-domain-filter`'s matches
  * `--aws-zone-type=public` only sync zones of this type `[public|private]`
  * `--aws-zone-tags=owner=k8s` only sync zones with this tag
* If the list of zones managed by ExternalDNS doesn't change frequently, cache it by setting a TTL.
  * `--aws-zones-cache-duration=3h` (default `0` - disabled)
* Increase the number of changes applied to Route53 in each batch
  * `--aws-batch-change-size=4000` (default `1000`)
* Increase the interval between changes
  * `--aws-batch-change-interval=10s` (default `1s`)
* Introduce some jitter to the pod initialization, so that when multiple instances of ExternalDNS are updated at the same time they do not make their requests on the same second. A simple way to implement randomised startup is with an init container:

```
...
    spec:
      initContainers:
      - name: init-jitter
        image: registry.k8s.io/external-dns/external-dns:v0.15.0
        command:
        - /bin/sh
        - -c
        - 'FOR=$((RANDOM % 10))s;echo "Sleeping for $FOR";sleep $FOR'
      containers:
...
```

### EKS

An effective starting point for EKS with an ingress controller might look like:

```bash
--interval=5m
--events
--source=ingress
--domain-filter=example.com
--aws-zones-cache-duration=1h
```

### Batch size options

After external-dns generates all changes, it will perform a task to group those changes into batches. Each change will be validated against the batch-change-size limits. If at least one of those parameters is out of range, the change will be moved to a separate batch. If the change can't fit into any batch, *it will be skipped.*<br>
There are 3 options to control batch size for the AWS provider:

* Maximum number of changes added to one batch
  * `--aws-batch-change-size` (default `1000`)
* Maximum size of changes in bytes added to one batch
  * `--aws-batch-change-size-bytes` (default `32000`)
* Maximum value count of changes added to one batch
  * `--aws-batch-change-size-values` (default `1000`)

`aws-batch-change-size` can be very useful for throttling purposes and can be set to any value.

Default values for the flags `aws-batch-change-size-bytes` and `aws-batch-change-size-values` are taken from the [AWS documentation](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/DNSLimitations.html#limits-api-requests) for the Route53 API.
**You should not change those values unless you really have to.** <br>
Because those limits are in place, `aws-batch-change-size` can be set to any value: even if your batch size is `4000` records, the change will be split into separate batches due to the bytes/values size limits, and the apply request will finish without issues.
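As a concrete illustration, here is a hedged sketch of adding a few of the throttling-related flags discussed above to the Helm values used earlier in this tutorial; it assumes the chart accepts extra container flags through `extraArgs`, and the values are purely illustrative:

```bash
# Append illustrative throttling flags to values.yaml and roll them out with Helm.
cat <<'EOF' >> values.yaml
extraArgs:
  - --events
  - --interval=5m
  - --aws-zones-cache-duration=1h
EOF
helm upgrade --install external-dns external-dns/external-dns --values values.yaml
```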
IMPORTANT This requires that an ingress controller has been installed in your Kubernetes cluster EKS does not come with an ingress controller by default A popular ingress controller is ingress nginx https github com kubernetes ingress nginx which can be installed by a helm chart https artifacthub io packages helm ingress nginx ingress nginx or by manifests https kubernetes github io ingress nginx deploy aws Create an ingress resource manifest file named ingress yaml with the contents below yaml apiVersion networking k8s io v1 kind Ingress metadata name nginx spec ingressClassName nginx rules host server example com http paths backend service name nginx port number 80 path pathType Prefix When ready you can deploy this with bash kubectl create filename ingress yaml namespace NGINXDEMO NS default Watch the status of the ingress until the ADDRESS field is populated bash kubectl get ingress watch namespace NGINXDEMO NS default You should see something like this NAME CLASS HOSTS ADDRESS PORTS AGE nginx none server example com 80 47s nginx none server example com ae11c2360188411e7951602725593fd1 1224345803 eu central 1 elb amazonaws com 80 54s For the ingress test run through similar checks but using domain name used for the ingress bash check records on route53 aws route53 list resource record sets output json hosted zone id ZONE ID query ResourceRecordSets Name server example com query using a route53 name server dig short ns 5514 awsdns 53 org server example com query using the default name server dig short server example com connect to the nginx web server through the ingress curl server example com More service annotation options Custom TTL The default DNS record TTL Time To Live is 300 seconds You can customize this value by setting the annotation external dns alpha kubernetes io ttl e g modify the service manifest YAML file above yaml apiVersion v1 kind Service metadata name nginx annotations external dns alpha kubernetes io hostname nginx example com external dns alpha kubernetes io ttl 60 spec This will set the DNS record s TTL to 60 seconds Routing policies Route53 offers different routing policies https docs aws amazon com Route53 latest DeveloperGuide routing policy html The routing policy for a record can be controlled with the following annotations external dns alpha kubernetes io set identifier this needs to be set to use any of the following routing policies For any given DNS name only one of the following routing policies can be used Weighted records external dns alpha kubernetes io aws weight Latency based routing external dns alpha kubernetes io aws region Failover external dns alpha kubernetes io aws failover Geolocation based routing external dns alpha kubernetes io aws geolocation continent code external dns alpha kubernetes io aws geolocation country code external dns alpha kubernetes io aws geolocation subdivision code Multi value answer external dns alpha kubernetes io aws multi value answer Associating DNS records with healthchecks You can configure Route53 to associate DNS records with healthchecks for automated DNS failover using external dns alpha kubernetes io aws health check id health check id annotation Note ExternalDNS does not support creating healthchecks and assumes that health check id already exists Canonical Hosted Zones When creating ALIAS type records in Route53 it is required that external dns be aware of the canonical hosted zone in which the specified hostname is created External dns is able to automatically identify the canonical hosted zone for many 
hostnames based upon known hostname suffixes which are defined in aws go https github com kubernetes sigs external dns blob master provider aws aws go L65 If a hostname does not have a known suffix then the suffix can be added into aws go or the target hosted zone annotation target hosted zone can be used to manually define the ID of the canonical hosted zone Govcloud caveats Due to the special nature with how Route53 runs in Govcloud there are a few tweaks in the deployment settings An Environment variable with name of AWS REGION set to either us gov west 1 or us gov east 1 is required Otherwise it tries to lookup a region that does not exist in Govcloud and it errors out yaml env name AWS REGION value us gov west 1 Route53 in Govcloud does not allow aliases Therefore container args must be set so that it uses CNAMES and a txt prefix must be set to something Otherwise it will try to create a TXT record with the same value than the CNAME itself which is not allowed yaml args aws prefer cname txt prefix The first two changes are needed if you use Route53 in Govcloud which only supports private zones There are also no cross account IAM whatsoever between Govcloud and commercial AWS accounts If services and ingresses need to make Route 53 entries to an public zone in a commercial account you will have set env variables of AWS ACCESS KEY ID and AWS SECRET ACCESS KEY with a key and secret to the commercial account that has the sufficient rights yaml env name AWS ACCESS KEY ID value XXXXXXXXX name AWS SECRET ACCESS KEY valueFrom secretKeyRef name key DynamoDB Registry The DynamoDB Registry can be used to store dns records metadata See the DynamoDB Registry Tutorial registry dynamodb md for more information Clean up Make sure to delete all Service objects before terminating the cluster so all load balancers get cleaned up correctly bash kubectl delete service nginx IMPORTANT If you attached a policy to the Node IAM Role then you will want to detach this before deleting the EKS cluster Otherwise the role resource will be locked and the cluster cannot be deleted especially if it was provisioned by automation like terraform or eksctl bash aws iam detach role policy role name NODE ROLE NAME policy arn POLICY ARN If the cluster was provisioned using eksctl you can delete the cluster with bash eksctl delete cluster name EKS CLUSTER NAME region EKS CLUSTER REGION Give ExternalDNS some time to clean up the DNS records for you Then delete the hosted zone if you created one for the testing purpose bash aws route53 delete hosted zone id ZONE ID e g hostedzone ZEWFWZ4R16P7IB If IAM user credentials were used you can remove the user with bash aws iam detach user policy user name externaldns policy arn POLICY ARN If static credentials were used aws iam delete access key user name externaldns access key id ACCESS KEY ID aws iam delete user user name externaldns If IRSA was used you can remove the IRSA role with bash aws iam detach role policy role name IRSA ROLE policy arn POLICY ARN aws iam delete role role name IRSA ROLE Delete any unneeded policies bash aws iam delete policy policy arn POLICY ARN Throttling Route53 has a 5 API requests per second per account hard quota https docs aws amazon com Route53 latest DeveloperGuide DNSLimitations html limits api requests route 53 Running several fast polling ExternalDNS instances in a given account can easily hit that limit Some ways to reduce the request rate include Reduce the polling loop s synchronization interval at the possible cost of slower change 
propagation but see events below to reduce the impact interval 5m default 1m Enable a Cache to store the zone records list It comes with a cost slower propagation when the zone gets modified from other sources such as the AWS console terraform cloudformation or anything similar provider cache time 15m default 0m Trigger the polling loop on changes to K8s objects rather than only at interval and ensure a minimum of time between events to have responsive updates with long poll intervals events min event sync interval 5m default 5s Limit the sources watched https github com kubernetes sigs external dns blob master pkg apis externaldns types go L364 when the events flag is specified to specific types namespaces labels or annotations source ingress source service specify multiple times for multiple sources namespace my app label filter app in my app ingress class nginx external Limit services watched by type not applicable to ingress or other types service type filter LoadBalancer default all Limit the hosted zones considered zone id filter ABCDEF12345678 specify multiple times if needed domain filter example com by domain suffix specify multiple times if needed regex domain filter example by domain suffix but as a regex overrides domain filter exclude domains ignore this example com to exclude a domain or subdomain regex domain exclusion ignore subtracts it s matches from regex domain filter s matches aws zone type public only sync zones of this type public private aws zone tags owner k8s only sync zones with this tag If the list of zones managed by ExternalDNS doesn t change frequently cache it by setting a TTL aws zones cache duration 3h default 0 disabled Increase the number of changes applied to Route53 in each batch aws batch change size 4000 default 1000 Increase the interval between changes aws batch change interval 10s default 1s Introducing some jitter to the pod initialization so that when multiple instances of ExternalDNS are updated at the same time they do not make their requests on the same second A simple way to implement randomised startup is with an init container spec initContainers name init jitter image registry k8s io external dns external dns v0 15 0 command bin sh c FOR RANDOM 10 s echo Sleeping for FOR sleep FOR containers EKS An effective starting point for EKS with an ingress controller might look like bash interval 5m events source ingress domain filter example com aws zones cache duration 1h Batch size options After external dns generates all changes it will perform a task to group those changes into batches Each change will be validated against batch change size limits If at least one of those parameters out of range the change will be moved to a separate batch If the change can t fit into any batch it will be skipped br There are 3 options to control batch size for AWS provider Maximum amount of changes added to one batch aws batch change size default 1000 Maximum size of changes in bytes added to one batch aws batch change size bytes default 32000 Maximum value count of changes added to one batch aws batch change size values default 1000 aws batch change size can be very useful for throttling purposes and can be set to any value Default values for flags aws batch change size bytes and aws batch change size values are taken from AWS documentation https docs aws amazon com Route53 latest DeveloperGuide DNSLimitations html limits api requests for Route53 API You should not change those values until you really have to br Because those limits are in place aws batch 
change size can be set to any value Even if your batch size is 4000 records your change will be split to separate batches due to bytes values size limits and apply request will be finished without issues
# RFC2136 provider

This tutorial describes how to use the RFC2136 provider with either BIND or Windows DNS.

## Using with BIND

To use external-dns with BIND: generate/procure a key, configure DNS and add a deployment of external-dns.

### Server credentials:

- RFC2136 was developed for and tested with [BIND](https://www.isc.org/downloads/bind/) DNS server. This documentation assumes that you already have a configured and working server. If you don't, please check the BIND documentation or tutorials.
- If your DNS is provided for you, ask for a TSIG key authorized to update and transfer the zone you wish to update. The key will look something like below. Skip the next steps regarding BIND setup.

```text
key "externaldns-key" {
    algorithm hmac-sha256;
    secret "96Ah/a2g0/nLeFGK+d/0tzQcccf9hCEIy34PoXX2Qg8=";
};
```

- If you are your own DNS administrator, create a TSIG key. Use `tsig-keygen -a hmac-sha256 externaldns` or, on older distributions, `dnssec-keygen -a HMAC-SHA256 -b 256 -n HOST externaldns`. You will end up with a key printed to standard out like above (or, in the case of dnssec-keygen, in a file called `Kexternaldns......key`).

### BIND Configuration:

If you do not administer your own DNS, skip to the RFC2136 provider configuration.

- Edit your named.conf file (or appropriate included file) and add/change the following.
- Make sure you are listening on the right interfaces: at least the interface external-dns will be communicating over and the interface that faces the internet.
- Add the key that you generated or were given above. Copy and paste the four lines that you got (not the same as the example key) into your file.
- Create a zone for kubernetes. If you already have a zone, skip to the next step. (I put the zone in its own subdirectory because named, which shouldn't be running as root, needs to create a journal file and the default zone directory isn't writeable by named.)

```text
zone "k8s.example.org" {
    type master;
    file "/etc/bind/pri/k8s/k8s.zone";
};
```

- Add your key to both transfer and update. For instance, with our previous zone:

```text
zone "k8s.example.org" {
    type master;
    file "/etc/bind/pri/k8s/k8s.zone";
    allow-transfer {
        key "externaldns-key";
    };
    update-policy {
        grant externaldns-key zonesub ANY;
    };
};
```

- Create a zone file (k8s.zone):

```text
$TTL 60 ; 1 minute
k8s.example.org     IN SOA  k8s.example.org. root.k8s.example.org. (
                            16  ; serial
                            60  ; refresh (1 minute)
                            60  ; retry (1 minute)
                            60  ; expire (1 minute)
                            60  ; minimum (1 minute)
                            )
                    NS      ns.k8s.example.org.
ns                  A       192.168.0.1
```

- Reload (or restart) named.

### Using external-dns

To use external-dns, add an ingress or a LoadBalancer service with a host that is part of the domain-filter. For example, both of the following would produce A records:

```text
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    external-dns.alpha.kubernetes.io/hostname: svc.example.org
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: ingress.example.org
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 8000
```

### Custom TTL

The default DNS record TTL (Time-To-Live) is 0 seconds. You can customize this value by setting the annotation `external-dns.alpha.kubernetes.io/ttl`.
e.g., modify the service manifest YAML file above:

```
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    external-dns.alpha.kubernetes.io/hostname: nginx.external-dns-test.my-org.com
    external-dns.alpha.kubernetes.io/ttl: "60"
spec:
...
```

This will set the DNS record's TTL to 60 seconds.

A default TTL for all records can be set using the `--rfc2136-min-ttl` flag with a time in seconds, minutes or hours, such as `--rfc2136-min-ttl=60s`.

There are other annotations that can affect the generation of DNS records, but these are beyond the scope of this tutorial and are covered in the main documentation.

### Generate reverse DNS records

If you want to generate reverse DNS records for your services, you have to enable the functionality using the `--rfc2136-create-ptr` flag. You also have to add the zone to the list of zones managed by ExternalDNS via the `--rfc2136-zone` and `--domain-filter` flags. An example of a valid configuration is the following:

```
--domain-filter=157.168.192.in-addr.arpa --rfc2136-zone=157.168.192.in-addr.arpa
```

PTR record tracking is managed alongside the A/AAAA record, so you can't create PTR records for A/AAAA records that have already been generated.

### Test with external-dns installed on local machine (optional)

You may install external-dns and test on a local machine by running:

```
external-dns --txt-owner-id k8s --provider rfc2136 --rfc2136-host=192.168.0.1 --rfc2136-port=53 --rfc2136-zone=k8s.example.org --rfc2136-tsig-secret=96Ah/a2g0/nLeFGK+d/0tzQcccf9hCEIy34PoXX2Qg8= --rfc2136-tsig-secret-alg=hmac-sha256 --rfc2136-tsig-keyname=externaldns-key --rfc2136-tsig-axfr --source ingress --once --domain-filter=k8s.example.org --dry-run
```

- host should be the IP of your master DNS server.
- tsig-secret should be changed to match your secret.
- tsig-keyname needs to match the keyname you used (if you changed it).
- domain-filter can be used as shown to filter the domains you wish to update.

### RFC2136 provider configuration:

In order to use external-dns with your cluster you need to add a deployment with access to your ingress and service resources. The following are two example manifests, with and without RBAC respectively.
- With RBAC: ```text apiVersion: v1 kind: Namespace metadata: name: external-dns labels: name: external-dns --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: external-dns namespace: external-dns rules: - apiGroups: - "" resources: - services - endpoints - pods - nodes verbs: - get - watch - list - apiGroups: - extensions - networking.k8s.io resources: - ingresses verbs: - get - list - watch --- apiVersion: v1 kind: ServiceAccount metadata: name: external-dns namespace: external-dns --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: external-dns-viewer namespace: external-dns roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: external-dns subjects: - kind: ServiceAccount name: external-dns namespace: external-dns --- apiVersion: apps/v1 kind: Deployment metadata: name: external-dns namespace: external-dns spec: selector: matchLabels: app: external-dns template: metadata: labels: app: external-dns spec: serviceAccountName: external-dns containers: - name: external-dns image: registry.k8s.io/external-dns/external-dns:v0.15.0 args: - --registry=txt - --txt-prefix=external-dns- - --txt-owner-id=k8s - --provider=rfc2136 - --rfc2136-host=192.168.0.1 - --rfc2136-port=53 - --rfc2136-zone=k8s.example.org - --rfc2136-zone=k8s.your-zone.org - --rfc2136-tsig-secret=96Ah/a2g0/nLeFGK+d/0tzQcccf9hCEIy34PoXX2Qg8= - --rfc2136-tsig-secret-alg=hmac-sha256 - --rfc2136-tsig-keyname=externaldns-key - --rfc2136-tsig-axfr - --source=ingress - --domain-filter=k8s.example.org ``` - Without RBAC: ```text apiVersion: v1 kind: Namespace metadata: name: external-dns labels: name: external-dns --- apiVersion: apps/v1 kind: Deployment metadata: name: external-dns namespace: external-dns spec: selector: matchLabels: app: external-dns template: metadata: labels: app: external-dns spec: containers: - name: external-dns image: registry.k8s.io/external-dns/external-dns:v0.15.0 args: - --registry=txt - --txt-prefix=external-dns- - --txt-owner-id=k8s - --provider=rfc2136 - --rfc2136-host=192.168.0.1 - --rfc2136-port=53 - --rfc2136-zone=k8s.example.org - --rfc2136-zone=k8s.your-zone.org - --rfc2136-tsig-secret=96Ah/a2g0/nLeFGK+d/0tzQcccf9hCEIy34PoXX2Qg8= - --rfc2136-tsig-secret-alg=hmac-sha256 - --rfc2136-tsig-keyname=externaldns-key - --rfc2136-tsig-axfr - --source=ingress - --domain-filter=k8s.example.org ``` ## Microsoft DNS (Insecure Updates) While `external-dns` was not developed or tested against Microsoft DNS, it can be configured to work against it. YMMV. ### Insecure Updates #### DNS-side configuration 1. Create a DNS zone 2. Enable insecure dynamic updates for the zone 3. Enable Zone Transfers to all servers #### `external-dns` configuration You'll want to configure `external-dns` similarly to the following: ```text ... - --provider=rfc2136 - --rfc2136-host=192.168.0.1 - --rfc2136-port=53 - --rfc2136-zone=k8s.example.org - --rfc2136-zone=k8s.your-zone.org - --rfc2136-insecure - --rfc2136-tsig-axfr # needed to enable zone transfers, which is required for deletion of records. ... ``` ### Secure Updates Using RFC3645 (GSS-TSIG) #### DNS-side configuration 1. Create a DNS zone 2. Enable secure dynamic updates for the zone 3. Enable Zone Transfers to all servers If you see any error messages which indicate that `external-dns` was somehow not able to fetch existing DNS records from your DNS server, this could mean that you forgot about step 3. 
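A quick way to confirm that zone transfers are actually allowed (step 3 above) is to request the zone directly from a machine that can reach the DNS server. This is only a sketch; substitute your own server and zone names (the values below are the placeholders used elsewhere in this tutorial):

```text
dig @dns-host.yourdomain.com your-zone.com AXFR
```

If the transfer is refused, `dig` will report a failed transfer instead of printing the zone contents, which matches the symptom described above.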
##### Kerberos Configuration

DNS with secure updates relies upon a valid Kerberos configuration running within the `external-dns` container. At this time, you will need to create a ConfigMap for the `external-dns` container to use and mount it in your deployment. Below is an example of a working Kerberos configuration inside a ConfigMap definition. This may be different depending on many factors in your environment:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: krb5.conf
data:
  krb5.conf: |
    [logging]
    default = FILE:/var/log/krb5libs.log
    kdc = FILE:/var/log/krb5kdc.log
    admin_server = FILE:/var/log/kadmind.log

    [libdefaults]
    dns_lookup_realm = false
    ticket_lifetime = 24h
    renew_lifetime = 7d
    forwardable = true
    rdns = false
    pkinit_anchors = /etc/pki/tls/certs/ca-bundle.crt
    default_ccache_name = KEYRING:persistent:%{uid}
    default_realm = YOUR-REALM.COM

    [realms]
    YOUR-REALM.COM = {
      kdc = dc1.yourdomain.com
      admin_server = dc1.yourdomain.com
    }

    [domain_realm]
    yourdomain.com = YOUR-REALM.COM
    .yourdomain.com = YOUR-REALM.COM
```

In most cases, the realm name will probably be the same as the domain name, so you can simply replace `YOUR-REALM.COM` with something like `YOURDOMAIN.COM`.

Once the ConfigMap is created, the `external-dns` container needs to be told to mount that ConfigMap as a volume at the default Kerberos configuration location. The pod spec should include a similar configuration to the following:

```yaml
...
  volumeMounts:
  - mountPath: /etc/krb5.conf
    name: kerberos-config-volume
    subPath: krb5.conf
...
  volumes:
  - configMap:
      defaultMode: 420
      name: krb5.conf
    name: kerberos-config-volume
...
```

##### `external-dns` configuration

You'll want to configure `external-dns` similarly to the following:

```text
...
- --provider=rfc2136
- --rfc2136-gss-tsig
- --rfc2136-host=dns-host.yourdomain.com
- --rfc2136-port=53
- --rfc2136-zone=your-zone.com
- --rfc2136-zone=your-secondary-zone.com
- --rfc2136-kerberos-username=your-domain-account
- --rfc2136-kerberos-password=your-domain-password
- --rfc2136-kerberos-realm=your-domain.com
- --rfc2136-tsig-axfr # needed to enable zone transfers, which is required for deletion of records.
...
```

Note that the `--rfc2136-kerberos-realm` flag is completely optional and won't be necessary in many cases. Most likely, you will only need it if you see errors similar to this: `KRB Error: (68) KDC_ERR_WRONG_REALM Reserved for future use`.

The flag `--rfc2136-host` can be set to the host's domain name or IP address. However, it also determines the name of the Kerberos principal which is used during authentication. This means that Active Directory might only work if this is set to a specific domain name, possibly leading to errors like this: `KDC_ERR_S_PRINCIPAL_UNKNOWN Server not found in Kerberos database`. To fix this, try setting `--rfc2136-host` to the "actual" hostname of your DNS server.

## DNS Over TLS (RFCs 7858 and 9103)

If your DNS server does zone transfers over TLS, you can instruct `external-dns` to connect over TLS with the following flags:

* `--rfc2136-use-tls` Will enable TLS for both zone transfers and for updates.
* `--tls-ca=<cert-file>` Is the path to a file containing certificate(s) that can be used to verify the DNS server.
* `--tls-client-cert=<client-cert-file>` and
* `--tls-client-cert-key=<client-key-file>` Set the client certificate and key for mutual verification.
* `--rfc2136-skip-tls-verify` Disables verification of the certificate supplied by the DNS server.
It is currently not supported to do only zone transfers over TLS, but not the updates. They are enabled and disabled together.
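As a rough illustration of how these flags fit together, the relevant container arguments might look like the snippet below. This is only a sketch, not a tested configuration: it assumes your DNS server exposes its TLS listener on port 853 and that the CA bundle has been mounted into the container at `/etc/external-dns/ca.crt` (for example from a Secret or ConfigMap; the path and file name are illustrative):

```text
...
- --provider=rfc2136
- --rfc2136-host=dns-host.yourdomain.com
- --rfc2136-port=853
- --rfc2136-zone=k8s.example.org
- --rfc2136-use-tls
- --tls-ca=/etc/external-dns/ca.crt
- --rfc2136-tsig-axfr
...
```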
# Civo DNS This tutorial describes how to setup ExternalDNS for usage within a Kubernetes cluster using Civo DNS Manager. Make sure to use **>0.13.5** version of ExternalDNS for this tutorial. ## Managing DNS with Civo If you want to learn about how to use Civo DNS Manager read the following tutorials: [An Introduction to Managing DNS](https://www.civo.com/learn/configure-dns) ## Get Civo Token Copy the token in the settings for your account The environment variable `CIVO_TOKEN` will be needed to run ExternalDNS with Civo. ## Deploy ExternalDNS Connect your `kubectl` client to the cluster you want to test ExternalDNS with. Then apply one of the following manifests file to deploy ExternalDNS. ### Manifest (for clusters without RBAC enabled) ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: external-dns spec: strategy: type: Recreate selector: matchLabels: app: external-dns template: metadata: labels: app: external-dns spec: containers: - name: external-dns image: registry.k8s.io/external-dns/external-dns:v0.15.0 args: - --source=service # ingress is also possible - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above. - --provider=civo env: - name: CIVO_TOKEN value: "YOUR_CIVO_API_TOKEN" ``` ### Manifest (for clusters with RBAC enabled) ```yaml apiVersion: v1 kind: ServiceAccount metadata: name: external-dns --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: external-dns rules: - apiGroups: [""] resources: ["services","endpoints","pods"] verbs: ["get","watch","list"] - apiGroups: ["extensions","networking.k8s.io"] resources: ["ingresses"] verbs: ["get","watch","list"] - apiGroups: [""] resources: ["nodes"] verbs: ["list"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: external-dns-viewer roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: external-dns subjects: - kind: ServiceAccount name: external-dns namespace: default --- apiVersion: apps/v1 kind: Deployment metadata: name: external-dns spec: strategy: type: Recreate selector: matchLabels: app: external-dns template: metadata: labels: app: external-dns spec: serviceAccountName: external-dns containers: - name: external-dns image: registry.k8s.io/external-dns/external-dns:v0.15.0 args: - --source=service # ingress is also possible - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above. - --provider=civo env: - name: CIVO_TOKEN value: "YOUR_CIVO_API_TOKEN" ``` ## Deploying an Nginx Service Create a service file called 'nginx.yaml' with the following contents: ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: nginx spec: selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - image: nginx name: nginx ports: - containerPort: 80 --- apiVersion: v1 kind: Service metadata: name: nginx annotations: external-dns.alpha.kubernetes.io/hostname: my-app.example.com spec: selector: app: nginx type: LoadBalancer ports: - protocol: TCP port: 80 targetPort: 80 ``` Note the annotation on the service; use the same hostname as the Civo DNS zone created above. ExternalDNS uses this annotation to determine what services should be registered with DNS. Removing the annotation will cause ExternalDNS to remove the corresponding DNS records. 
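One note before deploying the example: the ExternalDNS manifests above set `CIVO_TOKEN` as a literal value. If you prefer to keep the token out of the Deployment spec, a common alternative is to store it in a Kubernetes Secret and reference it with `valueFrom`. The sketch below assumes a Secret named `civo-token` with a key named `token`; both names are illustrative and not part of this tutorial:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: civo-token
type: Opaque
stringData:
  token: "YOUR_CIVO_API_TOKEN"
```

With that Secret in place, the `env` section of the ExternalDNS container could be changed to:

```yaml
env:
- name: CIVO_TOKEN
  valueFrom:
    secretKeyRef:
      name: civo-token
      key: token
```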
Create the deployment and service:

```console
$ kubectl create -f nginx.yaml
```

Depending on where you run your service, it can take a little while for your cloud provider to create an external IP for it.

Once the service has an external IP assigned, ExternalDNS will notice the new service IP address and synchronize the Civo DNS records.

## Verifying Civo DNS records

Check your [Civo UI](https://www.civo.com/account/dns) to view the records for your Civo DNS zone.

Click on the zone you created above (or the zone matching the domain you used).

This should show the external IP address of the service as the A record for your domain.

## Cleanup

Now that we have verified that ExternalDNS will automatically manage Civo DNS records, we can delete the tutorial's example:

```
$ kubectl delete -f nginx.yaml
$ kubectl delete -f externaldns.yaml
```
# OVHcloud This tutorial describes how to setup ExternalDNS for use within a Kubernetes cluster using OVH DNS. Make sure to use **>=0.6** version of ExternalDNS for this tutorial. ## Creating a zone with OVH DNS If you are new to OVH, we recommend you first read the following instructions for creating a zone. [Creating a zone using the OVH manager](https://docs.ovh.com/gb/en/domains/create_a_dns_zone_for_a_domain_which_is_not_registered_at_ovh/) [Creating a zone using the OVH API](https://api.ovh.com/console/) ## Creating OVH Credentials You first need to create an OVH application. Using the [OVH documentation](https://docs.ovh.com/gb/en/api/first-steps-with-ovh-api/#advanced-usage-pair-ovhcloud-apis-with-an-application_2) you will have your `Application key` and `Application secret` And you will need to generate your consumer key, here the permissions needed : - GET on `/domain/zone` - GET on `/domain/zone/*/record` - GET on `/domain/zone/*/record/*` - POST on `/domain/zone/*/record` - DELETE on `/domain/zone/*/record/*` - GET on `/domain/zone/*/soa` - POST on `/domain/zone/*/refresh` You can use the following `curl` request to generate & validated your `Consumer key` ```bash curl -XPOST -H "X-Ovh-Application: <ApplicationKey>" -H "Content-type: application/json" https://eu.api.ovh.com/1.0/auth/credential -d '{ "accessRules": [ { "method": "GET", "path": "/domain/zone" }, { "method": "GET", "path": "/domain/zone/*/soa" }, { "method": "GET", "path": "/domain/zone/*/record" }, { "method": "GET", "path": "/domain/zone/*/record/*" }, { "method": "POST", "path": "/domain/zone/*/record" }, { "method": "DELETE", "path": "/domain/zone/*/record/*" }, { "method": "POST", "path": "/domain/zone/*/refresh" } ], "redirection":"https://github.com/kubernetes-sigs/external-dns/blob/HEAD/docs/tutorials/ovh.md#creating-ovh-credentials" }' ``` ## Deploy ExternalDNS Connect your `kubectl` client to the cluster with which you want to test ExternalDNS, and then apply one of the following manifest files for deployment: ### Manifest (for clusters without RBAC enabled) ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: external-dns spec: strategy: type: Recreate selector: matchLabels: app: external-dns template: metadata: labels: app: external-dns spec: containers: - name: external-dns image: registry.k8s.io/external-dns/external-dns:v0.15.0 args: - --source=service # ingress is also possible - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above. 
- --provider=ovh env: - name: OVH_APPLICATION_KEY value: "YOUR_OVH_APPLICATION_KEY" - name: OVH_APPLICATION_SECRET value: "YOUR_OVH_APPLICATION_SECRET" - name: OVH_CONSUMER_KEY value: "YOUR_OVH_CONSUMER_KEY_AFTER_VALIDATED_LINK" ``` ### Manifest (for clusters with RBAC enabled) ```yaml apiVersion: v1 kind: ServiceAccount metadata: name: external-dns --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: external-dns rules: - apiGroups: [""] resources: ["services"] verbs: ["get","watch","list"] - apiGroups: [""] resources: ["pods"] verbs: ["get","watch","list"] - apiGroups: ["extensions","networking.k8s.io"] resources: ["ingresses"] verbs: ["get","watch","list"] - apiGroups: [""] resources: ["nodes"] verbs: ["list"] - apiGroups: [""] resources: ["endpoints"] verbs: ["get","watch","list"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: external-dns-viewer roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: external-dns subjects: - kind: ServiceAccount name: external-dns namespace: default --- apiVersion: apps/v1 kind: Deployment metadata: name: external-dns spec: strategy: type: Recreate selector: matchLabels: app: external-dns template: metadata: labels: app: external-dns spec: serviceAccountName: external-dns containers: - name: external-dns image: registry.k8s.io/external-dns/external-dns:v0.15.0 args: - --source=service # ingress is also possible - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above. - --provider=ovh env: - name: OVH_APPLICATION_KEY value: "YOUR_OVH_APPLICATION_KEY" - name: OVH_APPLICATION_SECRET value: "YOUR_OVH_APPLICATION_SECRET" - name: OVH_CONSUMER_KEY value: "YOUR_OVH_CONSUMER_KEY_AFTER_VALIDATED_LINK" ``` ## Deploying an Nginx Service Create a service file called 'nginx.yaml' with the following contents: ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: nginx spec: selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - image: nginx name: nginx ports: - containerPort: 80 --- apiVersion: v1 kind: Service metadata: name: nginx annotations: external-dns.alpha.kubernetes.io/hostname: example.com external-dns.alpha.kubernetes.io/ttl: "120" #optional spec: selector: app: nginx type: LoadBalancer ports: - protocol: TCP port: 80 targetPort: 80 ``` **A note about annotations** Verify that the annotation on the service uses the same hostname as the OVH DNS zone created above. The annotation may also be a subdomain of the DNS zone (e.g. 'www.example.com'). The TTL annotation can be used to configure the TTL on DNS records managed by ExternalDNS and is optional. If this annotation is not set, the TTL on records managed by ExternalDNS will default to 10. ExternalDNS uses the hostname annotation to determine which services should be registered with DNS. Removing the hostname annotation will cause ExternalDNS to remove the corresponding DNS records. ### Create the deployment and service ``` $ kubectl create -f nginx.yaml ``` Depending on where you run your service, it may take some time for your cloud provider to create an external IP for the service. Once an external IP is assigned, ExternalDNS detects the new service IP address and synchronizes the OVH DNS records. ## Verifying OVH DNS records Use the OVH manager or API to verify that the A record for your domain shows the external IP address of the services. 
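If you prefer to verify from the command line instead of the OVH manager, you can also query your zone's name servers directly once the record has been created. A quick check, assuming the `example.com` hostname used in the service annotation above (replace the placeholder with one of the NS records of your own zone):

```bash
# list the authoritative name servers for the zone
dig +short NS example.com

# query one of them for the record created by ExternalDNS
dig +short example.com @<one-of-your-zone-ns-records>
```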
## Cleanup Once you successfully configure and verify record management via ExternalDNS, you can delete the tutorial's example: ``` $ kubectl delete -f nginx.yaml $ kubectl delete -f externaldns.yaml ```
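**A note on credentials**: rather than putting the application and consumer keys directly into the Deployment as plain environment values (as in the manifests above), you can keep them in a Kubernetes Secret and reference them with `secretKeyRef`. This is only a sketch; the Secret name `ovh-credentials` and its key names are illustrative and not part of the tutorial above:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ovh-credentials
type: Opaque
stringData:
  applicationKey: YOUR_OVH_APPLICATION_KEY
  applicationSecret: YOUR_OVH_APPLICATION_SECRET
  consumerKey: YOUR_OVH_CONSUMER_KEY_AFTER_VALIDATED_LINK
```

The corresponding `env` entries in the Deployment would then look like this:

```yaml
env:
  - name: OVH_APPLICATION_KEY
    valueFrom:
      secretKeyRef:
        name: ovh-credentials
        key: applicationKey
  - name: OVH_APPLICATION_SECRET
    valueFrom:
      secretKeyRef:
        name: ovh-credentials
        key: applicationSecret
  - name: OVH_CONSUMER_KEY
    valueFrom:
      secretKeyRef:
        name: ovh-credentials
        key: consumerKey
```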
# Azure Private DNS This tutorial describes how to set up ExternalDNS for managing records in Azure Private DNS. It comprises the following steps: 1) Provision Azure Private DNS 2) Configure service principal for managing the zone 3) Deploy ExternalDNS 4) Expose an NGINX service with a LoadBalancer and annotate it with the desired DNS name 5) Install NGINX Ingress Controller (Optional) 6) Expose an nginx service with an ingress (Optional) 7) Verify the DNS records Everything will be deployed on Kubernetes. Therefore, please see the subsequent prerequisites. ## Prerequisites - Azure Kubernetes Service is deployed and ready - [Azure CLI 2.0](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli) and `kubectl` installed on the box to execute the subsequent steps ## Provision Azure Private DNS The provider will find suitable zones for domains it manages. It will not automatically create zones. For this tutorial, we will create an Azure resource group named 'externaldns' that can easily be deleted later. ``` $ az group create -n externaldns -l westeurope ``` Substitute a more suitable location for the resource group if desired. A prerequisite for Azure Private DNS to resolve records is to define links with VNETs. Thus, first create a VNET. ``` $ az network vnet create \ --name myvnet \ --resource-group externaldns \ --location westeurope \ --address-prefix 10.2.0.0/16 \ --subnet-name mysubnet \ --subnet-prefixes 10.2.0.0/24 ``` Next, create an Azure Private DNS zone for "example.com": ``` $ az network private-dns zone create -g externaldns -n example.com ``` Substitute a domain you own for "example.com" if desired. Finally, create the mentioned link with the VNET. ``` $ az network private-dns link vnet create -g externaldns -n mylink \ -z example.com -v myvnet --registration-enabled false ``` ## Configure service principal for managing the zone ExternalDNS needs permissions to make changes in Azure Private DNS. These permissions are roles assigned to the service principal used by ExternalDNS. A service principal with a minimum access level of `Private DNS Zone Contributor` to the Private DNS zone(s) and `Reader` to the resource group containing the Azure Private DNS zone(s) is necessary. More powerful role-assignments like `Owner` or assignments on subscription-level work too. Start off by **creating the service principal** without role-assignments. ``` $ az ad sp create-for-rbac --skip-assignment -n http://externaldns-sp { "appId": "appId GUID", <-- aadClientId value ... "password": "password", <-- aadClientSecret value "tenant": "AzureAD Tenant Id" <-- tenantId value } ``` > Note: Alternatively, you can issue `az account show --query "tenantId"` to retrieve the id of your AAD Tenant too. Next, assign the roles to the service principal. But first **retrieve the IDs** of the objects to assign roles on. ``` # find out the resource ids of the resource group where the dns zone is deployed, and the dns zone itself $ az group show --name externaldns --query id -o tsv /subscriptions/id/resourceGroups/externaldns $ az network private-dns zone show --name example.com -g externaldns --query id -o tsv /subscriptions/.../resourceGroups/externaldns/providers/Microsoft.Network/privateDnsZones/example.com ``` Now, **create role assignments**. ``` # 1. as a reader to the resource group $ az role assignment create --role "Reader" --assignee <appId GUID> --scope <resource group resource id> # 2.
as a contributor to DNS Zone itself $ az role assignment create --role "Private DNS Zone Contributor" --assignee <appId GUID> --scope <dns zone resource id> ``` ## Throttling When the ExternalDNS managed zones list doesn't change frequently, one can set `--azure-zones-cache-duration` (zones list cache time-to-live). The zones list cache is disabled by default, with a value of 0s. ## Deploy ExternalDNS Configure `kubectl` to be able to communicate and authenticate with your cluster. This is per default done through the file `~/.kube/config`. For general background information on this see [kubernetes-docs](https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/). Azure-CLI features functionality for automatically maintaining this file for AKS-Clusters. See [Azure-Docs](https://docs.microsoft.com/de-de/cli/azure/aks?view=azure-cli-latest#az-aks-get-credentials). Follow the steps for [azure-dns provider](./azure.md#creating-configuration-file) to create a configuration file. Then apply one of the following manifests depending on whether you use RBAC or not. The credentials of the service principal are provided to ExternalDNS as environment-variables. ### Manifest (for clusters without RBAC enabled) ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: externaldns spec: selector: matchLabels: app: externaldns strategy: type: Recreate template: metadata: labels: app: externaldns spec: containers: - name: externaldns image: registry.k8s.io/external-dns/external-dns:v0.15.0 args: - --source=service - --source=ingress - --domain-filter=example.com - --provider=azure-private-dns - --azure-resource-group=externaldns - --azure-subscription-id=<use the id of your subscription> volumeMounts: - name: azure-config-file mountPath: /etc/kubernetes readOnly: true volumes: - name: azure-config-file secret: secretName: azure-config-file ``` ### Manifest (for clusters with RBAC enabled, cluster access) ```yaml apiVersion: v1 kind: ServiceAccount metadata: name: externaldns --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: externaldns rules: - apiGroups: [""] resources: ["services","endpoints","pods"] verbs: ["get","watch","list"] - apiGroups: ["extensions","networking.k8s.io"] resources: ["ingresses"] verbs: ["get","watch","list"] - apiGroups: [""] resources: ["nodes"] verbs: ["get", "watch", "list"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: externaldns-viewer roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: externaldns subjects: - kind: ServiceAccount name: externaldns namespace: default --- apiVersion: apps/v1 kind: Deployment metadata: name: externaldns spec: selector: matchLabels: app: externaldns strategy: type: Recreate template: metadata: labels: app: externaldns spec: serviceAccountName: externaldns containers: - name: externaldns image: registry.k8s.io/external-dns/external-dns:v0.15.0 args: - --source=service - --source=ingress - --domain-filter=example.com - --provider=azure-private-dns - --azure-resource-group=externaldns - --azure-subscription-id=<use the id of your subscription> volumeMounts: - name: azure-config-file mountPath: /etc/kubernetes readOnly: true volumes: - name: azure-config-file secret: secretName: azure-config-file ``` ### Manifest (for clusters with RBAC enabled, namespace access) This configuration is the same as above, except it only requires privileges for the current namespace, not for the whole cluster. 
However, access to [nodes](https://kubernetes.io/docs/concepts/architecture/nodes/) requires cluster access, so when using this manifest, services with type `NodePort` will be skipped! ```yaml apiVersion: v1 kind: ServiceAccount metadata: name: externaldns --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: externaldns rules: - apiGroups: [""] resources: ["services","endpoints","pods"] verbs: ["get","watch","list"] - apiGroups: ["extensions","networking.k8s.io"] resources: ["ingresses"] verbs: ["get","watch","list"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: externaldns roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: externaldns subjects: - kind: ServiceAccount name: externaldns --- apiVersion: apps/v1 kind: Deployment metadata: name: externaldns spec: selector: matchLabels: app: externaldns strategy: type: Recreate template: metadata: labels: app: externaldns spec: serviceAccountName: externaldns containers: - name: externaldns image: registry.k8s.io/external-dns/external-dns:v0.15.0 args: - --source=service - --source=ingress - --domain-filter=example.com - --provider=azure-private-dns - --azure-resource-group=externaldns - --azure-subscription-id=<use the id of your subscription> volumeMounts: - name: azure-config-file mountPath: /etc/kubernetes readOnly: true volumes: - name: azure-config-file secret: secretName: azure-config-file ``` Create the deployment for ExternalDNS: ``` $ kubectl create -f externaldns.yaml ``` ## Create an nginx deployment This step creates a demo workload in your cluster. Apply the following manifest to create a deployment that we are going to expose later in this tutorial in multiple ways: ```yaml --- apiVersion: apps/v1 kind: Deployment metadata: name: nginx spec: selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - image: nginx name: nginx ports: - containerPort: 80 ``` ## Expose the nginx deployment with a load balancer Apply the following manifest to create a service of type `LoadBalancer`. This will create an internal load balancer in Azure that will forward traffic to the nginx pods. ```yaml --- apiVersion: v1 kind: Service metadata: name: nginx-svc annotations: service.beta.kubernetes.io/azure-load-balancer-internal: "true" external-dns.alpha.kubernetes.io/hostname: server.example.com external-dns.alpha.kubernetes.io/internal-hostname: server-clusterip.example.com spec: ports: - port: 80 protocol: TCP targetPort: 80 selector: app: nginx type: LoadBalancer ``` In the service we used multiple annotations. The annotation `service.beta.kubernetes.io/azure-load-balancer-internal` is used to create an internal load balancer. The annotation `external-dns.alpha.kubernetes.io/hostname` is used to create a DNS record for the load balancer that will point to the internal IP address in the VNET allocated by the internal load balancer. The annotation `external-dns.alpha.kubernetes.io/internal-hostname` is used to create a private DNS record for the load balancer that will point to the cluster IP. ## Install NGINX Ingress Controller (Optional) Helm is used to deploy the ingress controller. We employ the popular chart [ingress-nginx](https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx).
``` $ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx $ helm repo update $ helm install [RELEASE_NAME] ingress-nginx/ingress-nginx --set controller.publishService.enabled=true ``` The parameter `controller.publishService.enabled` needs to be set to `true`. It makes the ingress controller update the status of ingress resources to contain the external IP of the load balancer serving the ingress controller. This is crucial, as ExternalDNS reads this status when creating DNS records from ingress resources. We will make use of this in a subsequent step. If you don't plan to work with ingress resources later, you can leave the parameter out. Verify the correct propagation of the load balancer's IP by listing the ingresses. ``` $ kubectl get ingress ``` The address column should contain the IP for each ingress. ExternalDNS will pick up exactly this piece of information. ``` NAME HOSTS ADDRESS PORTS AGE nginx1 sample1.aks.com 52.167.195.110 80 6d22h nginx2 sample2.aks.com 52.167.195.110 80 6d21h ``` If you do not want to deploy the ingress controller with Helm, ensure you pass the following command-line flags to it through the mechanism of your choice: ``` flags: --publish-service=<namespace of ingress-controller>/<svcname of ingress-controller> --update-status=true (default-value) example: ./nginx-ingress-controller --publish-service=default/nginx-ingress-controller ``` ## Expose the nginx deployment with the ingress (Optional) Apply the following manifest to create an ingress resource that will expose the nginx deployment. The ingress resource backend points to a `ClusterIP` service that is needed to select the pods that will receive the traffic. ```yaml --- apiVersion: v1 kind: Service metadata: name: nginx-svc-clusterip spec: ports: - port: 80 protocol: TCP targetPort: 80 selector: app: nginx type: ClusterIP --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: nginx spec: ingressClassName: nginx rules: - host: server.example.com http: paths: - backend: service: name: nginx-svc-clusterip port: number: 80 pathType: Prefix ``` When you use ExternalDNS with Ingress resources, it automatically creates DNS records based on the hostnames listed in those Ingress objects. Those hostnames must match the filters that you defined (if any): - By default, `--domain-filter` filters the Azure Private DNS zone. - If you use `--domain-filter` together with `--zone-name-filter`, the behavior changes: `--domain-filter` then filters Ingress domains, not the Azure Private DNS zone name. When those hostnames are removed or renamed, the corresponding DNS records are also altered. Create the deployment, service and ingress object: ``` $ kubectl create -f nginx.yaml ``` Since your external IP would have already been assigned to the nginx-ingress service, the DNS records pointing to the IP of the nginx-ingress service should be created within a minute. ## Verify created records Run the following command to view the A records for your Azure Private DNS zone: ``` $ az network private-dns record-set a list -g externaldns -z example.com ``` Substitute the zone for the one created above if a different domain was used. This should show the external IP address of the service as the A record for your domain ('@' indicates the record is for the zone itself).
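Because the zone is private, these records only resolve from inside the linked VNET. A quick way to check resolution from within the AKS cluster itself (a sketch, reusing the `server.example.com` hostname annotated earlier):

```bash
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup server.example.com
```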
# Scaleway This tutorial describes how to setup ExternalDNS for usage within a Kubernetes cluster using Scaleway DNS. Make sure to use **>=0.7.4** version of ExternalDNS for this tutorial. **Warning**: Scaleway DNS is currently in Public Beta and may not be suited for production usage. ## Importing a Domain into Scaleway DNS In order to use your domain, you need to import it into Scaleway DNS. If it's not already done, you can follow [this documentation](https://www.scaleway.com/en/docs/scaleway-dns/). Once the domain is imported, you can either use the root zone or create a subzone to use. In this example, we will use `example.com`. ## Creating Scaleway Credentials To use ExternalDNS with Scaleway DNS, you need to create an API token (composed of the Access Key and the Secret Key). You can either use existing ones or you can create a new token, as explained in [How to generate an API token](https://www.scaleway.com/en/docs/generate-an-api-token/) or directly by going to the [credentials page](https://console.scaleway.com/account/organization/credentials). The Scaleway provider supports configuring credentials using profiles or supplying them directly via environment variables. ### Configuration using a config file You can supply the credentials through a config file: 1. Create the config file. Check out [Scaleway docs](https://github.com/scaleway/scaleway-sdk-go/blob/master/scw/README.md#scaleway-config) for instructions 2. Mount it as a Secret into the Pod 3. Configure environment variable `SCW_PROFILE` to match the profile name in the config file 4. Configure environment variable `SCW_CONFIG_PATH` to match the location of the mounted config file ### Configuration using environment variables Two environment variables are needed to run ExternalDNS with Scaleway DNS: - `SCW_ACCESS_KEY` which is the Access Key. - `SCW_SECRET_KEY` which is the Secret Key. ## Deploy ExternalDNS Connect your `kubectl` client to the cluster you want to test ExternalDNS with. Then apply one of the following manifest files to deploy ExternalDNS. The following examples are suited for development. For production usage, prefer secrets over environment variables, and use a [tagged release](https://github.com/kubernetes-sigs/external-dns/releases). ### Manifest (for clusters without RBAC enabled) ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: external-dns spec: replicas: 1 selector: matchLabels: app: external-dns strategy: type: Recreate template: metadata: labels: app: external-dns spec: containers: - name: external-dns image: registry.k8s.io/external-dns/external-dns:v0.15.0 args: - --source=service # ingress is also possible - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above. - --provider=scaleway env: - name: SCW_ACCESS_KEY value: "<your access key>" - name: SCW_SECRET_KEY value: "<your secret key>" ### Set if configuring using a config file. Make sure to create the Secret first.
# - name: SCW_PROFILE # value: "<profile name>" # - name: SCW_CONFIG_PATH # value: /etc/scw/config.yaml # volumeMounts: # - name: scw-config # mountPath: /etc/scw/config.yaml # readOnly: true # volumes: # - name: scw-config # secret: # secretName: scw-config ### ``` ### Manifest (for clusters with RBAC enabled) ```yaml apiVersion: v1 kind: ServiceAccount metadata: name: external-dns --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: external-dns rules: - apiGroups: [""] resources: ["services","endpoints","pods"] verbs: ["get","watch","list"] - apiGroups: ["extensions"] resources: ["ingresses"] verbs: ["get","watch","list"] - apiGroups: [""] resources: ["nodes"] verbs: ["list","watch"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: external-dns-viewer roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: external-dns subjects: - kind: ServiceAccount name: external-dns namespace: default --- apiVersion: apps/v1 kind: Deployment metadata: name: external-dns spec: replicas: 1 selector: matchLabels: app: external-dns strategy: type: Recreate template: metadata: labels: app: external-dns spec: serviceAccountName: external-dns containers: - name: external-dns image: registry.k8s.io/external-dns/external-dns:v0.15.0 args: - --source=service # ingress is also possible - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above. - --provider=scaleway env: - name: SCW_ACCESS_KEY value: "<your access key>" - name: SCW_SECRET_KEY value: "<your secret key>" ### Set if configuring using a config file. Make sure to create the Secret first. # - name: SCW_PROFILE # value: "<profile name>" # - name: SCW_CONFIG_PATH # value: /etc/scw/config.yaml # volumeMounts: # - name: scw-config # mountPath: /etc/scw/config.yaml # readOnly: true # volumes: # - name: scw-config # secret: # secretName: scw-config ### ``` ## Deploying an Nginx Service Create a service file called 'nginx.yaml' with the following contents: ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: nginx spec: replicas: 1 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - image: nginx name: nginx ports: - containerPort: 80 --- apiVersion: v1 kind: Service metadata: name: nginx annotations: external-dns.alpha.kubernetes.io/hostname: my-app.example.com spec: selector: app: nginx type: LoadBalancer ports: - protocol: TCP port: 80 targetPort: 80 ``` Note the annotation on the service; use the same hostname as the Scaleway DNS zone created above. ExternalDNS uses this annotation to determine what services should be registered with DNS. Removing the annotation will cause ExternalDNS to remove the corresponding DNS records. Create the deployment and service: ```console $ kubectl create -f nginx.yaml ``` Depending where you run your service it can take a little while for your cloud provider to create an external IP for the service. Once the service has an external IP assigned, ExternalDNS will notice the new service IP address and synchronize the Scaleway DNS records. ## Verifying Scaleway DNS records Check your [Scaleway DNS UI](https://console.scaleway.com/domains/external) to view the records for your Scaleway DNS zone. Click on the zone for the one created above if a different domain was used. This should show the external IP address of the service as the A record for your domain. 
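If you opted for the config-file approach shown (commented out) in the manifests above, the referenced `scw-config` Secret has to exist before the Pod starts. A minimal sketch, assuming your Scaleway config file is at the CLI's usual location (adjust the path if yours differs):

```console
$ kubectl create secret generic scw-config \
    --from-file=config.yaml="$HOME/.config/scw/config.yaml"
```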
## Cleanup Now that we have verified that ExternalDNS will automatically manage Scaleway DNS records, we can delete the tutorial's example: ``` $ kubectl delete -f nginx.yaml $ kubectl delete -f externaldns.yaml ```
# Linode This tutorial describes how to setup ExternalDNS for usage within a Kubernetes cluster using Linode DNS Manager. Make sure to use **>=0.5.5** version of ExternalDNS for this tutorial. ## Managing DNS with Linode If you want to learn about how to use Linode DNS Manager read the following tutorials: [An Introduction to Managing DNS](https://www.linode.com/docs/platform/manager/dns-manager/), and [general documentation](https://www.linode.com/docs/networking/dns/) ## Creating Linode Credentials Generate a new oauth token by following the instructions at [Access-and-Authentication](https://developers.linode.com/api/v4#section/Access-and-Authentication) The environment variable `LINODE_TOKEN` will be needed to run ExternalDNS with Linode. ## Deploy ExternalDNS Connect your `kubectl` client to the cluster you want to test ExternalDNS with. Then apply one of the following manifests file to deploy ExternalDNS. ### Manifest (for clusters without RBAC enabled) ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: external-dns spec: strategy: type: Recreate selector: matchLabels: app: external-dns template: metadata: labels: app: external-dns spec: containers: - name: external-dns image: registry.k8s.io/external-dns/external-dns:v0.15.0 args: - --source=service # ingress is also possible - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above. - --provider=linode env: - name: LINODE_TOKEN value: "YOUR_LINODE_API_KEY" ``` ### Manifest (for clusters with RBAC enabled) ```yaml apiVersion: v1 kind: ServiceAccount metadata: name: external-dns --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: external-dns rules: - apiGroups: [""] resources: ["services","endpoints","pods"] verbs: ["get","watch","list"] - apiGroups: ["extensions","networking.k8s.io"] resources: ["ingresses"] verbs: ["get","watch","list"] - apiGroups: [""] resources: ["nodes"] verbs: ["list"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: external-dns-viewer roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: external-dns subjects: - kind: ServiceAccount name: external-dns namespace: default --- apiVersion: apps/v1 kind: Deployment metadata: name: external-dns spec: strategy: type: Recreate selector: matchLabels: app: external-dns template: metadata: labels: app: external-dns spec: serviceAccountName: external-dns containers: - name: external-dns image: registry.k8s.io/external-dns/external-dns:v0.15.0 args: - --source=service # ingress is also possible - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above. - --provider=linode env: - name: LINODE_TOKEN value: "YOUR_LINODE_API_KEY" ``` ## Deploying an Nginx Service Create a service file called 'nginx.yaml' with the following contents: ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: nginx spec: selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - image: nginx name: nginx ports: - containerPort: 80 --- apiVersion: v1 kind: Service metadata: name: nginx annotations: external-dns.alpha.kubernetes.io/hostname: my-app.example.com spec: selector: app: nginx type: LoadBalancer ports: - protocol: TCP port: 80 targetPort: 80 ``` Note the annotation on the service; use the same hostname as the Linode DNS zone created above. ExternalDNS uses this annotation to determine what services should be registered with DNS. 
Removing the annotation will cause ExternalDNS to remove the corresponding DNS records. Create the deployment and service: ```console $ kubectl create -f nginx.yaml ``` Depending on where you run your service, it can take a little while for your cloud provider to create an external IP for the service. Once the service has an external IP assigned, ExternalDNS will notice the new service IP address and synchronize the Linode DNS records. ## Verifying Linode DNS records Check your [Linode UI](https://cloud.linode.com/domains) to view the records for your Linode DNS zone. Click on the zone for the one created above if a different domain was used. This should show the external IP address of the service as the A record for your domain. ## Cleanup Now that we have verified that ExternalDNS will automatically manage Linode DNS records, we can delete the tutorial's example: ``` $ kubectl delete -f nginx.yaml $ kubectl delete -f externaldns.yaml ```
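If you prefer the command line over the Cloud Manager UI, the same records can also be inspected through the Linode API v4. This is a sketch that assumes `LINODE_TOKEN` is exported and `jq` is installed; the numeric domain ID comes from the first call:

```bash
# List your domains and note the id of the zone used above
curl -s -H "Authorization: Bearer $LINODE_TOKEN" https://api.linode.com/v4/domains | jq '.data[] | {id, domain}'

# List the records of that zone (replace <domain-id> with the id from the previous call)
curl -s -H "Authorization: Bearer $LINODE_TOKEN" https://api.linode.com/v4/domains/<domain-id>/records | jq '.data[] | {type, name, target}'
```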
# PowerDNS ## Prerequisites The provider has been written for and tested against [PowerDNS](https://github.com/PowerDNS/pdns) v4.1.x and thus requires **PowerDNS Auth Server >= 4.1.x**. PowerDNS provider support was added via [this PR](https://github.com/kubernetes-sigs/external-dns/pull/373), thus you need to use external-dns version >= v0.5. The PDNS provider expects that your PowerDNS instance is already set up and functional. It expects that the zones you wish to add records to already exist and are configured correctly. It does not add, remove, or configure new zones in any way. ## Feature Support The PDNS provider currently does not support: * Dry running a configuration ## Deployment Deploying ExternalDNS for PowerDNS is nearly identical to deploying it for other providers. This is what a sample `deployment.yaml` looks like: ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: external-dns spec: strategy: type: Recreate selector: matchLabels: app: external-dns template: metadata: labels: app: external-dns spec: # Only use if you're also using RBAC # serviceAccountName: external-dns containers: - name: external-dns image: registry.k8s.io/external-dns/external-dns:v0.15.0 args: - --source=service # or ingress or both - --provider=pdns - --pdns-server= - --pdns-server-id= - --pdns-api-key= - --txt-owner-id= - --domain-filter=external-dns-test.my-org.com # will make ExternalDNS see only the zones matching provided domain; omit to process all available zones in PowerDNS - --log-level=debug - --interval=30s ``` #### Domain Filter (`--domain-filter`) When the `--domain-filter` argument is specified, external-dns will only create DNS records for host names (specified in ingress objects and services with the external-dns annotation) related to zones that match the `--domain-filter` argument in the external-dns deployment manifest. E.g. ```--domain-filter=example.org``` will allow for zone `example.org` and any zones in PowerDNS that end in `.example.org`, including `an.example.org`, i.e. the subdomains of example.org. E.g. ```--domain-filter=.example.org``` will allow *only* zones that end in `.example.org`, i.e. the subdomains of example.org but not the `example.org` zone itself. The filter can also match parent zones. For example, `--domain-filter=a.example.com` will allow for zone `example.com`. If you want to match parent zones, you cannot pre-pend your filter with a ".", e.g. `--domain-filter=.example.com` will not attempt to match parent zones. ### Regex Domain Filter (`--regex-domain-filter`) `--regex-domain-filter` limits the possible domains and target zones with a regex. It overrides domain filters and can be specified only once.
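For illustration, a hypothetical filter that limits ExternalDNS to zones under either `example.com` or `example.org` could be passed in the deployment args instead of `--domain-filter` (adjust the expression to your own zones):

```yaml
- --regex-domain-filter=example\.(com|org)$
```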
## RBAC

If your cluster is RBAC enabled, you also need to set up the following before you can run external-dns:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
rules:
- apiGroups: [""]
  resources: ["services","endpoints","pods"]
  verbs: ["get","watch","list"]
- apiGroups: ["extensions","networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: default
```

## Testing and Verification

**Important!**: Remember to replace `example.com` with your own domain throughout the following text.

Spin up a simple "Hello World" HTTP server with the following spec (`kubectl apply -f`):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - image: hashicorp/http-echo
        name: echo
        ports:
        - containerPort: 5678
        args:
        - -text="Hello World"
---
apiVersion: v1
kind: Service
metadata:
  name: echo
  annotations:
    external-dns.alpha.kubernetes.io/hostname: echo.example.com
spec:
  selector:
    app: echo
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5678
```

**Important!**: Don't run dig, nslookup or similar immediately (until you've confirmed the record exists). You'll get hit by [negative DNS caching](https://tools.ietf.org/html/rfc2308), which is hard to flush.

Run the following to make sure everything is in order:

```bash
$ kubectl get services echo
$ kubectl get endpoints echo
```

Make sure everything looks correct, i.e. the service is defined and receives a public IP, and that the endpoint also has a pod IP.

Once that's done, wait about 30s-1m (interval for external-dns to kick in), then do:

```bash
$ curl -H "X-API-Key: ${PDNS_API_KEY}" ${PDNS_API_URL}/api/v1/servers/localhost/zones/example.com. | jq '.rrsets[] | select(.name | contains("echo"))'
```

Once the API shows the record correctly, you can double check your record using:

```bash
$ dig @${PDNS_FQDN} echo.example.com.
```

## Using CRD source to manage DNS records in PowerDNS

[CRD source](https://github.com/kubernetes-sigs/external-dns/blob/master/docs/contributing/crd-source.md) provides a generic mechanism and declarative way to manage DNS records in PowerDNS using external-dns.

```bash
external-dns --source=crd --provider=pdns \
  --pdns-server= \
  --pdns-api-key= \
  --domain-filter=example.com \
  --managed-record-types=A \
  --managed-record-types=CNAME \
  --managed-record-types=TXT \
  --managed-record-types=MX \
  --managed-record-types=SRV
```

Not all the record types are enabled by default, so we can enable the required record types using `--managed-record-types`.
* Example for record type `A`

```yaml
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: examplearecord
spec:
  endpoints:
  - dnsName: example.com
    recordTTL: 60
    recordType: A
    targets:
    - 10.0.0.1
```

* Example for record type `CNAME`

```yaml
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: examplecnamerecord
spec:
  endpoints:
  - dnsName: test-a.example.com
    recordTTL: 300
    recordType: CNAME
    targets:
    - example.com
```

* Example for record type `TXT`

```yaml
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: exampletxtrecord
spec:
  endpoints:
  - dnsName: example.com
    recordTTL: 3600
    recordType: TXT
    targets:
    - '"v=spf1 include:spf.protection.example.com include:example.org -all"'
    - '"apple-domain-verification=XXXXXXXXXXXXX"'
```

* Example for record type `MX`

```yaml
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: examplemxrecord
spec:
  endpoints:
  - dnsName: example.com
    recordTTL: 3600
    recordType: MX
    targets:
    - "10 mailhost1.example.com"
```

* Example for record type `SRV`

```yaml
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: examplesrvrecord
spec:
  endpoints:
  - dnsName: _service._tls.example.com
    recordTTL: 180
    recordType: SRV
    targets:
    - "100 1 443 service.example.com"
```
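As a quick sanity check after applying one of the `DNSEndpoint` examples above, you can list the resources and query the PowerDNS API directly. This is only a sketch that reuses the `PDNS_API_KEY` and `PDNS_API_URL` variables from the testing section; adjust the zone, record name and namespace to your own setup:

```bash
# List DNSEndpoint resources picked up by the CRD source
kubectl get dnsendpoints.externaldns.k8s.io --namespace default

# Confirm the record exists in PowerDNS (zone and record name are illustrative)
curl -s -H "X-API-Key: ${PDNS_API_KEY}" \
  "${PDNS_API_URL}/api/v1/servers/localhost/zones/example.com." \
  | jq '.rrsets[] | select(.name | startswith("test-a"))'
```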
# Gandi

This tutorial describes how to set up ExternalDNS for usage within a Kubernetes cluster using Gandi.

Make sure to use **>=0.7.7** version of ExternalDNS for this tutorial.

## Creating a Gandi DNS zone (domain)

Create a new DNS zone where you want to create your records in. Let's use `example.com` as an example here. Make sure the zone uses LiveDNS.

## Creating Gandi Personal Access Token (PAT)

Generate a Personal Access Token on [your account](https://admin.gandi.net) (click on "User Settings") with the `Manage domain name technical configurations` permission.

The environment variable `GANDI_PAT` will be needed to run ExternalDNS with Gandi. You can also set `GANDI_KEY` if you have an old API key.

## Deploy ExternalDNS

Connect your `kubectl` client to the cluster you want to test ExternalDNS with.
Then apply one of the following manifest files to deploy ExternalDNS.

### Manifest (for clusters without RBAC enabled)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: external-dns
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      containers:
      - name: external-dns
        image: registry.k8s.io/external-dns/external-dns:v0.15.0
        args:
        - --source=service # ingress is also possible
        - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.
        - --provider=gandi
        env:
        - name: GANDI_PAT
          value: "YOUR_GANDI_PAT"
```

### Manifest (for clusters with RBAC enabled)

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
rules:
- apiGroups: [""]
  resources: ["services","endpoints","pods"]
  verbs: ["get","watch","list"]
- apiGroups: ["extensions","networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list","watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: external-dns
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: registry.k8s.io/external-dns/external-dns:v0.15.0
        args:
        - --source=service # ingress is also possible
        - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above.
        - --provider=gandi
        env:
        - name: GANDI_PAT
          value: "YOUR_GANDI_PAT"
```

## Deploying an Nginx Service

Create a service file called 'nginx.yaml' with the following contents:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    external-dns.alpha.kubernetes.io/hostname: my-app.example.com
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```

Note the annotation on the service; use the same hostname as the Gandi Domain. Make sure that your Domain is configured to use Live-DNS.

ExternalDNS uses this annotation to determine what services should be registered with DNS.
Removing the annotation will cause ExternalDNS to remove the corresponding DNS records.

Create the deployment and service:

```console
$ kubectl create -f nginx.yaml
```

Depending on where you run your service, it can take a little while for your cloud provider to create an external IP for the service.

Once the service has an external IP assigned, ExternalDNS will notice the new service IP address and synchronize the Gandi DNS records.

## Verifying Gandi DNS records

Check your [Gandi Dashboard](https://admin.gandi.net/domain) to view the records for your Gandi DNS zone.

Click on the zone for the one created above if a different domain was used.

This should show the external IP address of the service as the A record for your domain.

## Cleanup

Now that we have verified that ExternalDNS will automatically manage Gandi DNS records, we can delete the tutorial's example:

```console
$ kubectl delete -f nginx.yaml
$ kubectl delete -f externaldns.yaml
```

## Additional options

If you're using organizations to separate your domains, you can pass the organization's ID in an environment variable called `GANDI_SHARING_ID` to get access to it.
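For example, here is a minimal sketch of setting that variable on an ExternalDNS deployment that is already running (the value is a placeholder for your organization's ID from the Gandi admin panel):

```bash
# Add the sharing ID to the existing deployment; the pod is recreated with the new variable.
kubectl set env deployment/external-dns GANDI_SHARING_ID="YOUR_ORGANIZATION_ID"
```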
# Azure DNS

This tutorial describes how to set up ExternalDNS for [Azure DNS](https://azure.microsoft.com/services/dns/) with [Azure Kubernetes Service](https://docs.microsoft.com/azure/aks/).

Make sure to use **>=0.11.0** version of ExternalDNS for this tutorial.

This tutorial uses [Azure CLI 2.0](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli) for all Azure commands and assumes that the Kubernetes cluster was created via Azure Container Services and `kubectl` commands are being run on an orchestration node.

## Creating an Azure DNS zone

The Azure provider for ExternalDNS will find suitable zones for domains it manages; it will not automatically create zones.

For this tutorial, we will create an Azure resource group named `MyDnsResourceGroup` that can easily be deleted later:

```bash
$ az group create --name "MyDnsResourceGroup" --location "eastus"
```

Substitute a more suitable location for the resource group if desired.

Next, create an Azure DNS zone for `example.com`:

```bash
$ az network dns zone create --resource-group "MyDnsResourceGroup" --name "example.com"
```

Substitute a domain you own for `example.com` if desired.

If using your own domain that was registered with a third-party domain registrar, you should point your domain's name servers to the values in the `nameServers` field from the JSON data returned by the `az network dns zone create` command. Please consult your registrar's documentation on how to do that.

### Internal Load Balancer

To create internal load balancers, one can set the annotation `service.beta.kubernetes.io/azure-load-balancer-internal` to `true` on the resource.

**Note**: The AKS cluster's control plane managed identity needs to be granted the `Network Contributor` role to update the subnet. For more details, refer to [Use an internal load balancer with Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/internal-lb).

## Configuration file

The Azure provider will reference a configuration file called `azure.json`. The preferred way to inject the configuration file is by using a Kubernetes secret. The secret should contain an object named `azure.json` with content similar to this:

```json
{
  "tenantId": "01234abc-de56-ff78-abc1-234567890def",
  "subscriptionId": "01234abc-de56-ff78-abc1-234567890def",
  "resourceGroup": "MyDnsResourceGroup",
  "aadClientId": "01234abc-de56-ff78-abc1-234567890def",
  "aadClientSecret": "uKiuXeiwui4jo9quae9o"
}
```

The following fields are used:

* `tenantId` (**required**) - run `az account show --query "tenantId"`, or select Azure Active Directory in the Azure Portal and check the _Directory ID_ under Properties.
* `subscriptionId` (**required**) - run `az account show --query "id"`, or select Subscriptions in the Azure Portal.
* `resourceGroup` (**required**) is the Resource Group created in a previous step that contains the Azure DNS Zone.
* `aadClientId` is associated with the Service Principal. This is used with the Service Principal or Workload Identity methods documented in the next section.
* `aadClientSecret` is associated with the Service Principal. This is only used with the Service Principal method documented in the next section.
* `useManagedIdentityExtension` - this is set to `true` if you use either the AKS Kubelet Identity or AAD Pod Identities methods documented in the next section.
* `userAssignedIdentityID` - this contains the client id from the Managed identity when using the AAD Pod Identities method documented in the next section.
* `activeDirectoryAuthorityHost` - this contains the URI to overwrite the default provided AAD endpoint. This is useful for providing additional support where the endpoint is not available in the default cloud config from the [azure-sdk-for-go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azcore/cloud#pkg-variables).
* `useWorkloadIdentityExtension` - this is set to `true` if you use the Workload Identity method documented in the next section.

The Azure DNS provider expects, by default, that the configuration file is at `/etc/kubernetes/azure.json`. This can be overridden with the `--azure-config-file` option when starting ExternalDNS.

## Permissions to modify DNS zone

ExternalDNS needs permissions to make changes to the Azure DNS zone. There are four ways to configure the needed access:

- [Service Principal](#service-principal)
- [Managed Identity Using AKS Kubelet Identity](#managed-identity-using-aks-kubelet-identity)
- [Managed Identity Using AAD Pod Identities](#managed-identity-using-aad-pod-identities)
- [Managed Identity Using Workload Identity](#managed-identity-using-workload-identity)

### Service Principal

These permissions are defined in a Service Principal that should be made available to ExternalDNS as a configuration file `azure.json`.

#### Creating a service principal

A Service Principal with a minimum access level of `DNS Zone Contributor` or `Contributor` to the DNS zone(s) and `Reader` to the resource group containing the Azure DNS zone(s) is necessary for ExternalDNS to be able to edit DNS records. However, other more permissive access levels will work too (e.g. `Contributor` to the resource group or the whole subscription).

This is an Azure CLI example of how to query the Azure API for the information required for the Resource Group and DNS zone you would have already created in previous steps (requires `azure-cli` and `jq`):

```bash
$ EXTERNALDNS_NEW_SP_NAME="ExternalDnsServicePrincipal" # name of the service principal
$ AZURE_DNS_ZONE_RESOURCE_GROUP="MyDnsResourceGroup" # name of resource group where dns zone is hosted
$ AZURE_DNS_ZONE="example.com" # DNS zone name like example.com or sub.example.com

# Create the service principal
$ DNS_SP=$(az ad sp create-for-rbac --name $EXTERNALDNS_NEW_SP_NAME)
$ EXTERNALDNS_SP_APP_ID=$(echo $DNS_SP | jq -r '.appId')
$ EXTERNALDNS_SP_PASSWORD=$(echo $DNS_SP | jq -r '.password')
```

#### Assign the rights for the service principal

Grant access to the Azure DNS zone for the service principal.

```bash
# fetch DNS zone id and resource group id used to grant access to the service principal
DNS_ID=$(az network dns zone show --name $AZURE_DNS_ZONE \
  --resource-group $AZURE_DNS_ZONE_RESOURCE_GROUP --query "id" --output tsv)
RESOURCE_GROUP_ID=$(az group show --name $AZURE_DNS_ZONE_RESOURCE_GROUP --query "id" --output tsv)

# 1. as a reader to the resource group
$ az role assignment create --role "Reader" --assignee $EXTERNALDNS_SP_APP_ID --scope $RESOURCE_GROUP_ID

# 2. as a contributor to the DNS Zone itself
$ az role assignment create --role "Contributor" --assignee $EXTERNALDNS_SP_APP_ID --scope $DNS_ID
```

#### Creating a configuration file for the service principal

Create the file `azure.json` with the values gathered in the previous steps.
```bash
cat <<-EOF > /local/path/to/azure.json
{
  "tenantId": "$(az account show --query tenantId -o tsv)",
  "subscriptionId": "$(az account show --query id -o tsv)",
  "resourceGroup": "$AZURE_DNS_ZONE_RESOURCE_GROUP",
  "aadClientId": "$EXTERNALDNS_SP_APP_ID",
  "aadClientSecret": "$EXTERNALDNS_SP_PASSWORD"
}
EOF
```

Use this file to create a Kubernetes secret:

```bash
$ kubectl create secret generic azure-config-file --namespace "default" --from-file /local/path/to/azure.json
```

### Managed identity using AKS Kubelet identity

The [managed identity](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview) that is assigned to the underlying node pool in the AKS cluster can be given permissions to access Azure DNS. Managed identities are essentially service principals whose lifecycle is managed for you; for example, deleting the AKS cluster also deletes the service principals associated with it. The managed identity assigned to the Kubernetes node pool, or more specifically to the [VMSS](https://docs.microsoft.com/azure/virtual-machine-scale-sets/overview), is called the Kubelet identity.

The managed identities were previously called MSI (Managed Service Identity) and are enabled by default when creating an AKS cluster.

Note that permissions granted to this identity will be accessible to all containers running inside the Kubernetes cluster, not just the ExternalDNS container(s).

For the managed identity, the contents of `azure.json` should be similar to this:

```json
{
  "tenantId": "01234abc-de56-ff78-abc1-234567890def",
  "subscriptionId": "01234abc-de56-ff78-abc1-234567890def",
  "resourceGroup": "MyDnsResourceGroup",
  "useManagedIdentityExtension": true,
  "userAssignedIdentityID": "01234abc-de56-ff78-abc1-234567890def"
}
```

#### Fetching the Kubelet identity

For this process, you will need to get the kubelet identity:

```bash
$ PRINCIPAL_ID=$(az aks show --resource-group $CLUSTER_GROUP --name $CLUSTERNAME \
  --query "identityProfile.kubeletidentity.objectId" --output tsv)
$ IDENTITY_CLIENT_ID=$(az aks show --resource-group $CLUSTER_GROUP --name $CLUSTERNAME \
  --query "identityProfile.kubeletidentity.clientId" --output tsv)
```

#### Assign rights for the Kubelet identity

Grant access to the Azure DNS zone for the kubelet identity.

```bash
$ AZURE_DNS_ZONE="example.com" # DNS zone name like example.com or sub.example.com
$ AZURE_DNS_ZONE_RESOURCE_GROUP="MyDnsResourceGroup" # resource group where DNS zone is hosted

# fetch DNS id used to grant access to the kubelet identity
$ DNS_ID=$(az network dns zone show --name $AZURE_DNS_ZONE \
  --resource-group $AZURE_DNS_ZONE_RESOURCE_GROUP --query "id" --output tsv)

$ az role assignment create --role "DNS Zone Contributor" --assignee $PRINCIPAL_ID --scope $DNS_ID
```

#### Creating a configuration file for the kubelet identity

Create the file `azure.json` with the values gathered in the previous steps.
```bash
cat <<-EOF > /local/path/to/azure.json
{
  "tenantId": "$(az account show --query tenantId -o tsv)",
  "subscriptionId": "$(az account show --query id -o tsv)",
  "resourceGroup": "$AZURE_DNS_ZONE_RESOURCE_GROUP",
  "useManagedIdentityExtension": true,
  "userAssignedIdentityID": "$IDENTITY_CLIENT_ID"
}
EOF
```

Use the `azure.json` file to create a Kubernetes secret:

```bash
$ kubectl create secret generic azure-config-file --namespace "default" --from-file /local/path/to/azure.json
```

### Managed identity using AAD Pod Identities

For this process, we will create a [managed identity](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview) that will be explicitly used by the ExternalDNS container. This process is similar to the Kubelet identity, except that this managed identity is not associated with the Kubernetes node pool but rather with the ExternalDNS containers explicitly.

#### Enable the AAD Pod Identities feature

For this solution, the [AAD Pod Identities](https://docs.microsoft.com/azure/aks/use-azure-ad-pod-identity) preview feature needs to be enabled. The commands below enable this feature:

```bash
$ az feature register --name EnablePodIdentityPreview --namespace Microsoft.ContainerService
$ az feature register --name AutoUpgradePreview --namespace Microsoft.ContainerService
$ az extension add --name aks-preview
$ az extension update --name aks-preview
$ az provider register --namespace Microsoft.ContainerService
```

#### Deploy the AAD Pod Identities service

Once enabled, you can update your cluster and install the needed services for the [AAD Pod Identities](https://docs.microsoft.com/azure/aks/use-azure-ad-pod-identity) feature.

```bash
$ AZURE_AKS_RESOURCE_GROUP="my-aks-cluster-group" # name of resource group where aks cluster was created
$ AZURE_AKS_CLUSTER_NAME="my-aks-cluster" # name of aks cluster previously created

$ az aks update --resource-group ${AZURE_AKS_RESOURCE_GROUP} --name ${AZURE_AKS_CLUSTER_NAME} --enable-pod-identity
```

Note that if you use the default network plugin `kubenet`, you need to add the command-line option `--enable-pod-identity-with-kubenet` to the above command.

#### Creating the managed identity

After this process is finished, create a managed identity.

```bash
$ IDENTITY_RESOURCE_GROUP=$AZURE_AKS_RESOURCE_GROUP # custom group or reuse AKS group
$ IDENTITY_NAME="example-com-identity"

# create a managed identity
$ az identity create --resource-group "${IDENTITY_RESOURCE_GROUP}" --name "${IDENTITY_NAME}"
```

#### Assign rights for the managed identity

Grant access to the Azure DNS zone for the managed identity.
```bash
$ AZURE_DNS_ZONE_RESOURCE_GROUP="MyDnsResourceGroup" # name of resource group where dns zone is hosted
$ AZURE_DNS_ZONE="example.com" # DNS zone name like example.com or sub.example.com

# fetch identity client id from managed identity created earlier
$ IDENTITY_CLIENT_ID=$(az identity show --resource-group "${IDENTITY_RESOURCE_GROUP}" \
  --name "${IDENTITY_NAME}" --query "clientId" --output tsv)

# fetch DNS id used to grant access to the managed identity
$ DNS_ID=$(az network dns zone show --name "${AZURE_DNS_ZONE}" \
  --resource-group "${AZURE_DNS_ZONE_RESOURCE_GROUP}" --query "id" --output tsv)

$ az role assignment create --role "DNS Zone Contributor" \
  --assignee "${IDENTITY_CLIENT_ID}" --scope "${DNS_ID}"
```

#### Creating a configuration file for the managed identity

Create the file `azure.json` with the values from previous steps:

```bash
cat <<-EOF > /local/path/to/azure.json
{
  "tenantId": "$(az account show --query tenantId -o tsv)",
  "subscriptionId": "$(az account show --query id -o tsv)",
  "resourceGroup": "$AZURE_DNS_ZONE_RESOURCE_GROUP",
  "useManagedIdentityExtension": true,
  "userAssignedIdentityID": "$IDENTITY_CLIENT_ID"
}
EOF
```

Use the `azure.json` file to create a Kubernetes secret:

```bash
$ kubectl create secret generic azure-config-file --namespace "default" --from-file /local/path/to/azure.json
```

#### Creating an Azure identity binding

A binding between the managed identity and the ExternalDNS pods needs to be set up by creating `AzureIdentity` and `AzureIdentityBinding` resources. This will allow appropriately labeled ExternalDNS pods to authenticate using the managed identity.

When the AAD Pod Identity feature has been enabled in the previous steps, the `az aks pod-identity add` command can be used to create these resources:

```bash
$ IDENTITY_RESOURCE_ID=$(az identity show --resource-group ${IDENTITY_RESOURCE_GROUP} \
  --name ${IDENTITY_NAME} --query id --output tsv)

$ az aks pod-identity add --resource-group ${AZURE_AKS_RESOURCE_GROUP} \
  --cluster-name ${AZURE_AKS_CLUSTER_NAME} --namespace "default" \
  --name "external-dns" --identity-resource-id ${IDENTITY_RESOURCE_ID}
```

This will add something similar to the following resources:

```yaml
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentity
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.azure.com/managedby: aks
  name: external-dns
spec:
  clientID: $IDENTITY_CLIENT_ID
  resourceID: $IDENTITY_RESOURCE_ID
  type: 0
---
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentityBinding
metadata:
  annotations:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.azure.com/managedby: aks
  name: external-dns-binding
spec:
  azureIdentity: external-dns
  selector: external-dns
```

#### Update ExternalDNS labels

When deploying ExternalDNS, you want to make sure that the deployed pod(s) have the label `aadpodidbinding: external-dns` to enable AAD Pod Identities. You can patch an existing deployment of ExternalDNS with this command:

```bash
kubectl patch deployment external-dns --namespace "default" --patch \
 '{"spec": {"template": {"metadata": {"labels": {"aadpodidbinding": "external-dns"}}}}}'
```

### Managed identity using Workload Identity

For this process, we will create a [managed identity](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview) that will be explicitly used by the ExternalDNS container. This process is somewhat similar to Pod Identity, except that this managed identity is associated with a Kubernetes service account.
#### Deploy OIDC issuer and Workload Identity services

Update your cluster to install [OIDC Issuer](https://learn.microsoft.com/en-us/azure/aks/use-oidc-issuer) and [Workload Identity](https://learn.microsoft.com/en-us/azure/aks/workload-identity-deploy-cluster):

```bash
$ AZURE_AKS_RESOURCE_GROUP="my-aks-cluster-group" # name of resource group where aks cluster was created
$ AZURE_AKS_CLUSTER_NAME="my-aks-cluster" # name of aks cluster previously created

$ az aks update --resource-group ${AZURE_AKS_RESOURCE_GROUP} --name ${AZURE_AKS_CLUSTER_NAME} --enable-oidc-issuer --enable-workload-identity
```

#### Create a managed identity

Create a managed identity:

```bash
$ IDENTITY_RESOURCE_GROUP=$AZURE_AKS_RESOURCE_GROUP # custom group or reuse AKS group
$ IDENTITY_NAME="example-com-identity"

# create a managed identity
$ az identity create --resource-group "${IDENTITY_RESOURCE_GROUP}" --name "${IDENTITY_NAME}"
```

#### Assign a role to the managed identity

Grant access to the Azure DNS zone for the managed identity:

```bash
$ AZURE_DNS_ZONE_RESOURCE_GROUP="MyDnsResourceGroup" # name of resource group where dns zone is hosted
$ AZURE_DNS_ZONE="example.com" # DNS zone name like example.com or sub.example.com

# fetch identity client id from managed identity created earlier
$ IDENTITY_CLIENT_ID=$(az identity show --resource-group "${IDENTITY_RESOURCE_GROUP}" \
  --name "${IDENTITY_NAME}" --query "clientId" --output tsv)

# fetch DNS id used to grant access to the managed identity
$ DNS_ID=$(az network dns zone show --name "${AZURE_DNS_ZONE}" \
  --resource-group "${AZURE_DNS_ZONE_RESOURCE_GROUP}" --query "id" --output tsv)
$ RESOURCE_GROUP_ID=$(az group show --name "${AZURE_DNS_ZONE_RESOURCE_GROUP}" --query "id" --output tsv)

$ az role assignment create --role "DNS Zone Contributor" \
  --assignee "${IDENTITY_CLIENT_ID}" --scope "${DNS_ID}"

$ az role assignment create --role "Reader" \
  --assignee "${IDENTITY_CLIENT_ID}" --scope "${RESOURCE_GROUP_ID}"
```

#### Create a federated identity credential

A binding between the managed identity and the ExternalDNS service account needs to be set up by creating a federated identity resource:

```bash
$ OIDC_ISSUER_URL="$(az aks show --name "${AZURE_AKS_CLUSTER_NAME}" --resource-group "${AZURE_AKS_RESOURCE_GROUP}" --query "oidcIssuerProfile.issuerUrl" -o tsv)"

$ az identity federated-credential create --name ${IDENTITY_NAME} --identity-name ${IDENTITY_NAME} --resource-group "${AZURE_AKS_RESOURCE_GROUP}" --issuer "$OIDC_ISSUER_URL" --subject "system:serviceaccount:default:external-dns"
```

NOTE: make sure the federated credential refers to the correct namespace and service account (`system:serviceaccount:<NAMESPACE>:<SERVICE_ACCOUNT>`).

#### Helm

When deploying external-dns with Helm, you need to create a secret to store the Azure config (see below) and create a workload identity (out of scope here) before you can install the chart.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: external-dns-azure
type: Opaque
stringData:
  azure.json: |
    {
      "tenantId": "<TENANT_ID>",
      "subscriptionId": "<SUBSCRIPTION_ID>",
      "resourceGroup": "<AZURE_DNS_ZONE_RESOURCE_GROUP>",
      "useWorkloadIdentityExtension": true
    }
```

Once you have created the secret and have a workload identity you can install the chart with the following values.
```yaml
fullnameOverride: external-dns
serviceAccount:
  labels:
    azure.workload.identity/use: "true"
  annotations:
    azure.workload.identity/client-id: <IDENTITY_CLIENT_ID>
podLabels:
  azure.workload.identity/use: "true"
extraVolumes:
  - name: azure-config-file
    secret:
      secretName: external-dns-azure
extraVolumeMounts:
  - name: azure-config-file
    mountPath: /etc/kubernetes
    readOnly: true
provider:
  name: azure
```

NOTE: make sure the pod is restarted whenever you make a configuration change.

#### kubectl (alternative)

##### Create a configuration file for the managed identity

Create the file `azure.json` with the values from previous steps:

```bash
cat <<-EOF > /local/path/to/azure.json
{
  "subscriptionId": "$(az account show --query id -o tsv)",
  "resourceGroup": "$AZURE_DNS_ZONE_RESOURCE_GROUP",
  "useWorkloadIdentityExtension": true
}
EOF
```

NOTE: it's also possible to specify (or override) the ClientID used in the next section through the `aadClientId` field in this `azure.json` file.

Use the `azure.json` file to create a Kubernetes secret:

```bash
$ kubectl create secret generic azure-config-file --namespace "default" --from-file /local/path/to/azure.json
```

##### Update labels and annotations on ExternalDNS service account

To instruct the Workload Identity webhook to inject a projected token into the ExternalDNS pod, the pod needs to have the label `azure.workload.identity/use: "true"` (before Workload Identity 1.0.0, this label was supposed to be set on the service account instead). Also, the service account needs to have the annotation `azure.workload.identity/client-id: <IDENTITY_CLIENT_ID>`.

To patch the existing serviceaccount and deployment, use the following commands:

```bash
$ kubectl patch serviceaccount external-dns --namespace "default" --patch \
 "{\"metadata\": {\"annotations\": {\"azure.workload.identity/client-id\": \"${IDENTITY_CLIENT_ID}\"}}}"

$ kubectl patch deployment external-dns --namespace "default" --patch \
 '{"spec": {"template": {"metadata": {"labels": {"azure.workload.identity/use": "true"}}}}}'
```

NOTE: it's also possible to specify (or override) the ClientID through the `aadClientId` field in `azure.json`.

NOTE: make sure the pod is restarted whenever you make a configuration change.

## Throttling

When the ExternalDNS managed zones list doesn't change frequently, one can set `--azure-zones-cache-duration` (zones list cache time-to-live). The zones list cache is disabled by default, with a value of 0s.

## Ingress used with ExternalDNS

This deployment assumes that you will be using nginx-ingress. When using nginx-ingress, do not deploy it as a DaemonSet: doing so causes nginx-ingress to write the Cluster IP of the backend pods into the ingress `status.loadbalancer.ip` property, which makes external-dns publish the Cluster IP(s) in DNS instead of the external IP of the nginx-ingress service. Ensure that your nginx-ingress deployment has the following arg added to it:

```
- --publish-service=namespace/nginx-ingress-controller-svcname
```

For more details see here: [nginx-ingress external-dns](https://github.com/kubernetes-sigs/external-dns/blob/HEAD/docs/faq.md#why-is-externaldns-only-adding-a-single-ip-address-in-route-53-on-aws-when-using-the-nginx-ingress-controller-how-do-i-get-it-to-use-the-fqdn-of-the-elb-assigned-to-my-nginx-ingress-controller-service-instead)

## Deploy ExternalDNS

Connect your `kubectl` client to the cluster you want to test ExternalDNS with.
Then apply one of the following manifest files to deploy ExternalDNS.
The deployment assumes that ExternalDNS will be installed into the `default` namespace. If this namespace is different, the `ClusterRoleBinding` will need to be updated to reflect the desired alternative namespace, such as `external-dns`, `kube-addons`, etc. ### Manifest (for clusters without RBAC enabled) ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: external-dns spec: strategy: type: Recreate selector: matchLabels: app: external-dns template: metadata: labels: app: external-dns spec: containers: - name: external-dns image: registry.k8s.io/external-dns/external-dns:v0.15.0 args: - --source=service - --source=ingress - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above. - --provider=azure - --azure-resource-group=MyDnsResourceGroup # (optional) use the DNS zones from the tutorial's resource group volumeMounts: - name: azure-config-file mountPath: /etc/kubernetes readOnly: true volumes: - name: azure-config-file secret: secretName: azure-config-file ``` ### Manifest (for clusters with RBAC enabled, cluster access) ```yaml apiVersion: v1 kind: ServiceAccount metadata: name: external-dns --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: external-dns rules: - apiGroups: [""] resources: ["services","endpoints","pods", "nodes"] verbs: ["get","watch","list"] - apiGroups: ["extensions","networking.k8s.io"] resources: ["ingresses"] verbs: ["get","watch","list"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: external-dns-viewer roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: external-dns subjects: - kind: ServiceAccount name: external-dns namespace: default --- apiVersion: apps/v1 kind: Deployment metadata: name: external-dns spec: strategy: type: Recreate selector: matchLabels: app: external-dns template: metadata: labels: app: external-dns spec: serviceAccountName: external-dns containers: - name: external-dns image: registry.k8s.io/external-dns/external-dns:v0.15.0 args: - --source=service - --source=ingress - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above. - --provider=azure - --azure-resource-group=MyDnsResourceGroup # (optional) use the DNS zones from the tutorial's resource group - --txt-prefix=externaldns- volumeMounts: - name: azure-config-file mountPath: /etc/kubernetes readOnly: true volumes: - name: azure-config-file secret: secretName: azure-config-file ``` ### Manifest (for clusters with RBAC enabled, namespace access) This configuration is the same as above, except it only requires privileges for the current namespace, not for the whole cluster. However, access to [nodes](https://kubernetes.io/docs/concepts/architecture/nodes/) requires cluster access, so when using this manifest, services with type `NodePort` will be skipped! 
```yaml apiVersion: v1 kind: ServiceAccount metadata: name: external-dns --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: external-dns rules: - apiGroups: [""] resources: ["services","endpoints","pods"] verbs: ["get","watch","list"] - apiGroups: ["extensions","networking.k8s.io"] resources: ["ingresses"] verbs: ["get","watch","list"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: external-dns roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: external-dns subjects: - kind: ServiceAccount name: external-dns --- apiVersion: apps/v1 kind: Deployment metadata: name: external-dns spec: strategy: type: Recreate selector: matchLabels: app: external-dns template: metadata: labels: app: external-dns spec: serviceAccountName: external-dns containers: - name: external-dns image: registry.k8s.io/external-dns/external-dns:v0.15.0 args: - --source=service - --source=ingress - --domain-filter=example.com # (optional) limit to only example.com domains; change to match the zone created above. - --provider=azure - --azure-resource-group=MyDnsResourceGroup # (optional) use the DNS zones from the tutorial's resource group volumeMounts: - name: azure-config-file mountPath: /etc/kubernetes readOnly: true volumes: - name: azure-config-file secret: secretName: azure-config-file ``` Create the deployment for ExternalDNS: ```bash $ kubectl create --namespace "default" --filename externaldns.yaml ``` ## Ingress Option: Expose an nginx service with an ingress Create a file called `nginx.yaml` with the following contents: ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: nginx spec: selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - image: nginx name: nginx ports: - containerPort: 80 --- apiVersion: v1 kind: Service metadata: name: nginx-svc spec: ports: - port: 80 protocol: TCP targetPort: 80 selector: app: nginx type: ClusterIP --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: nginx spec: ingressClassName: nginx rules: - host: server.example.com http: paths: - path: / pathType: Prefix backend: service: name: nginx-svc port: number: 80 ``` When you use ExternalDNS with Ingress resources, it automatically creates DNS records based on the hostnames listed in those Ingress objects. Those hostnames must match the filters that you defined (if any): - By default, `--domain-filter` filters Azure DNS zone. - If you use `--domain-filter` together with `--zone-name-filter`, the behavior changes: `--domain-filter` then filters Ingress domains, not the Azure DNS zone name. When those hostnames are removed or renamed the corresponding DNS records are also altered. Create the deployment, service and ingress object: ```bash $ kubectl create --namespace "default" --filename nginx.yaml ``` Since your external IP would have already been assigned to the nginx-ingress service, the DNS records pointing to the IP of the nginx-ingress service should be created within a minute. 
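If you want to spot-check the record for the ingress host before moving on, you can query the zone directly. This is only a sketch; substitute your own resource group, zone and record name (the record for `server.example.com` in the `example.com` zone is named `server`):

```bash
# Show the A record that external-dns created for server.example.com
az network dns record-set a show \
  --resource-group "MyDnsResourceGroup" \
  --zone-name "example.com" \
  --name "server"
```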
## Azure Load Balancer option: Expose an nginx service with a load balancer Create a file called `nginx.yaml` with the following contents: ```yaml --- apiVersion: apps/v1 kind: Deployment metadata: name: nginx spec: selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - image: nginx name: nginx ports: - containerPort: 80 --- apiVersion: v1 kind: Service metadata: name: nginx-svc annotations: external-dns.alpha.kubernetes.io/hostname: server.example.com spec: ports: - port: 80 protocol: TCP targetPort: 80 selector: app: nginx type: LoadBalancer ``` The annotation `external-dns.alpha.kubernetes.io/hostname` is used to specify the DNS name that should be created for the service. The annotation value is a comma separated list of host names. ## Verifying Azure DNS records Run the following command to view the A records for your Azure DNS zone: ```bash $ az network dns record-set a list --resource-group "MyDnsResourceGroup" --zone-name example.com ``` Substitute the zone for the one created above if a different domain was used. This should show the external IP address of the service as the A record for your domain ('@' indicates the record is for the zone itself). ## Delete Azure Resource Group Now that we have verified that ExternalDNS will automatically manage Azure DNS records, we can delete the tutorial's resource group: ```bash $ az group delete --name "MyDnsResourceGroup" ``` ## More tutorials A video explanation is available here: https://www.youtube.com/watch?v=VSn6DPKIhM8&list=PLpbcUe4chE79sB7Jg7B4z3HytqUUEwcNE ![image](https://user-images.githubusercontent.com/6548359/235437721-87611869-75f2-4f32-bb35-9da585e46299.png)
container This process is somewhat similar to Pod Identity except that this managed identity is associated with a kubernetes service account Deploy OIDC issuer and Workload Identity services Update your cluster to install OIDC Issuer https learn microsoft com en us azure aks use oidc issuer and Workload Identity https learn microsoft com en us azure aks workload identity deploy cluster bash AZURE AKS RESOURCE GROUP my aks cluster group name of resource group where aks cluster was created AZURE AKS CLUSTER NAME my aks cluster name of aks cluster previously created az aks update resource group AZURE AKS RESOURCE GROUP name AZURE AKS CLUSTER NAME enable oidc issuer enable workload identity Create a managed identity Create a managed identity bash IDENTITY RESOURCE GROUP AZURE AKS RESOURCE GROUP custom group or reuse AKS group IDENTITY NAME example com identity create a managed identity az identity create resource group IDENTITY RESOURCE GROUP name IDENTITY NAME Assign a role to the managed identity Grant access to Azure DNS zone for the managed identity bash AZURE DNS ZONE RESOURCE GROUP MyDnsResourceGroup name of resource group where dns zone is hosted AZURE DNS ZONE example com DNS zone name like example com or sub example com fetch identity client id from managed identity created earlier IDENTITY CLIENT ID az identity show resource group IDENTITY RESOURCE GROUP name IDENTITY NAME query clientId output tsv fetch DNS id used to grant access to the managed identity DNS ID az network dns zone show name AZURE DNS ZONE resource group AZURE DNS ZONE RESOURCE GROUP query id output tsv RESOURCE GROUP ID az group show name AZURE DNS ZONE RESOURCE GROUP query id output tsv az role assignment create role DNS Zone Contributor assignee IDENTITY CLIENT ID scope DNS ID az role assignment create role Reader assignee IDENTITY CLIENT ID scope RESOURCE GROUP ID Create a federated identity credential A binding between the managed identity and the ExternalDNS service account needs to be setup by creating a federated identity resource bash OIDC ISSUER URL az aks show n myAKSCluster g myResourceGroup query oidcIssuerProfile issuerUrl otsv az identity federated credential create name IDENTITY NAME identity name IDENTITY NAME resource group AZURE AKS RESOURCE GROUP issuer OIDC ISSUER URL subject system serviceaccount default external dns NOTE make sure federated credential refers to correct namespace and service account system serviceaccount NAMESPACE SERVICE ACCOUNT Helm When deploying external dns with Helm you need to create a secret to store the Azure config see below and create a workload identity out of scope here before you can install the chart yaml apiVersion v1 kind Secret metadata name external dns azure type Opaque data azure json tenantId TENANT ID subscriptionId SUBSCRIPTION ID resourceGroup AZURE DNS ZONE RESOURCE GROUP useWorkloadIdentityExtension true Once you have created the secret and have a workload identity you can install the chart with the following values yaml fullnameOverride external dns serviceAccount labels azure workload identity use true annotations azure workload identity client id IDENTITY CLIENT ID podLabels azure workload identity use true extraVolumes name azure config file secret secretName external dns azure extraVolumeMounts name azure config file mountPath etc kubernetes readOnly true provider name azure NOTE make sure the pod is restarted whenever you make a configuration change kubectl alternative Create a configuration file for the managed identity Create the file azure 
json with the values from previous steps bash cat EOF local path to azure json subscriptionId az account show query id o tsv resourceGroup AZURE DNS ZONE RESOURCE GROUP useWorkloadIdentityExtension true EOF NOTE it s also possible to specify or override ClientID specified in the next section through aadClientId field in this azure json file Use the azure json file to create a Kubernetes secret bash kubectl create secret generic azure config file namespace default from file local path to azure json Update labels and annotations on ExternalDNS service account To instruct Workload Identity webhook to inject a projected token into the ExternalDNS pod the pod needs to have a label azure workload identity use true before Workload Identity 1 0 0 this label was supposed to be set on the service account instead Also the service account needs to have an annotation azure workload identity client id IDENTITY CLIENT ID To patch the existing serviceaccount and deployment use the following command bash kubectl patch serviceaccount external dns namespace default patch metadata annotations azure workload identity client id IDENTITY CLIENT ID kubectl patch deployment external dns namespace default patch spec template metadata labels azure workload identity use true NOTE it s also possible to specify or override ClientID through aadClientId field in azure json NOTE make sure the pod is restarted whenever you make a configuration change Throttling When the ExternalDNS managed zones list doesn t change frequently one can set azure zones cache duration zones list cache time to live The zones list cache is disabled by default with a value of 0s Ingress used with ExternalDNS This deployment assumes that you will be using nginx ingress When using nginx ingress do not deploy it as a Daemon Set This causes nginx ingress to write the Cluster IP of the backend pods in the ingress status loadbalancer ip property which then has external dns write the Cluster IP s in DNS vs the nginx ingress service external IP Ensure that your nginx ingress deployment has the following arg added to it publish service namespace nginx ingress controller svcname For more details see here nginx ingress external dns https github com kubernetes sigs external dns blob HEAD docs faq md why is externaldns only adding a single ip address in route 53 on aws when using the nginx ingress controller how do i get it to use the fqdn of the elb assigned to my nginx ingress controller service instead Deploy ExternalDNS Connect your kubectl client to the cluster you want to test ExternalDNS with Then apply one of the following manifests file to deploy ExternalDNS The deployment assumes that ExternalDNS will be installed into the default namespace If this namespace is different the ClusterRoleBinding will need to be updated to reflect the desired alternative namespace such as external dns kube addons etc Manifest for clusters without RBAC enabled yaml apiVersion apps v1 kind Deployment metadata name external dns spec strategy type Recreate selector matchLabels app external dns template metadata labels app external dns spec containers name external dns image registry k8s io external dns external dns v0 15 0 args source service source ingress domain filter example com optional limit to only example com domains change to match the zone created above provider azure azure resource group MyDnsResourceGroup optional use the DNS zones from the tutorial s resource group volumeMounts name azure config file mountPath etc kubernetes readOnly true volumes name azure 
config file secret secretName azure config file Manifest for clusters with RBAC enabled cluster access yaml apiVersion v1 kind ServiceAccount metadata name external dns apiVersion rbac authorization k8s io v1 kind ClusterRole metadata name external dns rules apiGroups resources services endpoints pods nodes verbs get watch list apiGroups extensions networking k8s io resources ingresses verbs get watch list apiVersion rbac authorization k8s io v1 kind ClusterRoleBinding metadata name external dns viewer roleRef apiGroup rbac authorization k8s io kind ClusterRole name external dns subjects kind ServiceAccount name external dns namespace default apiVersion apps v1 kind Deployment metadata name external dns spec strategy type Recreate selector matchLabels app external dns template metadata labels app external dns spec serviceAccountName external dns containers name external dns image registry k8s io external dns external dns v0 15 0 args source service source ingress domain filter example com optional limit to only example com domains change to match the zone created above provider azure azure resource group MyDnsResourceGroup optional use the DNS zones from the tutorial s resource group txt prefix externaldns volumeMounts name azure config file mountPath etc kubernetes readOnly true volumes name azure config file secret secretName azure config file Manifest for clusters with RBAC enabled namespace access This configuration is the same as above except it only requires privileges for the current namespace not for the whole cluster However access to nodes https kubernetes io docs concepts architecture nodes requires cluster access so when using this manifest services with type NodePort will be skipped yaml apiVersion v1 kind ServiceAccount metadata name external dns apiVersion rbac authorization k8s io v1 kind Role metadata name external dns rules apiGroups resources services endpoints pods verbs get watch list apiGroups extensions networking k8s io resources ingresses verbs get watch list apiVersion rbac authorization k8s io v1 kind RoleBinding metadata name external dns roleRef apiGroup rbac authorization k8s io kind Role name external dns subjects kind ServiceAccount name external dns apiVersion apps v1 kind Deployment metadata name external dns spec strategy type Recreate selector matchLabels app external dns template metadata labels app external dns spec serviceAccountName external dns containers name external dns image registry k8s io external dns external dns v0 15 0 args source service source ingress domain filter example com optional limit to only example com domains change to match the zone created above provider azure azure resource group MyDnsResourceGroup optional use the DNS zones from the tutorial s resource group volumeMounts name azure config file mountPath etc kubernetes readOnly true volumes name azure config file secret secretName azure config file Create the deployment for ExternalDNS bash kubectl create namespace default filename externaldns yaml Ingress Option Expose an nginx service with an ingress Create a file called nginx yaml with the following contents yaml apiVersion apps v1 kind Deployment metadata name nginx spec selector matchLabels app nginx template metadata labels app nginx spec containers image nginx name nginx ports containerPort 80 apiVersion v1 kind Service metadata name nginx svc spec ports port 80 protocol TCP targetPort 80 selector app nginx type ClusterIP apiVersion networking k8s io v1 kind Ingress metadata name nginx spec ingressClassName nginx rules 
host server example com http paths path pathType Prefix backend service name nginx svc port number 80 When you use ExternalDNS with Ingress resources it automatically creates DNS records based on the hostnames listed in those Ingress objects Those hostnames must match the filters that you defined if any By default domain filter filters Azure DNS zone If you use domain filter together with zone name filter the behavior changes domain filter then filters Ingress domains not the Azure DNS zone name When those hostnames are removed or renamed the corresponding DNS records are also altered Create the deployment service and ingress object bash kubectl create namespace default filename nginx yaml Since your external IP would have already been assigned to the nginx ingress service the DNS records pointing to the IP of the nginx ingress service should be created within a minute Azure Load Balancer option Expose an nginx service with a load balancer Create a file called nginx yaml with the following contents yaml apiVersion apps v1 kind Deployment metadata name nginx spec selector matchLabels app nginx template metadata labels app nginx spec containers image nginx name nginx ports containerPort 80 apiVersion v1 kind Service metadata name nginx svc annotations external dns alpha kubernetes io hostname server example com spec ports port 80 protocol TCP targetPort 80 selector app nginx type LoadBalancer The annotation external dns alpha kubernetes io hostname is used to specify the DNS name that should be created for the service The annotation value is a comma separated list of host names Verifying Azure DNS records Run the following command to view the A records for your Azure DNS zone bash az network dns record set a list resource group MyDnsResourceGroup zone name example com Substitute the zone for the one created above if a different domain was used This should show the external IP address of the service as the A record for your domain indicates the record is for the zone itself Delete Azure Resource Group Now that we have verified that ExternalDNS will automatically manage Azure DNS records we can delete the tutorial s resource group bash az group delete name MyDnsResourceGroup More tutorials A video explanation is available here https www youtube com watch v VSn6DPKIhM8 list PLpbcUe4chE79sB7Jg7B4z3HytqUUEwcNE image https user images githubusercontent com 6548359 235437721 87611869 75f2 4f32 bb35 9da585e46299 png
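As an optional end-to-end check for the Azure tutorial above, you can also resolve the record directly against one of the zone's name servers instead of querying the Azure API. This is only a sketch: it assumes the `server.example.com` record and the `MyDnsResourceGroup`/`example.com` names used in the examples, and that `dig` is installed locally.

```bash
# Look up the first name server of the zone (example names from this tutorial)
ZONE_NS=$(az network dns zone show --resource-group "MyDnsResourceGroup" \
  --name "example.com" --query "nameServers[0]" --output tsv)

# Resolve the record created by ExternalDNS directly against that name server
dig +short server.example.com @"$ZONE_NS"
```

If the A record has been created, this should print the external IP of the service or ingress.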
# [GCP GKE Google Kubernetes Engine DevOps 75 Real-World Demos](https://links.stacksimplify.com/gcp-google-kubernetes-engine-gke-with-devops) [![Image](images/course-title.png "Google Kubernetes Engine GKE with DevOps 75 Real-World Demos")](https://links.stacksimplify.com/gcp-google-kubernetes-engine-gke-with-devops) ## Course Modules 01. Google Cloud Account Creation 02. Create GKE Standard Public Cluster 03. Install gcloud CLI on mac OS 04. Install gcloud CLI on Windows OS 05. Docker Fundamentals 06. Kubernetes Pods 07. Kubernetes ReplicaSets 08. Kubernetes Deployment - CREATE 09. Kubernetes Deployment - UPDATE 10. Kubernetes Deployment - ROLLBACK 11. Kubernetes Deployments - Pause and Resume 12. Kubernetes ClusterIP and Load Balancer Service 13. YAML Basics 14. Kubernetes Pod & Service using YAML 15. Kubernetes ReplicaSets using YAML 16. Kubernetes Deployment using YAML 17. Kubernetes Services using YAML 18. GKE Kubernetes NodePort Service 19. GKE Kubernetes Headless Service 20. GKE Private Cluster 21. How to use GCP Persistent Disks in GKE ? 22. How to use Balanced Persistent Disk in GKE ? 23. How to use Custom Storage Class in GKE for Persistent Disks ? 24. How to use Pre-existing Persistent Disks in GKE ? 25. How to use Regional Persistent Disks in GKE ? 26. How to perform Persistent Disk Volume Snapshots and Volume Restore ? 28. GKE Workloads and Cloud SQL with Public IP 29. GKE Workloads and Cloud SQL with Private IP 30. GKE Workloads and Cloud SQL with Private IP and No ExternalName Service 31. How to use Google Cloud File Store in GKE ? 32. How to use Custom Storage Class for File Store in GKE ? 33. How to perform File Store Instance Volume Snapshots and Volume Restore ? 34. Ingress Service Basics 35. Ingress Context Path based Routing 36. Ingress Custom Health Checks using Readiness Probes 37. Register a Google Cloud Domain for some advanced Ingress Service Demos 38. Ingress with Static External IP and Cloud DNS 39. Google Managed SSL Certificates for Ingress 40. Ingress HTTP to HTTPS Redirect 41. GKE Workload Identity 42. External DNS Controller Install 43. External DNS - Ingress Service 44. External DNS - Kubernetes Service 45. Ingress Name based Virtual Host Routing 46. Ingress SSL Policy 47. Ingress with Identity-Aware Proxy 48. Ingress with Self Signed SSL Certificates 49. Ingress with Pre-shared SSL Certificates 50. Ingress with Cloud CDN, HTTP Access Logging and Timeouts 51. Ingress with Client IP Affinity 52. Ingress with Cookie Affinity 53. Ingress with Custom Health Checks using BackendConfig CRD 54. Ingress Internal Load Balancer 55. Ingress with Google Cloud Armor 56. Google Artifact Registry 57. GKE Continuous Integration 58. GKE Continuous Delivery 59. Kubernetes Liveness Probes 60. Kubernetes Startup Probes 61. Kubernetes Readiness Probe 62. Kubernetes Requests and Limits 63. GKE Cluster Autoscaling 64. Kubernetes Namespaces 65. Kubernetes Namespaces Resource Quota 66. Kubernetes Namespaces Limit Range 67. Kubernetes Horizontal Pod Autoscaler 68. GKE Autopilot Cluster 69. How to manage Multiple Cluster access in kubeconfig ? ## Kubernetes Concepts Covered 01. Kubernetes Deployments (Create, Update, Rollback, Pause, Resume) 02. Kubernetes Pods 03. Kubernetes Service of Type LoadBalancer 04. Kubernetes Service of Type ClusterIP 05. Kubernetes Ingress Service 06. Kubernetes Storage Class 07. Kubernetes Storage Persistent Volume 08. Kubernetes Storage Persistent Volume Claim 09. Kubernetes Cluster Autoscaler 10. Kubernetes Horizontal Pod Autoscaler 11. 
Kubernetes Namespaces 12. Kubernetes Namespaces Resource Quota 13. Kubernetes Namespaces Limit Range 14. Kubernetes Service Accounts 15. Kubernetes ConfigMaps 16. Kubernetes Requests and Limits 17. Kubernetes Worker Nodes 18. Kubernetes Service of Type NodePort 19. Kubernetes Service of Type Headless 20. Kubernetes ReplicaSets ## Google Services Covered 01. Google GKE Standard Cluster 02. Google GKE Autopilot Cluster 03. Compute Engine - Virtual Machines 04. Compute Engine - Storage Disks 05. Compute Engine - Storage Snapshots 06. Compute Engine - Storage Images 07. Compute Engine - Instance Groups 08. Compute Engine - Health Checks 09. Compute Engine - Network Endpoint Groups 10. VPC Networks - VPC 11. VPC Network - External and Internal IP Addresses 12. VPC Network - Firewall 13. Network Services - Load Balancing 14. Network Services - Cloud DNS 15. Network Services - Cloud CDN 16. Network Services - Cloud NAT 17. Network Services - Cloud Domains 18. Network Services - Private Service Connection 19. Network Security - Cloud Armor 20. Network Security - SSL Policies 21. IAM & Admin - IAM 22. IAM & Admin - Service Accounts 23. IAM & Admin - Roles 24. IAM & Admin - Identity-Aware Proxy 25. DevOps - Cloud Source Repositories 26. DevOps - Cloud Build 27. DevOps - Cloud Storage 28. SQL - Cloud SQL 29. Storage - Filestore 30. Google Artifact Registry 31. Operations Logging 32. GCP Monitoring ## What will students learn in your course? - You will learn to master Kubernetes on Google GKE with 75 Real-world demo's on Google Cloud Platform with 20+ Kubernetes and 30+ Google Cloud Services - You will learn Kubernetes Basics for 4.5 hours - You will create GKE Standard and Autopilot clusters with public and private networks - You will learn to implement Kubernetes Storage with Google Persistent Disks and Google File Store - You will also use Google Cloud SQL, Cloud Load Balancing to deploy a sample application outlining LB to DB usecase in GKE Cluster - You will master Kubernetes Ingress concepts in detail on GKE with 22 Real-world Demos - You will implement Ingress Context Path Routing and Name based vhost routing - You will implement Ingress with Google Managed SSL Certificates - You will master Google GKE Workload Identity with a detailed dedicated demo. - You will implement External DNS Controller to automatically add, delete DNS records automatically in Google Cloud DNS Service - You will implement Ingress with Preshared SSL and Self Signed Certificates - You will implement Ingress with Cloud CDN, Cloud Armor, Internal Load Balancer, Cookie Affinity, IP Affinity, HTTP Access Logging. - You will implement Ingress with Google Identity-Aware Proxy - You will learn to use Google Artifact Registry with GKE - You will implement DevOps Continuous Integration (CI) and Continuous Delivery (CD) with Cloud Build and Cloud Source Services - You will learn to master Kubernetes Probes (Readiness, Startup, Liveness) - You will implement Kubernetes Requests, Limits, Namespaces, Resource Quota and Limit Range - You will implement GKE Cluster Autoscaler and Horizontal Pod Autoscaler ## What are the requirements or prerequisites for taking your course? - You must have an Google Cloud account to follow with me for hands-on activities. - You don't need to have any basic knowledge of Kubernetes. Course will get started from very very basics of Kubernetes and take you to very advanced levels - Any Cloud Platform basics is required to understand the terminology ## Who is this course for? 
- Infrastructure Architects or Sysadmins or Developers who are planning to master Kubernetes from Real-World perspective on Google Cloud Platform (GCP) - Any beginner who is interested in learning Kubernetes with Google Cloud Platform (GCP) - Any beginner who is planning their career in DevOps ## Github Repositories used for this course - [Terraform on AWS EKS Kubernetes IaC SRE- 50 Real-World Demos](https://github.com/stacksimplify/terraform-on-aws-eks) - [Course Presentation](https://github.com/stacksimplify/terraform-on-aws-eks/tree/main/course-presentation) - [Kubernetes Fundamentals](https://github.com/stacksimplify/kubernetes-fundamentals) - **Important Note:** Please go to these repositories and FORK these repositories and make use of them during the course. ## Each of my courses come with - Amazing Hands-on Step By Step Learning Experiences - Real Implementation Experience - Friendly Support in the Q&A section - "30-Day "No Questions Asked" Money Back Guaranteed by Udemy" ## My Other AWS Courses - [Udemy Enroll](https://www.stacksimplify.com/azure-aks/courses/stacksimplify-best-selling-courses-on-udemy/) ## Stack Simplify Udemy Profile - [Udemy Profile](https://www.udemy.com/user/kalyan-reddy-9/) # HashiCorp Certified: Terraform Associate - 50 Practical Demos [![Image](https://stacksimplify.com/course-images/hashicorp-certified-terraform-associate-highest-rated.png "HashiCorp Certified: Terraform Associate - 50 Practical Demos")](https://links.stacksimplify.com/hashicorp-certified-terraform-associate) # AWS EKS - Elastic Kubernetes Service - Masterclass [![Image](https://stacksimplify.com/course-images/AWS-EKS-Kubernetes-Masterclass-DevOps-Microservices-course.png "AWS EKS Kubernetes - Masterclass")](https://www.udemy.com/course/aws-eks-kubernetes-masterclass-devops-microservices/?referralCode=257C9AD5B5AF8D12D1E1) # Azure Kubernetes Service with Azure DevOps and Terraform [![Image](https://stacksimplify.com/course-images/azure-kubernetes-service-with-azure-devops-and-terraform.png "Azure Kubernetes Service with Azure DevOps and Terraform")](https://www.udemy.com/course/azure-kubernetes-service-with-azure-devops-and-terraform/?referralCode=2499BF7F5FAAA506ED42) # Terraform on AWS with SRE & IaC DevOps | Real-World 20 Demos [![Image](https://stacksimplify.com/course-images/terraform-on-aws-best-seller.png "Terraform on AWS with SRE & IaC DevOps | Real-World 20 Demos")](https://links.stacksimplify.com/terraform-on-aws-with-sre-and-iacdevops) # Azure - HashiCorp Certified: Terraform Associate - 70 Demos [![Image](https://stacksimplify.com/course-images/azure-hashicorp-certified-terraform-associate-highest-rated.png "Azure - HashiCorp Certified: Terraform Associate - 70 Demos")](https://links.stacksimplify.com/azure-hashicorp-certified-terraform-associate) # Terraform on Azure with IaC DevOps and SRE | Real-World 25 Demos [![Image](https://stacksimplify.com/course-images/terraform-on-azure-with-iac-azure-devops-sre-1.png "Terraform on Azure with IaC DevOps and SRE | Real-World 25 Demos")](https://links.stacksimplify.com/terraform-on-azure-with-iac-devops-sre) # [Terraform on AWS EKS Kubernetes IaC SRE- 50 Real-World Demos](https://links.stacksimplify.com/terraform-on-aws-eks-kubernetes-iac-sre) [![Image](https://stacksimplify.com/course-images/terraform-on-aws-eks-kubernetes.png "Terraform on AWS EKS Kubernetes IaC SRE- 50 Real-World Demos ")](https://links.stacksimplify.com/terraform-on-aws-eks-kubernetes-iac-sre)
--- title: GCP Google Kubernetes Engine GKE Ingress Namebased Virtual Host Routing description: Implement GCP Google Kubernetes Engine GKE Ingress Namebased Virtual Host Routing --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT> # Replace Values CLUSTER-NAME, REGION, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 ``` 3. External DNS Controller Installed ## Step-01: Introduction 1. Requests will be routed in Load Balancer based on DNS Names 2. `app1-ingress.kalyanreddydaida.com` will send traffic to `App1 Pods` 3. `app2-ingress.kalyanreddydaida.com` will send traffic to `App2 Pods` 4. `default-ingress.kalyanreddydaida.com` will send traffic to `App3 Pods` ## Step-02: Review kube-manifests 1. 01-Nginx-App1-Deployment-and-NodePortService.yaml 2. 02-Nginx-App2-Deployment-and-NodePortService.yaml 3. 03-Nginx-App3-Deployment-and-NodePortService.yaml 4. NO CHANGES TO ABOVE 3 files - Standard Deployment and NodePort Service we are using from previous Context Path based Routing Demo ## Step-03: 04-Ingress-NameBasedVHost-Routing.yaml ```yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-namebasedvhost-routing annotations: # External Load Balancer kubernetes.io/ingress.class: "gce" # Static IP for Ingress Service kubernetes.io/ingress.global-static-ip-name: "gke-ingress-extip1" # Google Managed SSL Certificates networking.gke.io/managed-certificates: managed-cert-for-ingress # SSL Redirect HTTP to HTTPS networking.gke.io/v1beta1.FrontendConfig: "my-frontend-config" # External DNS - For creating a Record Set in Google Cloud Cloud DNS external-dns.alpha.kubernetes.io/hostname: default-ingress.kalyanreddydaida.com spec: defaultBackend: service: name: app3-nginx-nodeport-service port: number: 80 rules: - host: app1-ingress.kalyanreddydaida.com http: paths: - path: / pathType: Prefix backend: service: name: app1-nginx-nodeport-service port: number: 80 - host: app2-ingress.kalyanreddydaida.com http: paths: - path: / pathType: Prefix backend: service: name: app2-nginx-nodeport-service port: number: 80 ``` ## Step-04: 05-Managed-Certificate.yaml ```yaml apiVersion: networking.gke.io/v1 kind: ManagedCertificate metadata: name: managed-cert-for-ingress spec: domains: - default101-ingress.kalyanreddydaida.com - app101-ingress.kalyanreddydaida.com - app201-ingress.kalyanreddydaida.com ``` ## Step-05: 06-frontendconfig.yaml ```yaml apiVersion: networking.gke.io/v1beta1 kind: FrontendConfig metadata: name: my-frontend-config spec: redirectToHttps: enabled: true #responseCodeName: RESPONSE_CODE ``` ## Step-06: Deploy Kubernetes Manifests ```t # Deploy Kubernetes Manifests kubectl apply -f kube-manifests # List Deployments kubectl get deploy # List Pods kubectl get pods # List Services kubectl get svc # List Ingress Services kubectl get ingress # Verify external-dns Controller logs kubectl -n external-dns-ns logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+') [or] kubectl -n external-dns-ns get pods kubectl -n external-dns-ns logs -f <External-DNS-Pod-Name> # Verify Cloud DNS 1. Go to Network Services -> Cloud DNS -> kalyanreddydaida-com 2. 
Verify Record sets; the DNS Name we added in the Ingress Service should be present

# List FrontendConfigs
kubectl get frontendconfig

# List Managed Certificates
kubectl get managedcertificate

# Describe Managed Certificates
kubectl describe managedcertificate managed-cert-for-ingress
Observation:
1. Wait for Domain Status to change from "Provisioning" to "ACTIVE"
2. It might take a minimum of 60 minutes to provision Google Managed SSL Certificates
```

## Step-07: Access Application
```t
# Access Application
http://app1-ingress.kalyanreddydaida.com/app1/index.html
http://app2-ingress.kalyanreddydaida.com/app2/index.html
http://default-ingress.kalyanreddydaida.com
Observation:
1. All 3 URLs should work as expected. In your case, replace YOUR_DOMAIN name for testing
2. HTTP to HTTPS redirect should work
```

## Step-08: Access Application - Negative usecase Testing
```t
# Access Application - App1 DNS Name
http://app1-ingress.kalyanreddydaida.com/app2/index.html
Observation: SHOULD FAIL - In Pod App1 we don't have the app2 context path (app2 folder) - 404 ERROR

# Access Application - App2 DNS Name
http://app2-ingress.kalyanreddydaida.com/app1/index.html
Observation: SHOULD FAIL - In Pod App2 we don't have the app1 context path (app1 folder) - 404 ERROR

# Access Application - App3 or Default DNS Name
http://default-ingress.kalyanreddydaida.com/app1/index.html
Observation: SHOULD FAIL - In Pod App3 we don't have the app1 context path (app1 folder) - 404 ERROR
```

## Step-09: Clean-Up
- DONT DELETE, WE ARE GOING TO USE THESE KUBERNETES RESOURCES IN NEXT DEMO RELATED TO SSL-POLICY

## References
- [Ingress Features](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features)
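In addition to checking the console as described in Step-06, the same record sets can be verified from the CLI. A small sketch, assuming the example zone `kalyanreddydaida-com` and the hostnames used in this demo (replace with your own zone and domain):

```t
# List record sets created by external-dns in the Cloud DNS zone
gcloud dns record-sets list --zone=kalyanreddydaida-com --filter="name ~ ingress"

# Resolve one of the hostnames once the records have propagated
nslookup app1-ingress.kalyanreddydaida.com
```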
--- title: GCP Google Kubernetes Engine Autopilot Cluster description: Implement GCP Google Kubernetes Engine GKE Autopilot Cluster --- ## Step-01: Introduction - Create GKE Autopilot Cluster - Understand in detail about GKE Autopilot cluster ## Step-02: Pre-requisite: Verify if Cloud NAT Gateway created - Verify if Cloud NAT Gateway created in `Region:us-central1` where you are planning to create GKE Autopilot Private Cluster - This is required for Workload in Private subnets to connect to Internet. - Primarily to Connect to Docker Hub to pull the Docker Images - Go to Network Services -> Cloud NAT ## Step-03: Create GKE Autopilot Private Cluster - Go to Kubernetes Engine -> Clusters -> **CREATE** - Create Cluster -> GKE Autopilot -> **CONFIGURE** - **Name:** autopilot-cluster-private-1 - **Region:** us-central1 - **Network access:** Private Cluster - **Access control plane using its external IP address:** CHECK - **Control plane ip range:** 172.18.0.0/28 - **Enable control plane authorized networks:** CHECK - **Authorized networks:** - **Name:** internet-access - **Network:** 0.0.0.0/0 - Click on **DONE** - **Network:** default (LEAVE TO DEFAULTS) - **Node subnet:** default (LEAVE TO DEFAULTS) - **Cluster default pod address range:** /17 (LEAVE TO DEFAULTS) - **Service Address range:** /22 (LEAVE TO DEFAULTS) - **Release Channel:** Regular Channel (Default) - REST ALL LEAVE TO DEFAULTS - Click on **CREATE** ## Step-04: Configure kubectl for kubeconfig ```t # Configure kubectl for kubeconfig gcloud container clusters get-credentials CLUSTER-NAME --region REGION --project PROJECT-NAME # Replace values CLUSTER-NAME, REGION, PROJECT-NAME gcloud container clusters get-credentials autopilot-cluster-private-1 --region us-central1 --project kdaida123 # List Kubernetes Nodes kubectl get nodes kubectl get nodes -o wide ``` ## Step-05: Review Kubernetes Manifests ### Step-05-01: 01-kubernetes-deployment.yaml ```yaml apiVersion: apps/v1 kind: Deployment metadata: #Dictionary name: myapp1-deployment spec: # Dictionary replicas: 5 selector: matchLabels: app: myapp1 template: metadata: # Dictionary name: myapp1-pod labels: # Dictionary app: myapp1 # Key value pairs spec: containers: # List - name: myapp1-container image: stacksimplify/kubenginx:1.0.0 ports: - containerPort: 80 resources: requests: memory: "128Mi" # 128 MebiByte is equal to 135 Megabyte (MB) cpu: "200m" # `m` means milliCPU limits: memory: "256Mi" cpu: "400m" # 1000m is equal to 1 VCPU core ``` ### Step-05-02: 02-kubernetes-loadbalancer-service.yaml ```yaml apiVersion: v1 kind: Service metadata: name: myapp1-lb-service spec: type: LoadBalancer # ClusterIp, # NodePort selector: app: myapp1 ports: - name: http port: 80 # Service Port targetPort: 80 # Container Port ``` ## Step-06: Deploy Kubernetes Manifests ```t # Deploy Kubernetes Manifests kubectl apply -f kube-manifests # List Deployments kubectl get deploy # List Pods kubectl get pods # List Services kubectl get svc # Access Application http://<EXTERNAL-IP-OF-GET-SERVICE-OUTPUT> ``` ## Step-07: Scale your Application ```t # Scale your Application kubectl scale --replicas=15 deployment/myapp1-deployment # List Pods kubectl get pods # List Nodes kubectl get nodes ``` ## Step-08: Clean-Up ```t # Delete Kubernetes Resources kubectl delete -f kube-manifests # Delete GKE Autopilot Cluster # NOTE: Dont delete this cluster, as we are going to use this in next demo. 
Go to Kubernetes Engine > Clusters -> autopilot-cluster-private-1 -> DELETE
```

## References
- https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview#default_container_resource_requests
- https://cloud.google.com/kubernetes-engine/quotas#limits_per_cluster
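One thing worth watching during Step-07 above is Autopilot provisioning new nodes as the Deployment scales out. A quick way to observe that from a second terminal, using plain kubectl (nothing Autopilot-specific is assumed; replace the pod name placeholder):

```t
# Watch nodes being added while the Deployment scales from 5 to 15 replicas
kubectl get nodes -w

# Confirm the resource requests/limits applied to a Pod (replace <POD-NAME>)
kubectl get pod <POD-NAME> -o jsonpath='{.spec.containers[0].resources}'
```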
gcp gke docs title GCP Google Kubernetes Engine GKE Service with External DNS gcloud container clusters get credentials CLUSTER NAME region REGION project PROJECT Step 00 Pre requisites 2 Verify if kubeconfig for kubectl is configured in your local terminal Configure kubeconfig for kubectl 1 Verify if GKE Cluster is created Implement GCP Google Kubernetes Engine GKE Service with External DNS t
---
title: GCP Google Kubernetes Engine GKE Service with External DNS
description: Implement GCP Google Kubernetes Engine GKE Service with External DNS
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123
```
3. External DNS Controller installed

## Step-01: Introduction
- Kubernetes Service of Type LoadBalancer with External DNS
- We are going to use the annotation `external-dns.alpha.kubernetes.io/hostname` in the Kubernetes Service.
- DNS record sets will be automatically added to Google Cloud DNS by the external-dns controller when the Service is deployed.

## Step-02: 01-kubernetes-deployment.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata: # Dictionary
  name: myapp1-deployment
spec: # Dictionary
  replicas: 2
  selector:
    matchLabels:
      app: myapp1
  template:
    metadata: # Dictionary
      name: myapp1-pod
      labels: # Dictionary
        app: myapp1 # Key value pairs
    spec:
      containers: # List
        - name: myapp1-container
          image: stacksimplify/kubenginx:1.0.0
          ports:
            - containerPort: 80
```

## Step-03: 02-kubernetes-loadbalancer-service.yaml
```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp1-lb-service
  annotations:
    # External DNS - For creating a Record Set in Google Cloud DNS
    external-dns.alpha.kubernetes.io/hostname: extdns-k8s-svc-demo.kalyanreddydaida.com
spec:
  type: LoadBalancer # ClusterIP, # NodePort
  selector:
    app: myapp1
  ports:
    - name: http
      port: 80 # Service Port
      targetPort: 80 # Container Port
```

## Step-05: Deploy Kubernetes Manifests
```t
# Deploy Kubernetes Manifests
kubectl apply -f kube-manifests

# List Deployments
kubectl get deploy

# List Pods
kubectl get pods

# List Services
kubectl get svc

# Verify external-dns Controller logs
kubectl -n external-dns-ns logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+')
[or]
kubectl -n external-dns-ns get pods
kubectl -n external-dns-ns logs -f <External-DNS-Pod-Name>

# Verify Cloud DNS
1. Go to Network Services -> Cloud DNS -> kalyanreddydaida-com
2. Verify Record sets; the DNS name we added in the Kubernetes Service should be present

# Access Application
http://<DNS-Name>
http://extdns-k8s-svc-demo.kalyanreddydaida.com
```

## Step-06: Delete kube-manifests
```t
# Delete Kubernetes Objects
kubectl delete -f kube-manifests/

# Verify external-dns Controller logs
kubectl -n external-dns-ns logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+')
[or]
kubectl -n external-dns-ns get pods
kubectl -n external-dns-ns logs -f <External-DNS-Pod-Name>

# Verify Cloud DNS
1. Go to Network Services -> Cloud DNS -> kalyanreddydaida-com
2. Verify Record sets; the DNS name we added in the Kubernetes Service should no longer be present (already deleted)
```
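As an optional CLI check for the Cloud DNS verification above (a sketch; the managed zone name `kalyanreddydaida-com` is taken from the console path in Step-05 and may differ in your project):

```t
# List the record created by external-dns in the Cloud DNS managed zone (zone name is an assumption)
gcloud dns record-sets list --zone=kalyanreddydaida-com --name="extdns-k8s-svc-demo.kalyanreddydaida.com."

# Verify DNS resolution from your terminal
nslookup extdns-k8s-svc-demo.kalyanreddydaida.com
```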
---
title: GKE Storage with GCP File Store - Backup and Restore
description: Use GCP File Store for GKE Workloads - Implement Backup and Restore
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123
```

## Step-01: Introduction
- GKE Storage with GCP File Store
- Implement backups using `VolumeSnapshotClass` and `VolumeSnapshot`
- Implement restore of FileStore in the myapp2 application and verify

## Step-02: YAML files are same as first FileStore Demo
- **Project Folder:** 01-myapp1-kube-manifests
- YAML files are same as first FileStore Demo
  - 01-filestore-pvc.yaml
  - 02-write-to-filestore-pod.yaml
  - 03-myapp1-deployment.yaml
  - 04-loadBalancer-service.yaml

## Step-03: Deploy 01-myapp1-kube-manifests and Verify
```t
# Deploy 01-myapp1-kube-manifests
kubectl apply -f 01-myapp1-kube-manifests

# List Storage Class
kubectl get sc

# List PVC
kubectl get pvc

# List PV
kubectl get pv

# List Pods
kubectl get pods
```

## Step-04: Verify GCP Cloud FileStore Instance
- Go to FileStore -> Instances
- Click on **Instance ID: pvc-27cd5c27-0ed0-48d1-bc5f-925adfb8495f**
- **Note:** The Instance ID is dynamically generated; it can be different in your case, starting with pvc-*

## Step-05: Connect to filestore write app Kubernetes pods and Verify
```t
# FileStore write app - Connect to Kubernetes Pod
kubectl exec --stdin --tty <POD-NAME> -- /bin/sh
kubectl exec --stdin --tty filestore-writer-app -- /bin/sh
cd /data
ls
tail -f myapp1.txt
exit
```

## Step-06: Connect to myapp1 Kubernetes pods and Verify
```t
# List Pods
kubectl get pods

# myapp1 POD1 - Connect to Kubernetes Pod
kubectl exec --stdin --tty <POD-NAME> -- /bin/sh
kubectl exec --stdin --tty myapp1-deployment-5d469f6478-2kp97 -- /bin/sh
cd /usr/share/nginx/html/filestore
ls
tail -f myapp1.txt
exit

# myapp1 POD2 - Connect to Kubernetes Pod
kubectl exec --stdin --tty <POD-NAME> -- /bin/sh
kubectl exec --stdin --tty myapp1-deployment-5d469f6478-2kp97 -- /bin/sh
cd /usr/share/nginx/html/filestore
ls
tail -f myapp1.txt
exit
```

## Step-07: Access Application
```t
# List Services
kubectl get svc

# myapp1 - Access Application
http://<EXTERNAL-IP-OF-GET-SERVICE-OUTPUT>/filestore/myapp1.txt
http://35.232.145.61/filestore/myapp1.txt
curl http://35.232.145.61/filestore/myapp1.txt
```

## Step-08: Volume Backup: 01-VolumeSnapshotClass.yaml
- **Project Folder:** 02-volume-backup-kube-manifests
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-gcp-filestore-backup-snap-class
driver: filestore.csi.storage.gke.io
parameters:
  type: backup
deletionPolicy: Delete
```

## Step-09: Volume Backup: 02-VolumeSnapshot.yaml
- **Project Folder:** 02-volume-backup-kube-manifests
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: myapp1-volume-snapshot
spec:
  volumeSnapshotClassName: csi-gcp-filestore-backup-snap-class
  source:
    persistentVolumeClaimName: gke-filestore-pvc
```

## Step-10: Volume Backup: Deploy 02-volume-backup-kube-manifests and Verify
```t
# Deploy 02-volume-backup-kube-manifests
kubectl apply -f 02-volume-backup-kube-manifests

# List VolumeSnapshotClass
kubectl get volumesnapshotclass

# Describe VolumeSnapshotClass
kubectl describe volumesnapshotclass csi-gcp-filestore-backup-snap-class

# List VolumeSnapshot
kubectl get volumesnapshot

# Describe VolumeSnapshot
kubectl describe volumesnapshot myapp1-volume-snapshot
```

## Step-11: Volume Backup: Verify GCP Cloud FileStore Backups
- Go to FileStore -> Backups
- Observation: You should find a Backup with name `snapshot-<SOME-ID>` (Example: snapshot-b4f24bd7-649b-45bb-8a0a-2b09d5b0e631)

## Step-12: Volume Restore: 01-filestore-pvc.yaml
- **Project Folder:** 03-volume-restore-myapp2-kube-manifests
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: restored-filestore-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard-rwx
  resources:
    requests:
      storage: 1Ti
  dataSource:
    kind: VolumeSnapshot
    name: myapp1-volume-snapshot
    apiGroup: snapshot.storage.k8s.io
```

## Step-13: Volume Restore: 02-myapp2-deployment.yaml
- **Project Folder:** 03-volume-restore-myapp2-kube-manifests
```yaml
apiVersion: apps/v1
kind: Deployment
metadata: # Dictionary
  name: myapp2-deployment
spec: # Dictionary
  replicas: 2
  selector:
    matchLabels:
      app: myapp2
  template:
    metadata: # Dictionary
      name: myapp2-pod
      labels: # Dictionary
        app: myapp2 # Key value pairs
    spec:
      containers: # List
        - name: myapp2-container
          image: stacksimplify/kubenginx:1.0.0
          ports:
            - containerPort: 80
          volumeMounts:
            - name: persistent-storage
              mountPath: /usr/share/nginx/html/filestore
      volumes:
        - name: persistent-storage
          persistentVolumeClaim:
            claimName: restored-filestore-pvc
```

## Step-14: Volume Restore: 03-myapp2-loadBalancer-service.yaml
- **Project Folder:** 03-volume-restore-myapp2-kube-manifests
```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp2-lb-service
spec:
  type: LoadBalancer # ClusterIP, # NodePort
  selector:
    app: myapp2
  ports:
    - name: http
      port: 80 # Service Port
      targetPort: 80 # Container Port
```

## Step-15: Volume Restore: Deploy 03-volume-restore-myapp2-kube-manifests and Verify
```t
# Deploy 03-volume-restore-myapp2-kube-manifests
kubectl apply -f 03-volume-restore-myapp2-kube-manifests

# List Storage Class
kubectl get sc

# List PVC
kubectl get pvc

# List PV
kubectl get pv

# List Pods
kubectl get pods

# Verify if a new FileStore Instance is created
Go to -> FileStore -> Instances
```

## Step-16: Volume Restore: Connect to myapp2 Kubernetes pods and Verify
```t
# List Pods
kubectl get pods

# myapp2 POD1 - Connect to Kubernetes Pod
kubectl exec --stdin --tty <POD-NAME> -- /bin/sh
kubectl exec --stdin --tty myapp2-deployment-6dccd6557-9x6dn -- /bin/sh
cd /usr/share/nginx/html/filestore
ls
tail -f myapp1.txt
exit

# myapp2 POD2 - Connect to Kubernetes Pod
kubectl exec --stdin --tty <POD-NAME> -- /bin/sh
kubectl exec --stdin --tty myapp2-deployment-6dccd6557-mbbjm -- /bin/sh
cd /usr/share/nginx/html/filestore
ls
tail -f myapp1.txt
exit
```

## Step-17: Volume Restore: Access Applications
```t
# List Services
kubectl get svc

# myapp1 - Access Application
http://<MYAPP1-EXTERNAL-IP-OF-GET-SERVICE-OUTPUT>/filestore/myapp1.txt
http://35.232.145.61/filestore/myapp1.txt

# myapp2 - Access Application
http://<MYAPP2-EXTERNAL-IP-OF-GET-SERVICE-OUTPUT>/filestore/myapp1.txt
http://34.71.145.41/filestore/myapp1.txt

OBSERVATION:
1. For MyApp1, the writer app is still writing to FileStore, so we see the latest timestamped lines (many lines, and the file keeps growing)
2. For MyApp2, we restored it from backup, so only the lines present in the file at the time of the snapshot are displayed.
3. KEY here is that we are able to successfully use the FileStore backup for our Kubernetes workloads
```

## Step-18: Clean-Up
```t
# Delete Kubernetes Objects
kubectl delete -f 01-myapp1-kube-manifests -f 02-volume-backup-kube-manifests -f 03-volume-restore-myapp2-kube-manifests

# Verify if the two FileStore Instances are deleted
Go to -> FileStore -> Instances

# Verify if the FileStore Backup is deleted
Go to -> FileStore -> Backups
```
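If the restored PVC ever stays in `Pending`, a quick way to confirm the snapshot side is healthy before digging further (an optional check, not part of the original project folders):

```t
# Check that the VolumeSnapshot is bound and ready to use (should print "true")
kubectl get volumesnapshot myapp1-volume-snapshot -o jsonpath='{.status.readyToUse}{"\n"}'

# Inspect the backing VolumeSnapshotContent object created by the FileStore CSI driver
kubectl get volumesnapshotcontent
```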
---
title: GCP Google Kubernetes Engine GKE Artifact Registry
description: Implement GCP Google Kubernetes Engine GKE Artifact Registry
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123

# List Kubernetes Nodes
kubectl get nodes
```

## Step-01: Introduction
- Build a Docker Image
- Create a Docker repository in Google Artifact Registry
- Set up authentication
- Push an image to the repository
- Pull the image from the repository and create a Deployment in the GKE Cluster
- Access the sample application in a browser and verify

## Step-02: Create Dockerfile
- **Dockerfile**
```t
FROM nginx
COPY index.html /usr/share/nginx/html
```

## Step-03: Build Docker Image
```t
# Change Directory
cd google-kubernetes-engine/56-GKE-Artifact-Registry/
cd 01-Docker-Image

# Build Docker Image
docker build -t myapp1:v1 .

# List Docker Image
docker images myapp1
```

## Step-04: Run Docker Image
```t
# Run Docker Image
docker run --name myapp1 -p 80:80 -d myapp1:v1

# Access in browser
http://localhost

# List Running Docker Containers
docker ps

# Stop Docker Container
docker stop myapp1

# List All Docker Containers (Stopped Containers)
docker ps -a

# Delete Stopped Container
docker rm myapp1

# List All Docker Containers (Stopped Containers)
docker ps -a
```

## Step-05: Create Google Artifact Registry
- Go to Artifact Registry -> Repositories -> Create
```t
# Create Google Artifact Registry
Name: gke-artifact-repo1
Format: Docker
Region: us-central1
Encryption: Google-managed encryption key
Click on Create
```

## Step-06: Configure Google Artifact Repository authentication
```t
# Google Artifact Repository authentication
## To set up authentication to Docker repositories in the region us-central1
gcloud auth configure-docker <LOCATION>-docker.pkg.dev
gcloud auth configure-docker us-central1-docker.pkg.dev
```

## Step-07: Tag & push the Docker image to Google Artifact Registry
```t
# Tag the Docker Image
docker tag myapp1:v1 <LOCATION>-docker.pkg.dev/<GOOGLE-PROJECT-ID>/<GOOGLE-ARTIFACT-REGISTRY-NAME>/<IMAGE-NAME>:<IMAGE-TAG>

# Replace Values for docker tag command
# - LOCATION
# - GOOGLE-PROJECT-ID
# - GOOGLE-ARTIFACT-REGISTRY-NAME
# - IMAGE-NAME
# - IMAGE-TAG
docker tag myapp1:v1 us-central1-docker.pkg.dev/kdaida123/gke-artifact-repo1/myapp1:v1

# Push the Docker Image to Google Artifact Registry
docker push us-central1-docker.pkg.dev/kdaida123/gke-artifact-repo1/myapp1:v1
```

## Step-08: Verify the Docker Image on Google Artifact Registry
- Go to Google Artifact Registry -> Repositories -> gke-artifact-repo1
- Review the **myapp1** Docker Image

## Step-09: Update Docker Image and Review kube-manifests
- **Project-Folder:** 02-kube-manifests
```yaml
# Docker Image
image: us-central1-docker.pkg.dev/<GCP-PROJECT-ID>/<ARTIFACT-REPO>/myapp1:v1

# Update Docker Image in 01-kubernetes-deployment.yaml
image: us-central1-docker.pkg.dev/kdaida123/gke-artifact-repo1/myapp1:v1
```

## Step-10: Deploy kube-manifests
```t
# Deploy kube-manifests
kubectl apply -f 02-kube-manifests

# List Deployments
kubectl get deploy

# List Pods
kubectl get pods

# Describe Pod
kubectl describe pod <POD-NAME>

## Observation: Verify Events in the "kubectl describe pod <POD-NAME>" output
### We should see the image pulled from "us-central1-docker.pkg.dev/kdaida123/gke-artifact-repo1/myapp1:v1"
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  86s   default-scheduler  Successfully assigned default/myapp1-deployment-5f8d5c6f48-pb686 to gke-standard-cluster-1-default-pool-2c852f67-46hv
  Normal  Pulling    85s   kubelet            Pulling image "us-central1-docker.pkg.dev/kdaida123/gke-artifact-repo1/myapp1:v1"
  Normal  Pulled     81s   kubelet            Successfully pulled image "us-central1-docker.pkg.dev/kdaida123/gke-artifact-repo1/myapp1:v1" in 4.285567138s
  Normal  Created    81s   kubelet            Created container myapp1-container
  Normal  Started    80s   kubelet            Started container myapp1-container

# List Services
kubectl get svc

# Access Application
http://<SVC-EXTERNAL-IP>
```

## Step-11: Clean-Up
```t
# Undeploy sample App
kubectl delete -f 02-kube-manifests
```

## References
- [Google Artifact Registry](https://cloud.google.com/artifact-registry/docs/overview)
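As an optional CLI alternative to the console verification in Step-08 (a sketch, reusing the repository, region, and project values from this demo):

```t
# List Docker images stored in the Artifact Registry repository
gcloud artifacts docker images list us-central1-docker.pkg.dev/kdaida123/gke-artifact-repo1

# Describe the repository itself
gcloud artifacts repositories describe gke-artifact-repo1 --location=us-central1
```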
---
title: GCP Google Kubernetes Engine GKE Ingress Cookie Affinity
description: Implement GCP Google Kubernetes Engine GKE Ingress Cookie Affinity
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123

# List Kubernetes Nodes
kubectl get nodes
```
3. ExternalDNS Controller should be installed and ready to use
```t
# List Namespaces (external-dns-ns namespace should be present)
kubectl get ns

# List External DNS Pods
kubectl -n external-dns-ns get pods
```

## Step-01: Introduction
- Implement the following features for the Ingress Service
  - BackendConfig
  - GENERATED_COOKIE Affinity for the Ingress Service
- We are going to create two projects
  - **Project-01:** GENERATED_COOKIE Affinity enabled
  - **Project-02:** GENERATED_COOKIE Affinity disabled

## Step-02: Create External IP Address using gcloud
```t
# Create External IP Address 1 (IF NOT CREATED - ALREADY CREATED IN PREVIOUS SECTIONS)
gcloud compute addresses create ADDRESS_NAME --global
gcloud compute addresses create gke-ingress-extip1 --global

# Create External IP Address 2
gcloud compute addresses create ADDRESS_NAME --global
gcloud compute addresses create gke-ingress-extip2 --global

# Describe External IP Address (to get the reserved IP)
gcloud compute addresses describe ADDRESS_NAME --global
gcloud compute addresses describe gke-ingress-extip2 --global

# Verify
Go to VPC Network -> IP Addresses -> External IP Address
```

## Step-03: Project-01: Review YAML Manifests
- **Project Folder:** 01-kube-manifests-with-cookie-affinity
- 01-kubernetes-deployment.yaml
- 02-kubernetes-NodePort-service.yaml
- 03-ingress.yaml
- 04-backendconfig.yaml
```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout
  connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout
    drainingTimeoutSec: 62
  logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging
    enable: true
    sampleRate: 1.0
  sessionAffinity:
    affinityType: "GENERATED_COOKIE"
```

## Step-04: Project-02: Review YAML Manifests
- **Project Folder:** 02-kube-manifests-without-cookie-affinity
- 01-kubernetes-deployment.yaml
- 02-kubernetes-NodePort-service.yaml
- 03-ingress.yaml
- 04-backendconfig.yaml
```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig2
spec:
  timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout
  connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout
    drainingTimeoutSec: 62
  logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging
    enable: true
    sampleRate: 1.0
```

## Step-05: Deploy Kubernetes Manifests
```t
# Project-01: Deploy Kubernetes Manifests
kubectl apply -f 01-kube-manifests-with-cookie-affinity

# Project-02: Deploy Kubernetes Manifests
kubectl apply -f 02-kube-manifests-without-cookie-affinity

# Verify Deployments
kubectl get deploy

# Verify Pods
kubectl get pods

# Verify Node Port Services
kubectl get svc

# Verify Ingress Services
kubectl get ingress

# Verify Backend Config
kubectl get backendconfig

# Project-01: Verify Load Balancer Settings
Go to Network Services -> Load Balancing -> Load Balancer -> Backends -> Verify Cookie Affinity Setting
Observation: Cookie Affinity setting should be in enabled state

# Project-02: Verify Load Balancer Settings
Go to Network Services -> Load Balancing -> Load Balancer -> Backends -> Verify Cookie Affinity Setting
Observation: Cookie Affinity setting should be in disabled state
```

## Step-06: Access Application
```t
# Project-01: Access Application using DNS or ExtIP
http://ingress-with-cookie-affinity.kalyanreddydaida.com
http://<EXT-IP-1>
Observation:
1. Requests will always keep going to only one Pod due to the GENERATED_COOKIE affinity we configured

# Project-02: Access Application using DNS or ExtIP
http://ingress-without-cookie-affinity.kalyanreddydaida.com
http://<EXT-IP-2>
Observation:
1. Requests will be load balanced across the 4 pods created as part of the "cdn-demo2" deployment.
```

## Step-07: Clean-Up
```t
# Project-01: Delete Kubernetes Resources
kubectl delete -f 01-kube-manifests-with-cookie-affinity

# Project-02: Delete Kubernetes Resources
kubectl delete -f 02-kube-manifests-without-cookie-affinity
```

## References
- [Ingress Features](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features)
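Cookie affinity is also easy to observe from the terminal. A hedged sketch: with GENERATED_COOKIE affinity the external HTTP(S) load balancer issues an affinity cookie (typically named `GCLB`), and replaying it should keep requests pinned to the same pod:

```t
# Project-01: first request - capture the affinity cookie issued by the load balancer
curl -s -c /tmp/cookies.txt -D - -o /dev/null http://ingress-with-cookie-affinity.kalyanreddydaida.com

# Replay several requests with the saved cookie - responses should come from the same pod
for i in 1 2 3 4 5; do curl -s -b /tmp/cookies.txt http://ingress-with-cookie-affinity.kalyanreddydaida.com; done

# Project-02: without affinity, repeated requests rotate across pods
for i in 1 2 3 4 5; do curl -s http://ingress-without-cookie-affinity.kalyanreddydaida.com; done
```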
---
title: GCP Google Kubernetes Engine GKE Ingress and Cloud CDN
description: Implement GCP Google Kubernetes Engine GKE Ingress and Cloud CDN
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123

# List Kubernetes Nodes
kubectl get nodes
```
3. ExternalDNS Controller should be installed and ready to use
```t
# List Namespaces (external-dns-ns namespace should be present)
kubectl get ns

# List External DNS Pods
kubectl -n external-dns-ns get pods
```

## Step-01: Introduction
- Implement the following features for the Ingress Service
  1. BackendConfig for Ingress Service
  2. Backend Service Timeout
  3. Connection Draining
  4. Ingress Service HTTP Access Logging
  5. Enable Cloud CDN

## Step-02: 01-kubernetes-deployment.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cdn-demo-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: cdn-demo
  template:
    metadata:
      labels:
        app: cdn-demo
    spec:
      containers:
        - name: cdn-demo
          image: us-docker.pkg.dev/google-samples/containers/gke/hello-app-cdn:1.0
          ports:
            - containerPort: 8080
```

## Step-03: 02-kubernetes-NodePort-service.yaml
- Update Backend Config with annotation **cloud.google.com/backend-config: '{"default": "my-backendconfig"}'**
```yaml
apiVersion: v1
kind: Service
metadata:
  name: cdn-demo-nodeport-service
  annotations:
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
spec:
  type: NodePort
  selector:
    app: cdn-demo
  ports:
    - port: 80
      targetPort: 8080
```

## Step-04: 03-ingress.yaml
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-cdn-demo
  annotations:
    # External Load Balancer
    kubernetes.io/ingress.class: "gce"
    # Static IP for Ingress Service
    kubernetes.io/ingress.global-static-ip-name: "gke-ingress-extip1"
    # External DNS - For creating a Record Set in Google Cloud DNS
    external-dns.alpha.kubernetes.io/hostname: ingress-cdn-demo.kalyanreddydaida.com
spec:
  defaultBackend:
    service:
      name: cdn-demo-nodeport-service
      port:
        number: 80
```

## Step-05: 04-backendconfig.yaml
```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout
  connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout
    drainingTimeoutSec: 62
  logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging
    enable: true
    sampleRate: 1.0
  cdn:
    enabled: true
    cachePolicy:
      includeHost: true
      includeProtocol: true
      includeQueryString: false
```

## Step-06: Deploy Kubernetes Manifests
```t
# Deploy Kubernetes Manifests
kubectl apply -f kube-manifests

# List Deployments
kubectl get deploy

# List Pods
kubectl get pods

# List Services
kubectl get svc

# List Ingress Service
kubectl get ingress

# List Backend Config
kubectl get backendconfig
kubectl describe backendconfig my-backendconfig
```

## Step-07: Verify Settings in Load Balancer
- Go to Network Services -> Load Balancing -> Click on Load Balancer
- Go to Backend -> Backend Services
- Verify the Settings
  - **Timeout:** 42 seconds
  - **Connection draining timeout:** 62 seconds
  - **Cloud CDN:** Enabled
  - **Logging: Enabled** (sample rate: 1)

## Step-08: Verify Cloud CDN
- Go to Network Services -> Cloud CDN -> (automatically created when the Ingress is deployed: k8s1-c6634a10-default-cdn-demo-nodeport-service-80-553facae)
- Verify Settings
  - DETAILS TAB
  - MONITORING TAB
  - CACHING TAB

## Step-09: Access Application and Verify Cache Age
```t
# Access Application
http://<DNS-NAME-FROM-INGRESS-SERVICE>
[or]
http://<IP-ADDRESS-FROM-INGRESS-SERVICE-OUTPUT>

# Access Application using DNS Name
http://ingress-cdn-demo.kalyanreddydaida.com
curl -v http://ingress-cdn-demo.kalyanreddydaida.com/?cache=true
curl -v http://ingress-cdn-demo.kalyanreddydaida.com
curl -v http://ingress-cdn-demo.kalyanreddydaida.com

## Important Note:
1. The output shows the response headers and body.
2. In the response headers, you can see that the content was cached. The Age header tells you how many seconds the content has been cached.

## Sample Output
Kalyans-Mac-mini:46-GKE-Ingress-Cloud-CDN kalyanreddy$ curl -v http://ingress-cdn-demo.kalyanreddydaida.com
*   Trying 34.120.32.120:80...
* Connected to ingress-cdn-demo.kalyanreddydaida.com (34.120.32.120) port 80 (#0)
> GET / HTTP/1.1
> Host: ingress-cdn-demo.kalyanreddydaida.com
> User-Agent: curl/7.79.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Length: 76
< Via: 1.1 google
< Date: Thu, 23 Jun 2022 04:47:42 GMT
< Content-Type: text/plain; charset=utf-8
< Age: 1625
< Cache-Control: max-age=3600,public
<
Hello, world!
Version: 1.0.0
Hostname: cdn-demo-deployment-6f4c8f655d-htpsn
* Connection #0 to host ingress-cdn-demo.kalyanreddydaida.com left intact
Kalyans-Mac-mini:46-GKE-Ingress-Cloud-CDN kalyanreddy$
```

## Step-10: Verify Cloud CDN Monitoring Tab
- Go to Network Services -> Cloud CDN -> MONITORING Tab
- Review Charts
  - CDN Bandwidth
  - CDN Hit Rate
  - CDN Fill Rate
  - CDN Egress Rate
  - Requests
  - Response Codes

## Step-11: Verify Ingress Service Logs in Cloud Logging
- Go to Cloud Logging -> Logs Explorer -> Log Fields -> Select
  - Resource Type: Cloud HTTP Load Balancer
  - Severity: Info
  - Project ID: kdaida123
- Review the logs
- Access the application and review the logs in parallel
```t
# Access Application
curl -v http://ingress-cdn-demo.kalyanreddydaida.com
```

## Step-12: Verify Ingress Service Logs in Cloud Logging using Other Approach
- Go to Cloud Logging -> Logs Dashboard
- Go to Chart -> HTTP/S Load Balancer Logs By Severity -> Click on **VIEW LOGS**

## References
- [Ingress Features](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features)
- [Caching overview](https://cloud.google.com/cdn/docs/caching#cacheability)
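Two optional follow-ups that are handy with this setup (a sketch; the URL map name used by the load balancer is auto-generated by GKE and must be looked up first, so it is left as a placeholder):

```t
# Show only the cache-related response headers (Age grows while the object stays cached)
curl -sI http://ingress-cdn-demo.kalyanreddydaida.com | grep -iE 'age|cache-control|via'

# Invalidate the Cloud CDN cache for this load balancer after changing the app
gcloud compute url-maps list
gcloud compute url-maps invalidate-cdn-cache <URL-MAP-NAME> --path "/*"
```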
---
title: GCP Google Kubernetes Engine GKE Ingress with External IP
description: Implement GCP Google Kubernetes Engine GKE Ingress with External IP
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123
```
3. Registered Domain using Google Cloud Domains

## Step-01: Introduction
- Reserve an External IP Address
- Using the annotation `kubernetes.io/ingress.global-static-ip-name`, associate this External IP with the Ingress Service

## Step-02: Create External IP Address using gcloud
```t
# Create External IP Address
gcloud compute addresses create ADDRESS_NAME --global
gcloud compute addresses create gke-ingress-extip1 --global

# Describe External IP Address
gcloud compute addresses describe ADDRESS_NAME --global
gcloud compute addresses describe gke-ingress-extip1 --global

# List External IP Addresses
gcloud compute addresses list

# Verify
Go to VPC Network -> IP Addresses -> External IP Address
```

## Step-03: Add RECORD SET in Google Cloud DNS for this External IP
- Go to Network Services -> Cloud DNS -> kalyanreddydaida.com -> **ADD RECORD SET**
- **DNS Name:** demo1.kalyanreddydaida.com
- **IPv4 Address:** <EXTERNAL-IP-RESERVED-IN-STEP-02>
- Click on **CREATE**

## Step-04: Verify DNS resolving to IP
```t
# nslookup test
nslookup demo1.kalyanreddydaida.com

## Sample Output
Kalyans-Mac-mini:google-kubernetes-engine kalyanreddy$ nslookup demo1.kalyanreddydaida.com
Server:    192.168.2.1
Address:   192.168.2.1#53

Non-authoritative answer:
Name:   demo1.kalyanreddydaida.com
Address: 34.120.32.120

Kalyans-Mac-mini:google-kubernetes-engine kalyanreddy$
```

## Step-05: 04-Ingress-external-ip.yaml
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-external-ip
  annotations:
    # External Load Balancer
    kubernetes.io/ingress.class: "gce"
    # Static IP for Ingress Service
    kubernetes.io/ingress.global-static-ip-name: "gke-ingress-extip1"
spec:
  defaultBackend:
    service:
      name: app3-nginx-nodeport-service
      port:
        number: 80
  rules:
    - http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1-nginx-nodeport-service
                port:
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2-nginx-nodeport-service
                port:
                  number: 80
```

## Step-06: No changes to other 3 YAML Files
- 01-Nginx-App1-Deployment-and-NodePortService.yaml
- 02-Nginx-App2-Deployment-and-NodePortService.yaml
- 03-Nginx-App3-Deployment-and-NodePortService.yaml

## Step-07: Deploy kube-manifests and verify
```t
# Deploy Kubernetes manifests
kubectl apply -f kube-manifests

# List Pods
kubectl get pods

# List Services
kubectl get svc

# List Ingress Load Balancers
kubectl get ingress

# Describe Ingress and view Rules
kubectl describe ingress ingress-external-ip
```

## Step-08: Access Application
```t
# Important Note
Wait for 2 to 3 minutes for the Load Balancer to be fully created and ready for use, else we will get HTTP 502 errors

# Access Application
http://<DNS-DOMAIN-NAME>/app1/index.html
http://<DNS-DOMAIN-NAME>/app2/index.html
http://<DNS-DOMAIN-NAME>/

# Replace Domain Name registered in Cloud DNS
http://demo1.kalyanreddydaida.com/app1/index.html
http://demo1.kalyanreddydaida.com/app2/index.html
http://demo1.kalyanreddydaida.com/
```

## Step-09: Clean Up
```t
# Delete Kubernetes Resources
kubectl delete -f kube-manifests

# Verify Load Balancer Deleted
Go to Network Services -> Load Balancing -> No Load balancers should be present
```
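The clean-up above removes the Kubernetes resources but leaves the reserved global static IP in place. If you are not reusing `gke-ingress-extip1` in later demos, it can be released as well (a sketch; releasing it breaks anything still pointing at that address, including the Cloud DNS record from Step-03):

```t
# Release the reserved global static IP (optional - only if not needed for later demos)
gcloud compute addresses delete gke-ingress-extip1 --global

# Confirm it is gone
gcloud compute addresses list
```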
---
title: GCP Google Kubernetes Engine GKE NodePort Service
description: Implement GCP Google Kubernetes Engine GKE NodePort Service
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-public-cluster-1 --region us-central1 --project kdaida123

# List GKE Kubernetes Worker Nodes
kubectl get nodes

# List GKE Kubernetes Worker Nodes with -o wide option
kubectl get nodes -o wide
Observation:
1. You should see an External-IP Address (Public IP accessible via the internet)
2. That is the key thing for testing the Kubernetes NodePort Service on the GKE Cluster
```

## Step-01: Introduction
- Implement Kubernetes NodePort Service

## Step-02: 01-kubernetes-deployment.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata: # Dictionary
  name: myapp1-deployment
spec: # Dictionary
  replicas: 2
  selector:
    matchLabels:
      app: myapp1
  template:
    metadata: # Dictionary
      name: myapp1-pod
      labels: # Dictionary
        app: myapp1 # Key-value pairs
    spec:
      containers: # List
        - name: myapp1-container
          image: stacksimplify/kubenginx:1.0.0
          ports:
            - containerPort: 80
```

## Step-03: 02-kubernetes-nodeport-service.yaml
- If you don't specify `nodePort: 30080`, a port is dynamically assigned from the range `30000-32767` (a quick way to look up the assigned port is shown at the end of this section).
```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp1-nodeport-service
spec:
  type: NodePort # clusterIP, NodePort, LoadBalancer, ExternalName
  selector:
    app: myapp1
  ports:
    - name: http
      port: 80        # Service Port
      targetPort: 80  # Container Port
      nodePort: 30080 # NodePort (Optional) (Node Port Range: 30000-32767)
```

## Step-04: Deploy Kubernetes Manifests
```t
# Deploy Kubernetes Manifests
kubectl apply -f kube-manifests

# List Deployments
kubectl get deploy

# List Pods
kubectl get po

# List Services
kubectl get svc
```

## Step-05: Access Application
```t
# List Kubernetes Worker Nodes with -o wide
kubectl get nodes -o wide
Observation:
1. Make a note of any one Node External-IP (Public IP Address)

# Access Application
http://<NODE-EXTERNAL-IP>:<NodePort>
http://104.154.52.12:30080
Observation:
1. This should fail, because no firewall rule allows the NodePort yet
```

## Step-06: Create Firewall Rule
```t
# Create Firewall Rule
gcloud compute firewall-rules create fw-rule-gke-node-port \
    --allow tcp:NODE_PORT

# Replace NODE_PORT
gcloud compute firewall-rules create fw-rule-gke-node-port \
    --allow tcp:30080

# List Firewall Rules
gcloud compute firewall-rules list
```

## Step-07: Access Application
```t
# List Kubernetes Worker Nodes with -o wide
kubectl get nodes -o wide
Observation:
1. Make a note of any one Node External-IP (Public IP Address)

# Access Application
http://<NODE-EXTERNAL-IP>:<NodePort>
http://104.154.52.12:30080
Observation:
1. This should pass
```

## Step-08: Clean-Up
```t
# Delete Kubernetes Resources
kubectl delete -f kube-manifests

# Delete NodePort Service Firewall Rule
gcloud compute firewall-rules delete fw-rule-gke-node-port

# List Firewall Rules
gcloud compute firewall-rules list
```
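Related to Step-03 and Step-05: if you let Kubernetes pick the node port instead of hard-coding `30080`, the commands below read the assigned port back from the Service. This is a small convenience sketch, not part of the course manifests; it assumes the `myapp1-nodeport-service` name from Step-03 and a node External-IP taken from `kubectl get nodes -o wide`.
```t
# Read back the nodePort assigned to the Service
kubectl get svc myapp1-nodeport-service -o jsonpath='{.spec.ports[0].nodePort}'

# Use it directly in a curl test (replace <NODE-EXTERNAL-IP> with a real node IP)
NODE_PORT=$(kubectl get svc myapp1-nodeport-service -o jsonpath='{.spec.ports[0].nodePort}')
curl http://<NODE-EXTERNAL-IP>:${NODE_PORT}/
```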
---
title: GKE Persistent Disks Existing StorageClass premium-rwo
description: Use existing storageclass premium-rwo in Kubernetes Workloads
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123
```
3. Feature: Compute Engine persistent disk CSI Driver
- Verify that the **Compute Engine persistent disk CSI Driver** feature is enabled in the GKE Cluster.
- It is required for mounting Google Compute Engine Persistent Disks to Kubernetes Workloads in the GKE Cluster.

## Step-01: Introduction
- Understand Kubernetes Objects
  01. Kubernetes PersistentVolumeClaim
  02. Kubernetes ConfigMap
  03. Kubernetes Deployment
  04. Kubernetes Volumes
  05. Kubernetes Volume Mounts
  06. Kubernetes Environment Variables
  07. Kubernetes ClusterIP Service
  08. Kubernetes Init Containers
  09. Kubernetes Service of Type LoadBalancer
  10. Kubernetes StorageClass
- Use the predefined StorageClass `premium-rwo`
- By default, dynamically provisioned PersistentVolumes use the default StorageClass and are backed by `standard hard disks`.
- If you need faster SSDs, you can use the `premium-rwo` StorageClass from the Compute Engine persistent disk CSI Driver to provision your volumes.
- This is done by setting the `storageClassName` field to `premium-rwo` in your PersistentVolumeClaim.
- The `premium-rwo` StorageClass provisions an `SSD Persistent Disk`.

## Step-02: List Kubernetes Storage Classes in GKE Cluster
```t
# List Storage Classes
kubectl get sc
```

## Step-03: 01-persistent-volume-claim.yaml
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: premium-rwo
  resources:
    requests:
      storage: 4Gi
```

## Step-04: Other Kubernetes YAML Manifests
- Apart from the PVC above, there are no changes to the other Kubernetes YAML manifests; they are the same as in the previous section.
1. 01-persistent-volume-claim.yaml
2. 02-UserManagement-ConfigMap.yaml
3. 03-mysql-deployment.yaml
4. 04-mysql-clusterip-service.yaml
5. 05-UserMgmtWebApp-Deployment.yaml
6. 06-UserMgmtWebApp-LoadBalancer-Service.yaml

## Step-05: Deploy kube-manifests
```t
# Deploy Kubernetes Manifests
kubectl apply -f kube-manifests/

# List Storage Classes
kubectl get sc

# List PVC
kubectl get pvc

# List PV
kubectl get pv

# List ConfigMaps
kubectl get configmap

# List Deployments
kubectl get deploy

# List Pods
kubectl get pods

# List Services
kubectl get svc

# Verify Pod Logs
kubectl get pods
kubectl logs -f <USERMGMT-POD-NAME>
kubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5
```

## Step-06: Verify Persistent Disks
- Go to Compute Engine -> Storage -> Disks
- Search for the `4GB` Persistent Disk
- **Observation:** You should see the disk type as **SSD persistent disk** (a CLI alternative is sketched at the end of this section)

## Step-07: Access Application
```t
# List Services
kubectl get svc

# Access Application
http://<ExternalIP-from-get-service-output>
Username: admin101
Password: password101
```

## Step-08: Clean-Up
```t
# Delete kube-manifests
kubectl delete -f kube-manifests/
```

## Reference
- [Using the Compute Engine persistent disk CSI Driver](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/gce-pd-csi-driver)
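As a CLI alternative to the console check in Step-06, the commands below list the dynamically provisioned disk and its type. This is a sketch, not part of the course steps; the `name~pvc` filter assumes GKE's convention of prefixing dynamically provisioned disk names with `pvc-`, and the `type` column is reported as a URL ending in the disk type (for example `pd-ssd`).
```t
# List dynamically provisioned Persistent Disks and their types
gcloud compute disks list --filter="name~pvc" --format="table(name,sizeGb,type)"

# Cross-check which GCE disk backs the PersistentVolume
kubectl get pv -o wide
```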
---
title: Kubernetes - Update Deployment
description: Learn and Implement Kubernetes Update Deployment
---

## Step-00: Introduction
- We can update deployments using two options
  - Set Image
  - Edit Deployment

## Step-01: Updating Application version V1 to V2 using "Set Image" Option
### Update Deployment
- **Observation:** Check the container name under `spec.template.spec.containers` in the YAML output, make a note of it, and use it as `<Container-Name>` in the `kubectl set image` command.
```t
# Get Container Name from current deployment
kubectl get deployment my-first-deployment -o yaml

# Update Deployment - SHOULD WORK NOW
kubectl set image deployment/<Deployment-Name> <Container-Name>=<Container-Image>
kubectl set image deployment/my-first-deployment kubenginx=stacksimplify/kubenginx:2.0.0
```

### Verify Rollout Status (Deployment Status)
- **Observation:** By default, the rollout happens in a rolling update model, so there is no downtime.
```t
# Verify Rollout Status
kubectl rollout status deployment/my-first-deployment

# Verify Deployment
kubectl get deploy
```

### Describe Deployment
- **Observation:**
  - Verify the Events and understand that Kubernetes by default does a "Rolling Update" for new application releases.
  - With that said, we will not have downtime for our application.
```t
# Describe Deployment
kubectl describe deployment my-first-deployment
```

### Verify ReplicaSet
- **Observation:** A new ReplicaSet is created for the new version
```t
# Verify ReplicaSet
kubectl get rs
```

### Verify Pods
- **Observation:** The pod-template-hash label of the new ReplicaSet should be present on the Pods, letting us know these Pods belong to the new ReplicaSet.
```t
# List Pods
kubectl get po
```

### Access the Application using Public IP
- We should see `Application Version:V2` whenever we access the application in the browser
```t
# Get Load Balancer IP
kubectl get svc

# Application URL
http://<External-IP-from-get-service-output>
```

### Update Change-Cause for the Kubernetes Deployment - Rollout History
- **Observation:** We have the rollout history, so we can switch back to older revisions using the revision history available to us (a rollback sketch is included at the end of this section).
```t
# Verify Rollout History
kubectl rollout history deployment/my-first-deployment

# Update REVISION CHANGE-CAUSE
kubectl annotate deployment/my-first-deployment kubernetes.io/change-cause="Deployment UPDATE - App Version 2.0.0 - SET IMAGE OPTION"

# Verify Rollout History
kubectl rollout history deployment/my-first-deployment
```

## Step-02: Update the Application from V2 to V3 using "Edit Deployment" Option
### Edit Deployment
```t
# Edit Deployment
kubectl edit deployment/<Deployment-Name>
kubectl edit deployment/my-first-deployment
```
```yaml
# Change From 2.0.0
    spec:
      containers:
      - image: stacksimplify/kubenginx:2.0.0

# Change To 3.0.0
    spec:
      containers:
      - image: stacksimplify/kubenginx:3.0.0
```

### Verify Rollout Status
- **Observation:** The rollout happens in a rolling update model, so there is no downtime.
```t
# Verify Rollout Status
kubectl rollout status deployment/my-first-deployment

# Describe Deployment
kubectl describe deployment/my-first-deployment
```

### Verify ReplicaSets
- **Observation:** We should see 3 ReplicaSets now, as we have updated our application to the 3rd version 3.0.0
```t
# Verify ReplicaSet and Pods
kubectl get rs
kubectl get po
```

### Access the Application using Public IP
- We should see `Application Version:V3` whenever we access the application in the browser
```t
# Get Load Balancer IP
kubectl get svc

# Application URL
http://<External-IP-from-get-service-output>
```

### Update Change-Cause for the Kubernetes Deployment - Rollout History
- **Observation:** We have the rollout history, so we can switch back to older revisions using the revision history available to us.
```t
# Verify Rollout History
kubectl rollout history deployment/my-first-deployment

# Update REVISION CHANGE-CAUSE
kubectl annotate deployment/my-first-deployment kubernetes.io/change-cause="Deployment UPDATE - App Version 3.0.0 - EDIT DEPLOYMENT OPTION"

# Verify Rollout History
kubectl rollout history deployment/my-first-deployment
```
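The rollout history recorded above can also be used to roll back. The commands below are a short sketch of that flow using the standard `kubectl rollout` subcommands; they are not part of the numbered steps in this section.
```t
# Inspect a specific revision recorded in the history
kubectl rollout history deployment/my-first-deployment --revision=1

# Roll back to the previous revision
kubectl rollout undo deployment/my-first-deployment

# Or roll back to a specific revision number from the history output
kubectl rollout undo deployment/my-first-deployment --to-revision=1

# Watch the rollback complete
kubectl rollout status deployment/my-first-deployment
```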
---
title: GKE Storage with GCP File Store - Custom StorageClass
description: Use GCP File Store for GKE Workloads with Custom StorageClass
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123
```

## Step-01: Introduction
- GKE Storage with GCP File Store - Custom StorageClass

## Step-02: Enable Filestore CSI driver (If not enabled)
- Go to Kubernetes Engine -> standard-cluster-private-1 -> Details -> Features -> Filestore CSI driver
- Click on the checkbox **Enable Filestore CSI Driver**
- Click on **SAVE CHANGES**

## Step-03: Verify if Filestore CSI Driver enabled
```t
# Verify Filestore CSI Daemonset in kube-system namespace
kubectl -n kube-system get ds | grep file
Observation:
1. You should find the Daemonset with name "filestore-node"

# Verify Filestore CSI Daemonset pods in kube-system namespace
kubectl -n kube-system get pods | grep file
Observation:
1. You should find the pods with name "filestore-node-*"
```

## Step-04: Existing Storage Class
- After you enable the Filestore CSI driver, GKE automatically installs the following StorageClasses for provisioning Filestore instances:
  - **standard-rwx:** using the Basic HDD Filestore service tier
  - **premium-rwx:** using the Basic SSD Filestore service tier
```t
# Default Storage Classes created as part of FileStore CSI enablement
kubectl get sc
Observation: Below two storage classes will be created by default
standard-rwx
premium-rwx
```

## Step-05: 00-filestore-storage-class.yaml
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: filestore-storage-class
provisioner: filestore.csi.storage.gke.io # File Store CSI Driver
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  tier: standard # Allowed values: standard, premium, or enterprise
  network: default # The network parameter can be used when provisioning Filestore instances on non-default VPCs. Non-default VPCs require special firewall rules to be set up.
```

## Step-06: Other YAML files are same as previous section
- The other YAML files are the same as in the previous section; to consume the custom class created in Step-05, the PVC's `storageClassName` must reference `filestore-storage-class` (a sketch of that PVC follows at the end of this section)
  - 01-filestore-pvc.yaml
  - 02-write-to-filestore-pod.yaml
  - 03-myapp1-deployment.yaml
  - 04-loadBalancer-service.yaml

## Step-07: Deploy kube-manifests
```t
# Deploy kube-manifests
kubectl apply -f kube-manifests/

# List Storage Class
kubectl get sc

# List PVC
kubectl get pvc

# List PV
kubectl get pv

# List Pods
kubectl get pods
```

## Step-08: Verify GCP Cloud FileStore Instance
- Go to FileStore -> Instances
- Click on **Instance ID: pvc-27cd5c27-0ed0-48d1-bc5f-925adfb8495f**
- **Note:** The Instance ID is dynamically generated; it will be different in your case, starting with pvc-*

## Step-09: Connect to filestore write app Kubernetes pods and Verify
```t
# FileStore write app - Connect to Kubernetes Pod
kubectl exec --stdin --tty <POD-NAME> -- /bin/sh
kubectl exec --stdin --tty filestore-writer-app -- /bin/sh
cd /data
ls
tail -f myapp1.txt
exit
```

## Step-10: Connect to myapp1 Kubernetes pods and Verify
```t
# List Pods
kubectl get pods

# myapp1 POD1 - Connect to Kubernetes Pod
kubectl exec --stdin --tty <POD-NAME> -- /bin/sh
kubectl exec --stdin --tty myapp1-deployment-5d469f6478-2kp97 -- /bin/sh
cd /usr/share/nginx/html/filestore
ls
tail -f myapp1.txt
exit

# myapp1 POD2 - Connect to Kubernetes Pod
kubectl exec --stdin --tty <POD-NAME> -- /bin/sh
kubectl exec --stdin --tty myapp1-deployment-5d469f6478-2kp97 -- /bin/sh
cd /usr/share/nginx/html/filestore
ls
tail -f myapp1.txt
exit
```

## Step-11: Access Application
```t
# List Services
kubectl get svc

# Access Application
http://<EXTERNAL-IP-OF-GET-SERVICE-OUTPUT>/filestore/myapp1.txt
http://35.232.145.61/filestore/myapp1.txt
curl http://35.232.145.61/filestore/myapp1.txt
```

## Step-12: Clean-Up
```t
# Delete Kubernetes Objects
kubectl delete -f kube-manifests/

# Verify if FileStore Instance is deleted
Go to -> FileStore -> Instances
```
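The PVC below is a minimal sketch of how 01-filestore-pvc.yaml would consume the custom class from Step-05. It reuses the `gke-filestore-pvc` name and the 1Ti request used elsewhere in these Filestore sections, with `storageClassName` switched to `filestore-storage-class`; treat the exact values as assumptions rather than the course's verbatim manifest.
```yaml
# Sketch: PVC bound to the custom Filestore StorageClass (assumed values)
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gke-filestore-pvc
spec:
  accessModes:
    - ReadWriteMany                             # Filestore supports shared read-write access
  storageClassName: filestore-storage-class     # Custom class defined in Step-05
  resources:
    requests:
      storage: 1Ti                              # Basic tier Filestore instances start at 1 TiB
```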
---
title: GKE Storage with GCP File Store - Default StorageClass
description: Use GCP File Store for GKE Workloads with Default StorageClass
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123
```

## Step-01: Introduction
- GKE Storage with GCP File Store - Default StorageClass

## Step-02: Enable Filestore CSI driver (If not enabled)
- Go to Kubernetes Engine -> standard-cluster-private-1 -> Details -> Features -> Filestore CSI driver
- Click on the checkbox **Enable Filestore CSI Driver**
- Click on **SAVE CHANGES**

## Step-03: Verify if Filestore CSI Driver enabled
```t
# Verify Filestore CSI Daemonset in kube-system namespace
kubectl -n kube-system get ds | grep file
Observation:
1. You should find the Daemonset with name "filestore-node"

# Verify Filestore CSI Daemonset pods in kube-system namespace
kubectl -n kube-system get pods | grep file
Observation:
1. You should find the pods with name "filestore-node-*"
```

## Step-04: Existing Storage Class
- After you enable the Filestore CSI driver, GKE automatically installs the following StorageClasses for provisioning Filestore instances:
  - **standard-rwx:** using the Basic HDD Filestore service tier
  - **premium-rwx:** using the Basic SSD Filestore service tier
  - **enterprise-rwx**
  - **enterprise-multishare-rwx**
```t
# Default Storage Classes created as part of FileStore CSI enablement
kubectl get sc
Observation: Below four storage classes will be created by default
standard-rwx
premium-rwx
enterprise-rwx
enterprise-multishare-rwx
```

## Step-05: 01-filestore-pvc.yaml
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gke-filestore-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard-rwx
  resources:
    requests:
      storage: 1Ti
```

## Step-06: 02-write-to-filestore-pod.yaml
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: filestore-writer-app
spec:
  containers:
    - name: app
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo GCP File Store used as PV in GKE $(date -u) >> /data/myapp1.txt; sleep 5; done"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: gke-filestore-pvc
```

## Step-07: 03-myapp1-deployment.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata: # Dictionary
  name: myapp1-deployment
spec: # Dictionary
  replicas: 2
  selector:
    matchLabels:
      app: myapp1
  template:
    metadata: # Dictionary
      name: myapp1-pod
      labels: # Dictionary
        app: myapp1 # Key-value pairs
    spec:
      containers: # List
        - name: myapp1-container
          image: stacksimplify/kubenginx:1.0.0
          ports:
            - containerPort: 80
          volumeMounts:
            - name: persistent-storage
              mountPath: /usr/share/nginx/html/filestore
      volumes:
        - name: persistent-storage
          persistentVolumeClaim:
            claimName: gke-filestore-pvc
```

## Step-08: 04-loadBalancer-service.yaml
```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp1-lb-service
spec:
  type: LoadBalancer # ClusterIP, NodePort
  selector:
    app: myapp1
  ports:
    - name: http
      port: 80       # Service Port
      targetPort: 80 # Container Port
```

## Step-09: Enable Cloud FileStore API (if not enabled)
- Go to Search -> FileStore -> ENABLE

## Step-10: Deploy kube-manifests
```t
# Deploy kube-manifests
kubectl apply -f kube-manifests/

# List Storage Class
kubectl get sc

# List PVC
kubectl get pvc

# List PV
kubectl get pv

# List Pods
kubectl get pods
```

## Step-11: Verify GCP Cloud FileStore Instance
- Go to FileStore -> Instances
- Click on **Instance ID: pvc-27cd5c27-0ed0-48d1-bc5f-925adfb8495f**
- **Note:** The Instance ID is dynamically generated; it will be different in your case, starting with pvc-*

## Step-12: Connect to filestore write app Kubernetes pods and Verify
```t
# FileStore write app - Connect to Kubernetes Pod
kubectl exec --stdin --tty <POD-NAME> -- /bin/sh
kubectl exec --stdin --tty filestore-writer-app -- /bin/sh
cd /data
ls
tail -f myapp1.txt
exit
```

## Step-13: Connect to myapp1 Kubernetes pods and Verify
```t
# List Pods
kubectl get pods

# myapp1 POD1 - Connect to Kubernetes Pod
kubectl exec --stdin --tty <POD-NAME> -- /bin/sh
kubectl exec --stdin --tty myapp1-deployment-5d469f6478-2kp97 -- /bin/sh
cd /usr/share/nginx/html/filestore
ls
tail -f myapp1.txt
exit

# myapp1 POD2 - Connect to Kubernetes Pod
kubectl exec --stdin --tty <POD-NAME> -- /bin/sh
kubectl exec --stdin --tty myapp1-deployment-5d469f6478-2kp97 -- /bin/sh
cd /usr/share/nginx/html/filestore
ls
tail -f myapp1.txt
exit
```

## Step-14: Access Application
```t
# List Services
kubectl get svc

# Access Application
http://<EXTERNAL-IP-OF-GET-SERVICE-OUTPUT>/filestore/myapp1.txt
http://35.232.145.61/filestore/myapp1.txt
curl http://35.232.145.61/filestore/myapp1.txt
```

## Step-15: Clean-Up
```t
# Delete Kubernetes Objects
kubectl delete -f kube-manifests/

# Verify if FileStore Instance is deleted
Go to -> FileStore -> Instances
```
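As a CLI complement to the console verification of the Filestore instance above, the commands below confirm the PVC is bound and list the Filestore instance backing it. This is a sketch, not part of the numbered steps; it assumes the Cloud Filestore API is enabled and gcloud is authenticated against the same project.
```t
# Confirm the claim bound to a dynamically provisioned volume
kubectl get pvc gke-filestore-pvc
kubectl get pv

# List Filestore instances in the project (instance names start with pvc-)
gcloud filestore instances list
```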
---
title: GCP Google Kubernetes Engine Ingress Internal Load Balancer
description: Implement GCP Google Kubernetes Engine GKE Internal Load Balancer with Ingress
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123

# List Kubernetes Nodes
kubectl get nodes
```

## Step-01: Introduction
- Ingress Internal Load Balancer

## Step-02: Review Kubernetes Deployment manifests
- 01-Nginx-App1-Deployment-and-NodePortService.yaml
- 02-Nginx-App2-Deployment-and-NodePortService.yaml
- 03-Nginx-App3-Deployment-and-NodePortService.yaml

## Step-03: 04-Ingress-internal-lb.yaml
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-internal-lb
  annotations:
    # If the class annotation is not specified it defaults to "gce".
    # gce: external load balancer
    # gce-internal: internal load balancer
    # Internal Load Balancer
    kubernetes.io/ingress.class: "gce-internal"
spec:
  defaultBackend:
    service:
      name: app3-nginx-nodeport-service
      port:
        number: 80
  rules:
    - http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1-nginx-nodeport-service
                port:
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2-nginx-nodeport-service
                port:
                  number: 80
```

## Step-04: Deploy Kubernetes Manifests
```t
# Deploy Kubernetes Manifests
kubectl apply -f 01-kube-manifests

# List Deployments
kubectl get deploy

# List Pods
kubectl get po

# List Services
kubectl get svc

# List Backend Configs
kubectl get backendconfig

# List Ingress Service
kubectl get ingress

# Describe Ingress Service
kubectl describe ingress ingress-internal-lb

# Verify Load Balancer
Go to Network Services -> Load Balancing -> Load Balancer
```

## Step-05: Review Curl Kubernetes Manifests
- **Project Folder:** 02-kube-manifests-curl
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: curl-pod
spec:
  containers:
    - name: curl
      image: curlimages/curl
      command: [ "sleep", "600" ]
```

## Step-06: Deploy Curl Pod and Verify Internal LB
- The internal Ingress IP used below can be read straight from the Ingress status (a one-liner is shown at the end of this section).
```t
# Deploy curl-pod
kubectl apply -f 02-kube-manifests-curl

# Will open up a terminal session into the container
kubectl exec -it curl-pod -- sh

# App1 Curl Test
curl http://<INTERNAL-INGRESS-LB-IP>/app1/index.html

# App2 Curl Test
curl http://<INTERNAL-INGRESS-LB-IP>/app2/index.html

# App3 Curl Test
curl http://<INTERNAL-INGRESS-LB-IP>
```

## Step-07: Clean-Up
```t
# Delete Kubernetes Manifests
kubectl delete -f 01-kube-manifests
kubectl delete -f 02-kube-manifests-curl
```

## References
- [Ingress for Internal HTTP(S) Load Balancing](https://cloud.google.com/kubernetes-engine/docs/concepts/ingress-ilb)
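For Step-06, the placeholder `<INTERNAL-INGRESS-LB-IP>` can be read from the Ingress status once GKE has provisioned the internal load balancer. The snippet below is a small convenience sketch using standard kubectl output options; it is not part of the course manifests.
```t
# Read the internal load balancer IP assigned to the Ingress
INGRESS_IP=$(kubectl get ingress ingress-internal-lb -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo ${INGRESS_IP}

# Example: run the App1 test from inside curl-pod using that IP
kubectl exec -it curl-pod -- curl http://${INGRESS_IP}/app1/index.html
```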
---
title: gcloud cli install on WindowsOS
description: Learn to install gcloud cli on WindowsOS
---

## Step-01: Introduction
- Install gcloud CLI on WindowsOS
- Configure kubeconfig for kubectl on your local terminal
- Verify if you are able to reach the GKE Cluster using kubectl from your local terminal
- Fix the kubectl version to match the GKE Cluster Server Version

## Step-02: Install gcloud cli on WindowsOS
- [Install gcloud cli on WindowsOS](https://cloud.google.com/sdk/docs/install-sdk#windows)
```t
## Important Note: Download the latest version available on that respective day
Download Link: https://cloud.google.com/sdk/docs/install-sdk#windows

## Run the Installer
GoogleCloudSDKInstaller.exe
```

## Step-03: Verify gcloud cli version
```t
# gcloud cli version
gcloud version
```

## Step-04: Initialize gcloud CLI in local Terminal
```t
# Initialize gcloud CLI
gcloud init

# List accounts whose credentials are stored on the local system
gcloud auth list

# List the properties in your active gcloud CLI configuration
gcloud config list

# View information about your gcloud CLI installation and the active configuration
gcloud info

# gcloud config Configurations Commands (For Reference)
gcloud config list
gcloud config configurations list
gcloud config configurations activate
gcloud config configurations create
gcloud config configurations delete
gcloud config configurations describe
gcloud config configurations rename
```

## Step-05: Verify gke-gcloud-auth-plugin
```t
## Important Note about gke-gcloud-auth-plugin:
1. Kubernetes clients require an authentication plugin, gke-gcloud-auth-plugin, which uses the Client-go Credential Plugins framework to provide authentication tokens to communicate with GKE clusters

# Verify if gke-gcloud-auth-plugin is installed
gke-gcloud-auth-plugin --version

# Install gke-gcloud-auth-plugin
gcloud components install gke-gcloud-auth-plugin

# Verify if gke-gcloud-auth-plugin is installed
gke-gcloud-auth-plugin --version
```

## Step-06: Remove any existing kubectl clients
```t
# Verify kubectl version
kubectl version --output=yaml
Observation:
1. If any kubectl exists before installing it from gcloud, uninstall it.
2. Usually, if Docker is installed on our desktop, its bundled kubectl is installed and placed on the PATH. If it exists, please remove it.
```

## Step-07: Install kubectl client from gcloud CLI
```t
# List gcloud components
gcloud components list

## SAMPLE OUTPUT
Status: Not Installed
Name: kubectl
ID: kubectl
Size: < 1 MiB

# Install kubectl client
gcloud components install kubectl

# Verify kubectl version
kubectl version --output=yaml
```

## Step-08: Configure kubeconfig for kubectl in local desktop terminal
```t
# Verify kubeconfig file
kubectl config view

# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <GKE-CLUSTER-NAME> --region <REGION> --project <PROJECT>
gcloud container clusters get-credentials standard-public-cluster-1 --region us-central1 --project kdaida123

# Verify kubeconfig file
kubectl config view

# Verify Kubernetes Worker Nodes
kubectl get nodes
Observation:
1. It should throw a warning at the end about the large difference between the kubectl client version and the GKE Cluster Server Version
2. Let's fix that in the next step
```

## Step-09: Fix kubectl client version to match the GKE Cluster version
- **Important Note:** You must use a kubectl version that is within one minor version of your Kubernetes cluster control plane.
- For example, a 1.24 kubectl client works with Kubernetes 1.23, 1.24 and 1.25 clusters.
- As our GKE cluster version is 1.26, we will also upgrade our kubectl to 1.26 (a quick way to read the cluster's control plane version is sketched at the end of this section).
```t
# Verify kubectl version
kubectl version --output=yaml

# Change Directory
Go to the Google Cloud SDK "bin" directory

# Backup existing kubectl
Backup "kubectl" to "kubectl_bkup_1.24"

# Copy latest kubectl
COPY "kubectl.1.26" as "kubectl"

# Verify kubectl version
kubectl version --output=yaml
```

## References
- [gcloud CLI](https://cloud.google.com/sdk/gcloud)
- [Install the Google Cloud CLI](https://cloud.google.com/sdk/docs/install-sdk#windows)
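To decide which kubectl build to keep in Step-09, you can read the cluster's control plane version directly instead of relying on the warning from Step-08. The snippet below is a sketch using the cluster name, region, and project assumed earlier on this page, and the `currentMasterVersion` field of the cluster describe output.
```t
# Read the GKE control plane version (values assumed from Step-08)
gcloud container clusters describe standard-public-cluster-1 \
    --region us-central1 --project kdaida123 \
    --format="value(currentMasterVersion)"

# Compare with the local kubectl client version
kubectl version --client --output=yaml
```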
---
title: GCP Google Kubernetes Engine Kubernetes Namespaces Imperative
description: Implement GCP Google Kubernetes Engine Kubernetes Namespaces Imperative
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123

# List Kubernetes Nodes
kubectl get nodes
```

## Step-01: Introduction
- Namespaces allow us to split resources into different groups.
- Resource names must be unique within a namespace.
- We can use namespaces to create multiple environments like dev, staging and production.
- Kubernetes always lists resources from the `default` namespace unless we explicitly specify which namespace we want information from.

## Step-02: Namespaces Imperative - Create dev Namespace
### Step-02-01: Create Namespace
```t
# List Namespaces
kubectl get ns

# Create Namespace
kubectl create namespace <namespace-name>
kubectl create namespace dev

# List Namespaces
kubectl get ns
```

### Step-02-02: Deploy All k8s Objects
```t
# Deploy All k8s Objects
kubectl apply -f 01-kube-manifests-imperative/ -n dev

# List Namespaces
kubectl get ns

# List Deployments from dev Namespace
kubectl get deploy -n dev

# List Pods from dev Namespace
kubectl get pods -n dev

# List Services from dev Namespace
kubectl get svc -n dev

# List all objects from dev Namespace
kubectl get all -n dev

# Access Application
http://<LB-Service-External-IP>/
```

## Step-03: Namespace Declarative - Create qa Namespace
### Step-03-01: Namespace Kubernetes YAML Manifest
- **File Name:** 00-kubernetes-namespace.yaml
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: qa
```

### Step-03-02: Update Namespace in Deployment and Service YAML Manifest
- We are going to add `namespace: qa` to the `metadata` section of the Deployment and Service
```yaml
# Deployment YAML Manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp1-deployment
  namespace: qa
spec:

# Service YAML Manifest
apiVersion: v1
kind: Service
metadata:
  name: myapp1-lb-service
  namespace: qa
spec:
```

### Step-03-03: Deploy Kubernetes Manifests
```t
# Deploy Kubernetes Manifests
kubectl apply -f 02-kube-manifests-declarative

# List Namespaces
kubectl get ns

# List Deployments from qa Namespace
kubectl get deploy -n qa

# List Pods from qa Namespace
kubectl get pods -n qa

# List Services from qa Namespace
kubectl get svc -n qa

# List all objects from qa Namespace
kubectl get all -n qa

# Access Application
http://<LB-Service-External-IP>/
```

## Step-04: Clean-Up Resources
- If we delete a Namespace, all resources associated with that namespace get deleted.
```t
# Delete dev Namespace
kubectl delete ns dev

# List Namespaces
kubectl get ns
Observation:
1. dev namespace should not be present

# Verify Pods from dev Namespace
kubectl get pods -n dev
Observation: We should not find any pods because the namespace itself doesn't exist

# Delete qa Namespace Resources (only)
kubectl delete -f 02-kube-manifests-declarative

# List Namespaces
kubectl get ns

# Delete qa Namespace
kubectl delete ns qa

# List Namespaces
kubectl get ns
```

## References:
- https://kubernetes.io/docs/tasks/administer-cluster/namespaces-walkthrough
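If you find yourself appending `-n dev` to every command in Step-02, kubectl can also switch the default namespace for the current context. This is a small convenience sketch using standard `kubectl config` subcommands; it is not part of the numbered steps above.
```t
# Make "dev" the default namespace for the current kubectl context
kubectl config set-context --current --namespace=dev

# Verify which namespace the current context now uses
kubectl config view --minify | grep namespace:

# Switch back to the default namespace when done
kubectl config set-context --current --namespace=default
```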
--- title: GCP Google Kubernetes Engine Ingress Custom Health Check description: Implement GCP Google Kubernetes Engine GKE Ingress Custom Health Checks using Readiness Probes --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT> # Replace Values CLUSTER-NAME, ZONE, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 ``` ## Step-01: Introduction - Ingress Context Path based Routing - Ingress Custom Health Checks for each application using Kubernetes Readiness Probes - **App1 Health Check Path:** /app1/index.html - **App2 Health Check Path:** /app2/index.html - **App3 Health Check Path:** /index.html ## Step-02: 01-Nginx-App1-Deployment-and-NodePortService.yaml ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: app1-nginx-deployment labels: app: app1-nginx spec: replicas: 1 selector: matchLabels: app: app1-nginx template: metadata: labels: app: app1-nginx spec: containers: - name: app1-nginx image: stacksimplify/kube-nginxapp1:1.0.0 ports: - containerPort: 80 # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc) readinessProbe: httpGet: scheme: HTTP path: /app1/index.html port: 80 initialDelaySeconds: 10 periodSeconds: 15 timeoutSeconds: 5 ``` ## Step-03: 02-Nginx-App2-Deployment-and-NodePortService.yaml ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: app2-nginx-deployment labels: app: app2-nginx spec: replicas: 1 selector: matchLabels: app: app2-nginx template: metadata: labels: app: app2-nginx spec: containers: - name: app2-nginx image: stacksimplify/kube-nginxapp2:1.0.0 ports: - containerPort: 80 # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc) readinessProbe: httpGet: scheme: HTTP path: /app2/index.html port: 80 initialDelaySeconds: 10 periodSeconds: 15 timeoutSeconds: 5 ``` ## Step-04: 03-Nginx-App3-Deployment-and-NodePortService.yaml ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: app3-nginx-deployment labels: app: app3-nginx spec: replicas: 1 selector: matchLabels: app: app3-nginx template: metadata: labels: app: app3-nginx spec: containers: - name: app3-nginx image: stacksimplify/kubenginx:1.0.0 ports: - containerPort: 80 # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc) readinessProbe: httpGet: scheme: HTTP path: /index.html port: 80 initialDelaySeconds: 10 periodSeconds: 15 timeoutSeconds: 5 ``` ## Step-05: 04-Ingress-Custom-Healthcheck.yaml - NO CHANGES FROM CONTEXT PATH ROUTING DEMO other than Ingress Service name `ingress-custom-healthcheck` ```yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-custom-healthcheck annotations: # External Load Balancer kubernetes.io/ingress.class: "gce" spec: defaultBackend: service: name: app3-nginx-nodeport-service port: number: 80 rules: - http: paths: - path: /app1 pathType: Prefix backend: service: name: app1-nginx-nodeport-service port: number: 80 - path: /app2 pathType: Prefix backend: service: name: app2-nginx-nodeport-service port: number: 80 # - path: / # pathType: Prefix # backend: # service: # name: app3-nginx-nodeport-service # port: # number: 80 ``` ## Step-06: Deploy kube-manifests and verify ```t # Deploy Kubernetes manifests kubectl apply -f kube-manifests 
# List Pods kubectl get pods # List Services kubectl get svc # List Ingress Load Balancers kubectl get ingress # Describe Ingress and view Rules kubectl describe ingress ingress-custom-healthcheck ``` ## Step-07: Verify Health Checks - Go to Load Balancing -> Click on LB - DETAILS TAB - Backend services -> First Backend -> Click on Health Check Link - OR - Go to Compute Engine -> Instance Groups -> Health Checks - Review all 3 Health Checks and their Paths - **App1 Health Check Path:** /app1/index.html - **App2 Health Check Path:** /app2/index.html - **App3 Health Check Path:** /index.html ## Step-08: Access Application ```t # Important Note Wait for 2 to 3 minutes for the Load Balancer to completely create and ready for use else we will get HTTP 502 errors # Access Application http://<ADDRESS-FIELD-FROM-GET-INGRESS-OUTPUT>/app1/index.html http://<ADDRESS-FIELD-FROM-GET-INGRESS-OUTPUT>/app2/index.html http://<ADDRESS-FIELD-FROM-GET-INGRESS-OUTPUT>/ ``` ## Step-09: Clean Up ```t # Delete Kubernetes Resources kubectl delete -f kube-manifests # Verify Load Balancer Deleted Go to Network Services -> Load Balancing -> No Load balancers should be present ``` ## References - [GKE Ingress Healthchecks](https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#health_checks)
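- **Optional reference (not part of the listings above):** The NodePort Service half of each `*-Deployment-and-NodePortService.yaml` file is not reproduced in this page. A minimal sketch of what the App1 Service could look like, assuming the Service name used in the Ingress (`app1-nginx-nodeport-service`) and plain HTTP on port 80; GKE derives the backend health check for this Service from the serving Pod's readiness probe:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: app1-nginx-nodeport-service   # name referenced by the Ingress /app1 rule
  labels:
    app: app1-nginx
spec:
  type: NodePort
  selector:
    app: app1-nginx      # matches the app1-nginx Deployment Pod labels
  ports:
    - port: 80
      targetPort: 80     # the readiness probe path /app1/index.html is served here
```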
--- title: GKE Persistent Disks Preexisting PD description: Use Google Disks Preexisting PD for Kubernetes Workloads --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT> # Replace Values CLUSTER-NAME, ZONE, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 ``` 3. Feature: Compute Engine persistent disk CSI Driver - Verify the Feature **Compute Engine persistent disk CSI Driver** enabled in GKE Cluster. - This is required for mounting the Google Compute Engine Persistent Disks to Kubernetes Workloads in GKE Cluster. ## Step-01: Introduction - Use the **pre-existing Persistent Disk** created in previous demo. - As part of this demo, we are going to provision the **Persistent Volume (PV)** manually. We call this as Static Provisioning. ## Step-02: List Kubernetes Storage Classes in GKE Cluster ```t # List Storage Classes kubectl get sc ``` ## Step-03: 00-persistent-volume.yaml ```yaml apiVersion: v1 kind: PersistentVolume metadata: name: preexisting-pd spec: storageClassName: standard-rwo capacity: storage: 8Gi accessModes: - ReadWriteOnce claimRef: namespace: default name: mysql-pv-claim gcePersistentDisk: pdName: preexisting-pd fsType: ext4 ``` ## Step-04: 01-persistent-volume-claim.yaml ```yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mysql-pv-claim spec: accessModes: - ReadWriteOnce storageClassName: standard-rwo resources: requests: storage: 8Gi ``` ## Step-05: Other Kubernetes YAML Manifests - No changes to other Kubernetes YAML Manifests - They are same as previous section - 02-UserManagement-ConfigMap.yaml - 03-mysql-deployment.yaml - 04-mysql-clusterip-service.yaml - 05-UserMgmtWebApp-Deployment.yaml - 06-UserMgmtWebApp-LoadBalancer-Service.yaml ## Step-06: Deploy kube-manifests ```t # Deploy Kubernetes Manifests kubectl apply -f kube-manifests/ # List Storage Class kubectl get sc # List PVC kubectl get pvc # List PV kubectl get pv # List ConfigMaps kubectl get configmap # List Deployments kubectl get deploy # List Pods kubectl get pods # List Services kubectl get svc # Verify Pod Logs kubectl get pods kubectl logs -f <USERMGMT-POD-NAME> kubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5 ``` ## Step-07: Verify Persistent Disks - Go to Compute Engine -> Storage -> Disks - Search for `8GB` Persistent Disk - **Observation:** You should see the disk type **In Use By** updated and bound to **gke-standard-cluster-1-default-pool-db7b638f-j5lk** ## Step-08: Access Application ```t # List Services kubectl get svc # Access Application http://<ExternalIP-from-get-service-output> Username: admin101 Password: password101 Observation: 1. You should see admin102 already present. 2. This is because in previous demo, we already created admin102 and that data disk we have mounted here using "Static Provisioning PV" concept. ``` ## Step-09: Clean-Up ```t # Delete Kubernetes Objects kubectl delete -f kube-manifests/ # List PVC kubectl get pvc # List PV kubectl get pv # Delete Persistent Disk: preexisting-pd 1. "preexisting-pd" will not get deleted automatically 2. We should manually delete it 3. We should observe that its "In Use By" field is empty (Not associated to anything) 4. Go to Compute Engine -> Disks -> preexisting-pd -> DELETE ```
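- **Optional reference (assumption, not part of the course manifests):** The PersistentVolume above uses the in-tree `gcePersistentDisk` source. Because the cluster has the Compute Engine persistent disk CSI Driver enabled, a statically provisioned PV could also reference the same disk through the CSI driver; the project ID and zone in `volumeHandle` below are placeholders to replace with your own values:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: preexisting-pd-csi            # hypothetical name for this CSI variant
spec:
  storageClassName: standard-rwo
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  claimRef:
    namespace: default
    name: mysql-pv-claim
  csi:
    driver: pd.csi.storage.gke.io
    # volumeHandle format: projects/<PROJECT>/zones/<ZONE>/disks/<DISK-NAME>
    volumeHandle: projects/kdaida123/zones/us-central1-c/disks/preexisting-pd
    fsType: ext4
```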
---
title: GKE Storage with GCP Cloud SQL - Without ExternalName Service
description: Use GCP Cloud SQL MySQL DB for GKE Workloads without ExternalName Service
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123
```

## Step-01: Introduction
- GKE Private Cluster
- GCP Cloud SQL with Private IP
- [GKE is a Fully Integrated Network Model](https://cloud.google.com/architecture/gke-compare-network-models)
- Because GKE uses a fully integrated network model, we can connect directly to the Private or Public IP of Cloud SQL from the GKE Cluster, without an ExternalName Service.
- We are going to update the UMS Kubernetes Deployment `DB_HOSTNAME` with the `Cloud SQL Private IP`, and it should work without any issues.

## Step-02: 03-UserMgmtWebApp-Deployment.yaml
- **Change-1:** Update Cloud SQL IP Address in `command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL Server deployment"; while ! nc -z 10.64.18.3 3306; do sleep 1; printf "-"; done; echo -e " >> MySQL DB Server has started";']`
- **Change-2:** Update Cloud SQL IP Address for `DB_HOSTNAME` value `value: 10.64.18.3`
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: usermgmt-webapp
  labels:
    app: usermgmt-webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: usermgmt-webapp
  template:
    metadata:
      labels:
        app: usermgmt-webapp
    spec:
      initContainers:
        - name: init-db
          image: busybox:1.31
          #command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL Server deployment"; while ! nc -z mysql-externalname-service 3306; do sleep 1; printf "-"; done; echo -e " >> MySQL DB Server has started";']
          command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL Server deployment"; while ! nc -z 10.64.18.3 3306; do sleep 1; printf "-"; done; echo -e " >> MySQL DB Server has started";']
      containers:
        - name: usermgmt-webapp
          image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          env:
            - name: DB_HOSTNAME
              #value: "mysql-externalname-service"
              value: 10.64.18.3
            - name: DB_PORT
              value: "3306"
            - name: DB_NAME
              value: "webappdb"
            - name: DB_USERNAME
              value: "root"
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-db-password
                  key: db-password
```

## Step-03: Deploy kube-manifests
```t
# Deploy Kubernetes Manifests
kubectl apply -f kube-manifests/

# List Deployments
kubectl get deploy

# List Pods
kubectl get pods

# List Services
kubectl get svc

# Verify Pod Logs
kubectl get pods
kubectl logs -f <USERMGMT-POD-NAME>
kubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5
```

## Step-04: Access Application
```t
# List Services
kubectl get svc

# Access Application
http://<ExternalIP-from-get-service-output>
Username: admin101
Password: password101
```

## Step-05: Clean-Up
```t
# Delete Kubernetes Objects
kubectl delete -f kube-manifests/

# Delete Cloud SQL MySQL Instance
1. Go to SQL -> ums-db-private-instance -> DELETE
2. Instance ID: ums-db-private-instance
3. Click on DELETE
```

## References
- [Private Service Access with MySQL](https://cloud.google.com/sql/docs/mysql/configure-private-services-access#console)
- [Private Service Access](https://cloud.google.com/vpc/docs/private-services-access)
- [VPC Network Peering Limits](https://cloud.google.com/vpc/docs/quota#vpc-peering)
- [Configuring Private Service Access](https://cloud.google.com/vpc/docs/configure-private-services-access)
- [Additional Reference Only - Enabling private services access](https://cloud.google.com/service-infrastructure/docs/enabling-private-services-access)
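- **Optional variation (hypothetical, not part of the course manifests):** Hard-coding `10.64.18.3` in two places in the Deployment works, but is easy to miss when the Cloud SQL instance is recreated. One way to keep the private IP in a single place is a ConfigMap referenced from the `env` section:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: usermgmt-db-config            # hypothetical name, not used in the demo
data:
  DB_HOSTNAME: "10.64.18.3"           # Cloud SQL Private IP
---
# In 03-UserMgmtWebApp-Deployment.yaml the env entry could then read:
#   - name: DB_HOSTNAME
#     valueFrom:
#       configMapKeyRef:
#         name: usermgmt-db-config
#         key: DB_HOSTNAME
```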
---
title: GCP Google Kubernetes Engine GKE Ingress Basics
description: Implement GCP Google Kubernetes Engine GKE Ingress Basics
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123
```

## Step-01: Introduction
- Learn Ingress Basics
- [Ingress Diagram Reference](https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#ingress_to_resource_mappings)

## Step-02: Verify HTTP Load Balancing enabled for your GKE Cluster
- Go to Kubernetes Engine -> standard-cluster-private-1 -> DETAILS tab -> Networking
- Verify `HTTP Load Balancing: Enabled`

## Step-03: Kubernetes Deployment: 01-Nginx-App3-Deployment-and-NodePortService.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app3-nginx-deployment
  labels:
    app: app3-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app3-nginx
  template:
    metadata:
      labels:
        app: app3-nginx
    spec:
      containers:
        - name: app3-nginx
          image: stacksimplify/kubenginx:1.0.0
          ports:
            - containerPort: 80
```

## Step-04: Kubernetes NodePort Service: 01-Nginx-App3-Deployment-and-NodePortService.yaml
```yaml
apiVersion: v1
kind: Service
metadata:
  name: app3-nginx-nodeport-service
  labels:
    app: app3-nginx
  annotations:
spec:
  type: NodePort
  selector:
    app: app3-nginx
  ports:
    - port: 80
      targetPort: 80
```

## Step-05: 02-ingress-basic.yaml
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-basics
  annotations:
    # If the class annotation is not specified it defaults to "gce".
    # gce: external load balancer
    # gce-internal: internal load balancer
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: app3-nginx-nodeport-service
      port:
        number: 80
```

## Step-06: Deploy kube-manifests and Verify
```t
# Deploy kube-manifests
kubectl apply -f kube-manifests/

# List Deployments
kubectl get deploy

# List Pods
kubectl get pods

# List Services
kubectl get svc

# List Ingress
kubectl get ingress
Observation:
1. Wait for the ADDRESS field to populate the Public IP Address

# Describe Ingress
kubectl describe ingress ingress-basics

# Access Application
http://<ADDRESS-FIELD-FROM-GET-INGRESS-OUTPUT>
Important Note:
1. If you get a 502 error, wait for 2 to 3 mins and retry.
2. It takes time to create the load balancer on GCP.
```

## Step-07: Verify Load Balancer
- Go to Load Balancing -> Click on Load balancer
### Load Balancer View
- DETAILS Tab
- Frontend
- Host and Path Rules
- Backend Services
- Health Checks
- MONITORING TAB
- CACHING TAB
### Load Balancer Components View
- FORWARDING RULES
- TARGET PROXIES
- BACKEND SERVICES
- BACKEND BUCKETS
- CERTIFICATES
- TARGET POOLS

## Step-08: Clean-Up
```t
# Delete Kubernetes Resources
kubectl delete -f kube-manifests

# Verify if load balancer got deleted
Go to Load Balancing -> Should not see any load balancers
```

## GKE Ingress References
- [Ingress Features](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features)
- [Ingress Concepts](https://cloud.google.com/kubernetes-engine/docs/concepts/ingress)
- [Service Networking](https://cloud.google.com/kubernetes-engine/docs/concepts/service-networking)
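- **Optional preview (illustrative only, not part of this demo):** This demo uses only `defaultBackend`, so every request is routed to app3. A `rules` section can be layered onto the same Ingress, which is what the context path routing demos build on; the `/app3` path below is an assumption and simply reuses the existing `app3-nginx-nodeport-service`:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-basics
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: app3-nginx-nodeport-service
      port:
        number: 80
  rules:
    - http:
        paths:
          - path: /app3              # illustrative path, not used in this demo
            pathType: Prefix
            backend:
              service:
                name: app3-nginx-nodeport-service
                port:
                  number: 80
```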
---
title: GKE Persistent Disks Custom StorageClass
description: Use Custom storageclass to provision Google Disks for Kubernetes Workloads
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123
```
3. Feature: Compute Engine persistent disk CSI Driver
- Verify the Feature **Compute Engine persistent disk CSI Driver** is enabled in the GKE Cluster.
- This is required for mounting the Google Compute Engine Persistent Disks to Kubernetes Workloads in the GKE Cluster.

## Step-01: Introduction
- **Feature-1:** Create a custom Kubernetes StorageClass named `gke-pd-standard-rwo-sc` instead of using a predefined one in the GKE Cluster.
- **Feature-2:** Test `allowVolumeExpansion: true` in the Storage Class
- **Feature-3:** Use `reclaimPolicy: Retain` in the Storage Class and test it

## Step-02: List Kubernetes Storage Classes in GKE Cluster
```t
# List Storage Classes
kubectl get sc
```

## Step-03: 00-storage-class.yaml
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gke-pd-standard-rwo-sc
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Retain
parameters:
  type: pd-balanced

# STORAGE CLASS
# 1. A StorageClass provides a way for administrators
#    to describe the "classes" of storage they offer.
# 2. Here we are offering GCP PD Storage for GKE Cluster
```

## Step-04: 01-persistent-volume-claim.yaml
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gke-pd-standard-rwo-sc
  resources:
    requests:
      storage: 4Gi
```

## Step-05: Other Kubernetes YAML Manifests
- No changes to other Kubernetes YAML Manifests
- They are the same as in the previous section
- 02-UserManagement-ConfigMap.yaml
- 03-mysql-deployment.yaml
- 04-mysql-clusterip-service.yaml
- 05-UserMgmtWebApp-Deployment.yaml
- 06-UserMgmtWebApp-LoadBalancer-Service.yaml

## Step-06: Deploy kube-manifests
```t
# Deploy Kubernetes Manifests
kubectl apply -f kube-manifests/

# List Storage Classes
kubectl get sc
Observation:
1. You should find the new custom storage class object created with the name "gke-pd-standard-rwo-sc"

# List PVC
kubectl get pvc

# List PV
kubectl get pv

# List ConfigMaps
kubectl get configmap

# List Deployments
kubectl get deploy

# List Pods
kubectl get pods

# List Services
kubectl get svc

# Verify Pod Logs
kubectl get pods
kubectl logs -f <USERMGMT-POD-NAME>
kubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5
```

## Step-07: Verify Persistent Disks
- Go to Compute Engine -> Storage -> Disks
- Search for the `4GB` Persistent Disk
- **Observation:** You should see the disk type as **Balanced persistent disk**

## Step-08: Access Application
```t
# List Services
kubectl get svc

# Access Application
http://<ExternalIP-from-get-service-output>
Username: admin101
Password: password101

# Create New User (Used for testing `allowVolumeExpansion: true` Option)
Username: admin102
Password: password102
First Name: fname102
Last Name: lname102
Email Address: admin102@stacksimplify.com
Social Security Address: ssn102
```

## Step-09: Update 01-persistent-volume-claim.yaml from 4Gi to 8Gi
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gke-pd-standard-rwo-sc
  resources:
    requests:
      #storage: 4Gi # Comment at Step-09
      storage: 8Gi  # Uncomment at Step-09
```

## Step-10: Deploy updated kube-manifests
```t
# Deploy Kubernetes Manifests
kubectl apply -f kube-manifests/

# List PVC
kubectl get pvc
Observation:
1. Wait for 2 to 3 mins and the CAPACITY value automatically changes from 4Gi to 8Gi

# List PV
kubectl get pv
Observation:
1. Wait for 2 to 3 mins and the CAPACITY value automatically changes from 4Gi to 8Gi

# Access Application
http://<ExternalIP-from-get-service-output>
Username: admin101
Password: password101
Observation:
1. No impact to the underlying MySQL Database data.
2. VolumeExpansion is seamless without impacting the real data.
3. We should find the two users which were present before VolumeExpansion as-is.
```

## Step-11: Verify Persistent Disks
- Go to Compute Engine -> Storage -> Disks
- Search for the `8GB` Persistent Disk, as the 4GB disk has expanded to 8GB now.
- **Observation:** You should see the disk type as **Balanced persistent disk**

## Step-12: Verify reclaimPolicy: Retain
```t
# Delete kube-manifests
kubectl delete -f kube-manifests/

# List Storage Class
kubectl get sc
Observation:
1. Custom storage class deleted

# List PVC
kubectl get pvc
Observation:
1. PVC deleted

# List PV
kubectl get pv
Observation:
1. PV still present
2. PV STATUS will be "Released", not used by anyone.
```

## Step-13: Verify Persistent Disks
- Go to Compute Engine -> Storage -> Disks
- Search for the `8GB` Persistent Disk.
- **Observation:** You should see the disk is still present even after all kube-manifests (StorageClass, PVC) are deleted.
- This is because we used **reclaimPolicy: Retain** in the Custom Storage Class

## Step-14: Clone Persistent Disk
- **Question:** Why are we cloning the disk?
- **Answer:** In the next demo, we are going to use the **pre-existing persistent disk**. For that purpose we are cloning it.
- Go to Compute Engine -> Storage -> Disks
- Search for the `8GB` Persistent Disk.
- Click on **Clone Disk**
- **Name:** preexisting-pd
- **Description:** preexisting-pd Demo with GKE
- **Location:** Single
- **Snapshot Schedule:** UNCHECK
- Click on **CREATE**

## Step-15: Delete Retained Persistent Disk from this Demo
- Go to Compute Engine -> Storage -> Disks
- Search for the `8GB` Persistent Disk.
- **Disk Name:** pvc-3f2c1daa-122d-4bdb-a7b6-b9943631cc14
- Click on **DELETE DISK**
```t
# List PV
kubectl get pv

# Delete PV
kubectl delete pv pvc-3f2c1daa-122d-4bdb-a7b6-b9943631cc14

# List PV
kubectl get pv
```

## Step-16: Change PVC 8Gi to 4Gi: 01-persistent-volume-claim.yaml
- Change PVC 8Gi to 4Gi so that `kube-manifests` will be demo-ready for students.
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gke-pd-standard-rwo-sc
  resources:
    requests:
      storage: 4Gi  # Comment at Step-09
      #storage: 8Gi # Uncomment at Step-09
```

## Reference
- [Using the Compute Engine persistent disk CSI Driver](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/gce-pd-csi-driver)
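- **Optional reference (hypothetical StorageClass, not used in the demo):** The `parameters.type` field is what selects the underlying Compute Engine disk type. A sketch of a second StorageClass that would provision SSD persistent disks instead of balanced ones, while keeping the default `Delete` reclaim behavior:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gke-pd-ssd-sc                 # hypothetical name
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Delete                 # disk is deleted together with the PV
parameters:
  type: pd-ssd                        # SSD persistent disk instead of pd-balanced
```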
--- title: GCP Google Kubernetes Engine GKE CD description: Implement GCP Google Kubernetes Engine GKE Continuous Delivery Pipeline --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT> # Replace Values CLUSTER-NAME, REGION, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 # List Kubernetes Nodes kubectl get nodes ``` ## Step-01: Introduction - Implement Continuous Delivery Pipeline for GKE Workloads using - Google Cloud Source - Google Cloud Build - Google Artifact Repository ## Step-02: Assign Kubernetes Engine Developer IAM Role to Cloud Build - To deploy the application in your Googke GKE Kubernetes cluster, **Cloud Build** needs the **Kubernetes Engine Developer Identity and Access Management Role.** ```t # Verify if changes took place using Google Cloud Console 1. Go to Cloud Build -> Settings -> SERVICE ACCOUNT -> Service account permissions 2. Kubernetes Engine -> Should be in "DISABLED" state # Get current project PROJECT_ID PROJECT_ID="$(gcloud config get-value project)" echo ${PROJECT_ID} # Get Google Cloud Project Number PROJECT_NUMBER="$(gcloud projects describe ${PROJECT_ID} --format='get(projectNumber)')" echo ${PROJECT_NUMBER} # Associate Kubernetes Engine Developer IAM Role to Cloud Build gcloud projects add-iam-policy-binding ${PROJECT_NUMBER} \ --member=serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com \ --role=roles/container.developer # Verify if changes took place using Google Cloud Console 1. Go to Cloud Build -> Settings -> SERVICE ACCOUNT -> Service account permissions 2. Kubernetes Engine -> Should be in "ENABLED" state ``` ## Step-03: Review File cloudbuild-delivery.yaml - **File Location:** 01-myapp1-k8s-repo ```yaml # [START cloudbuild-delivery] steps: # This step deploys the new version of our container image # in the "standard-cluster-private-1" Google Kubernetes Engine cluster. - name: 'gcr.io/cloud-builders/kubectl' id: Deploy args: - 'apply' - '-f' - 'kubernetes.yaml' env: - 'CLOUDSDK_COMPUTE_REGION=us-central1' #- 'CLOUDSDK_COMPUTE_ZONE=us-central1-c' - 'CLOUDSDK_CONTAINER_CLUSTER=standard-cluster-private-1' # Provide GKE Cluster Name # This step copies the applied manifest to the production branch # The COMMIT_SHA variable is automatically # replaced by Cloud Build. 
- name: 'gcr.io/cloud-builders/git'
  id: Copy to production branch
  entrypoint: /bin/sh
  args:
  - '-c'
  - |
    set -x && \
    # Configure Git to create commits with Cloud Build's service account
    git config user.email $(gcloud auth list --filter=status:ACTIVE --format='value(account)') && \
    # Switch to the production branch and copy the kubernetes.yaml file from the candidate branch
    git fetch origin production && git checkout production && \
    git checkout $COMMIT_SHA kubernetes.yaml && \
    # Commit the kubernetes.yaml file with a descriptive commit message
    git commit -m "Manifest from commit $COMMIT_SHA $(git log --format=%B -n 1 $COMMIT_SHA)" && \
    # Push the changes back to Cloud Source Repository
    git push origin production
# [END cloudbuild-delivery]
```

## Step-04: Create and Initialize myapp1-k8s-repo Repo, Copy Files and Push to Cloud Source Repository
```t
# Change Directory
cd course-repos

# List Cloud Source Repositories
gcloud source repos list

# Create Cloud Source Git Repo: myapp1-k8s-repo
gcloud source repos create myapp1-k8s-repo

# Initialize myapp1-k8s-repo Repo
gcloud source repos clone myapp1-k8s-repo

# Copy Files to myapp1-k8s-repo
cloudbuild-delivery.yaml from "58-GKE-Continuous-Delivery-with-CloudBuild/01-myapp1-k8s-repo"

# Change Directory
cd myapp1-k8s-repo

# Commit Changes
git add .
git commit -m "Create cloudbuild-delivery.yaml for k8s deployment"

# Create a candidate branch and push it to be available in Cloud Source Repositories.
git checkout -b candidate
git push origin candidate

# Create a production branch and push it to be available in Cloud Source Repositories.
git checkout -b production
git push origin production
```

## Step-05: Grant the Cloud Source Repository Writer IAM role to the Cloud Build service account
- Grant the Cloud Source Repository Writer IAM role to the Cloud Build service account for the **myapp1-k8s-repo** repository.
```t
# Get current project PROJECT_ID
PROJECT_ID="$(gcloud config get-value project)"
echo ${PROJECT_ID}

# Get GCP PROJECT_NUMBER
PROJECT_NUMBER="$(gcloud projects describe ${PROJECT_ID} --format='get(projectNumber)')"
echo ${PROJECT_NUMBER}

# Change Directory
cd 02-Source-Writer-IAM-Role

# Clean-Up File (truncate the file so it starts with no content)
>myapp1-k8s-repo-policy.yaml

# Create IAM Policy YAML File
cat >myapp1-k8s-repo-policy.yaml <<EOF
bindings:
- members:
  - serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com
  role: roles/source.writer
EOF

# Verify IAM Policy File created with PROJECT_NUMBER
cat myapp1-k8s-repo-policy.yaml

# Set IAM Policy to Cloud Source Repository: myapp1-k8s-repo
gcloud source repos set-iam-policy \
   myapp1-k8s-repo myapp1-k8s-repo-policy.yaml
```

## Step-06: Create the trigger for the continuous delivery pipeline
- Go to Cloud Build -> Triggers -> Region: us-central1 -> Click on **CREATE TRIGGER**
- **Name:** myapp1-cd
- **Region:** us-central1
- **Description:** myapp1 Continuous Deployment Pipeline
- **Tags:** environment=dev
- **Event:** Push to a branch
- **Source:** myapp1-k8s-repo
- **Branch:** candidate
- **Configuration:** Cloud Build configuration file (yaml or json)
- **Location:** Repository
- **Cloud Build Configuration file location:** cloudbuild-delivery.yaml
- **Approval:** leave unchecked
- **Service account:** leave to default
- Click on **CREATE**

## Step-07: Review files in folder 03-myapp1-app-repo
1. Dockerfile
2. index.html
3. kubernetes.yaml.tpl
4. cloudbuild-trigger-cd.yaml
5. cloudbuild.yaml (Just a copy of cloudbuild-trigger-cd.yaml)
```yaml
# [START cloudbuild - Docker Image Build]
steps:
# This step builds the container image.
- name: 'gcr.io/cloud-builders/docker'
  id: Build
  args:
  - 'build'
  - '-t'
  - 'us-central1-docker.pkg.dev/$PROJECT_ID/myapps-repository/myapp1:$SHORT_SHA'
  - '.'

# This step pushes the image to Artifact Registry
# The PROJECT_ID and SHORT_SHA variables are automatically
# replaced by Cloud Build.
- name: 'gcr.io/cloud-builders/docker'
  id: Push
  args:
  - 'push'
  - 'us-central1-docker.pkg.dev/$PROJECT_ID/myapps-repository/myapp1:$SHORT_SHA'
# [END cloudbuild - Docker Image Build]

# [START cloudbuild-trigger-cd]
# This step clones the myapp1-k8s-repo repository
- name: 'gcr.io/cloud-builders/gcloud'
  id: Clone myapp1-k8s-repo repository
  entrypoint: /bin/sh
  args:
  - '-c'
  - |
    gcloud source repos clone myapp1-k8s-repo && \
    cd myapp1-k8s-repo && \
    git checkout candidate && \
    git config user.email $(gcloud auth list --filter=status:ACTIVE --format='value(account)')

# This step generates the new manifest
- name: 'gcr.io/cloud-builders/gcloud'
  id: Generate Kubernetes manifest
  entrypoint: /bin/sh
  args:
  - '-c'
  - |
    sed "s/GOOGLE_CLOUD_PROJECT/${PROJECT_ID}/g" kubernetes.yaml.tpl | \
    sed "s/COMMIT_SHA/${SHORT_SHA}/g" > myapp1-k8s-repo/kubernetes.yaml

# This step pushes the manifest back to myapp1-k8s-repo
- name: 'gcr.io/cloud-builders/gcloud'
  id: Push manifest
  entrypoint: /bin/sh
  args:
  - '-c'
  - |
    set -x && \
    cd myapp1-k8s-repo && \
    git add kubernetes.yaml && \
    git commit -m "Deploying image us-central1-docker.pkg.dev/$PROJECT_ID/myapps-repository/myapp1:${SHORT_SHA}
    Built from commit ${COMMIT_SHA} of repository myapp1-app-repo
    Author: $(git log --format='%an <%ae>' -n 1 HEAD)" && \
    git push origin candidate
# [END cloudbuild-trigger-cd]
```

## Step-08: Update index.html in myapp1-app-repo, Push and Verify
```t
# Change Directory (GIT REPO)
cd myapp1-app-repo

# Update index.html
<p>Application Version: V4</p>

# Add additional files to myapp1-app-repo
1. kubernetes.yaml.tpl
2. cloudbuild-trigger-cd.yaml
3. cloudbuild.yaml (Just a copy of cloudbuild-trigger-cd.yaml)

# Git Commit and Push to Remote Repository
git status
git add .
git commit -am "V4 Commit CI CD"
git push

# Verify Cloud Source Repository: myapp1-app-repo
https://source.cloud.google.com/
myapp1-app-repo

# Verify Cloud Source Repository: myapp1-k8s-repo
https://source.cloud.google.com/
myapp1-k8s-repo
Branch: candidate
You should find the "kubernetes.yaml" file updated with the image built from the latest "myapp1-app-repo" commit
```

## Step-09: Verify myapp1-ci and myapp1-cd builds
- Go to Cloud Build -> History
- Review latest **myapp1-ci** build steps
- Review latest **myapp1-cd** build steps

## Step-10: Verify Files in Cloud Source Repositories
- Go to Cloud Source
- **myapp1-app-repo:** New files should be present
- **myapp1-k8s-repo:** kubernetes.yaml file should have the GOOGLE_CLOUD_PROJECT and COMMIT_SHA placeholders replaced, for example `image: us-central1-docker.pkg.dev/kdaida123/myapps-repository/myapp1:2a3e72a`

## Step-11: Verify Google Artifact Registry
- Go to Artifact Registry -> Repositories -> myapps-repository -> myapp1
- Should see a new Docker image

## Step-12: Access Application
```t
# List Pods
kubectl get pods

# List Deployments
kubectl get deploy

# List Services
kubectl get svc

# Access Application
http://<SERVICE-EXTERNALIP>
Observation:
1. Should see V4 version of application deployed
```

## Step-13: Test CI CD one more time - Update index.html to V5
```t
# Change Directory (GIT REPO)
cd myapp1-app-repo

# Update index.html
<p>Application Version: V5</p>

# Git Commit and Push to Remote Repository
git status
git add .
git commit -am "V5 Commit CI CD"
git push

# Verify Build process
Go to Cloud Build -> myapp1-ci -> BUILD LOG
Go to Cloud Build -> myapp1-cd -> BUILD LOG

# Access Application
http://<SERVICE-EXTERNALIP>
Observation:
1. Should see V5 version of application deployed
```

## Step-14: Verify Application Rollback by just rebuilding CD Pipeline
- Go to an older build of `myapp1-cd` (for example, the V4 build) and click on `REBUILD`
- Verify by accessing Application
```t
# List Pods
kubectl get pods

# List Deployments
kubectl get deploy

# List Services
kubectl get svc

# Access Application
http://<SERVICE-EXTERNALIP>
Observation:
1. Should see V4 version of application deployed
```

## Step-15: Clean-Up
```t
# Disable / Delete CI CD Pipelines
1. Go to Cloud Build -> myapp1-ci -> 3 dots -> Delete
2. Go to Cloud Build -> myapp1-cd -> 3 dots -> Delete

# Delete Cloud Source Repositories
Go to Cloud Source (https://source.cloud.google.com/repos)
1. myapp1-app-repo -> Settings -> Delete this repository
2. myapp1-k8s-repo -> Settings -> Delete this repository

# Delete Kubernetes Deployment
kubectl get deploy
kubectl delete deploy myapp1-deployment

# Delete Kubernetes Service
kubectl get svc
kubectl delete svc myapp1-lb-service

# Delete Artifact Registry
Go to Artifact Registry -> Repositories -> myapps-repository -> DELETE

# Delete Local Repos
cd course-repos
rm -rf myapp1-app-repo
rm -rf myapp1-k8s-repo
```

## References
- https://github.com/GoogleCloudPlatform/gke-gitops-tutorial-cloudbuild
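For orientation, the `kubernetes.yaml.tpl` file listed in Step-07 is not reproduced in this section. The sketch below is only an illustration of what such a template could look like: the resource names `myapp1-deployment` and `myapp1-lb-service` are taken from the clean-up step, while the replica count, port, and labels are assumptions. `GOOGLE_CLOUD_PROJECT` and `COMMIT_SHA` are the literal placeholders that the "Generate Kubernetes manifest" `sed` step rewrites with `$PROJECT_ID` and `$SHORT_SHA`.

```yaml
# Illustrative sketch of kubernetes.yaml.tpl (assumed structure, not the course file)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp1-deployment
spec:
  replicas: 1                 # assumed replica count
  selector:
    matchLabels:
      app: myapp1
  template:
    metadata:
      labels:
        app: myapp1
    spec:
      containers:
      - name: myapp1
        # GOOGLE_CLOUD_PROJECT and COMMIT_SHA are replaced by the sed step in cloudbuild-trigger-cd.yaml
        image: us-central1-docker.pkg.dev/GOOGLE_CLOUD_PROJECT/myapps-repository/myapp1:COMMIT_SHA
        ports:
        - containerPort: 80   # assumed container port
---
apiVersion: v1
kind: Service
metadata:
  name: myapp1-lb-service
spec:
  type: LoadBalancer
  selector:
    app: myapp1
  ports:
  - port: 80
    targetPort: 80
```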
---
title: GCP Google Kubernetes Engine GKE Ingress SSL
description: Implement GCP Google Kubernetes Engine GKE Ingress SSL with Google Managed Certificates
---
## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123
```
3. Registered Domain using Google Cloud Domains
4. The DNS name for which the SSL certificate will be created should already be added as a record set in Google Cloud DNS (Example: demo1.kalyanreddydaida.com)

## Step-01: Introduction
- Google Managed Certificates for GKE Ingress - Ingress SSL
- Certificate Validity: 90 days
- 30 days before expiry, Google starts the renewal process; we don't need to worry about it.
- **Important Note:** Google-managed certificates are only supported with GKE Ingress using the external HTTP(S) load balancer. Google-managed certificates do not support third-party Ingress controllers.

## Step-02: kube-manifest - NO CHANGES
- 01-Nginx-App1-Deployment-and-NodePortService.yaml
- 02-Nginx-App2-Deployment-and-NodePortService.yaml
- 03-Nginx-App3-Deployment-and-NodePortService.yaml

## Step-03: 05-Managed-Certificate.yaml
- **Pre-requisite-1:** Registered Domain using Google Cloud Domains
- **Pre-requisite-2:** The DNS name for which the SSL certificate will be created should already be added as a record set in Google Cloud DNS (Example: demo1.kalyanreddydaida.com)
```yaml
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: managed-cert-for-ingress
spec:
  domains:
    - demo1.kalyanreddydaida.com
```

## Step-04: 04-Ingress-SSL.yaml
- Add the annotation `networking.gke.io/managed-certificates` to the Ingress Service with the Managed Certificate name.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-ssl
  annotations:
    # External Load Balancer
    kubernetes.io/ingress.class: "gce"
    # Static IP for Ingress Service
    kubernetes.io/ingress.global-static-ip-name: "gke-ingress-extip1"
    # Google Managed SSL Certificates
    networking.gke.io/managed-certificates: managed-cert-for-ingress
spec:
  defaultBackend:
    service:
      name: app3-nginx-nodeport-service
      port:
        number: 80
  rules:
    - http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1-nginx-nodeport-service
                port:
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2-nginx-nodeport-service
                port:
                  number: 80
```

## Step-06: Deploy kube-manifests and Verify
```t
# Deploy Kubernetes manifests
kubectl apply -f kube-manifests

# List Pods
kubectl get pods

# List Services
kubectl get svc

# List Ingress Load Balancers
kubectl get ingress

# Describe Ingress and view Rules
kubectl describe ingress ingress-ssl
```

## Step-07: Verify Managed Certificates
```t
# List Managed Certificate
kubectl get managedcertificate

# Describe managed certificate
kubectl describe managedcertificate managed-cert-for-ingress
Observation:
1. Wait for the Google-managed certificate to finish provisioning.
2. This might take up to 60 minutes.
3. Status of the certificate should change from PROVISIONING to ACTIVE
demo1.kalyanreddydaida.com: PROVISIONING

# List Certificates
gcloud compute ssl-certificates list
```

## Step-08: Verify SSL Certificates from Certificate Tab in Load Balancer
### Load Balancers Component View
- View in **Load Balancers Component View**
- Click on **CERTIFICATES** tab

### Load Balancers View
- Review the FRONTEND with HTTPS protocol and its associated certificate

## Step-09: Access Application
```t
# Important Note
Wait for 2 to 3 minutes for the Load Balancer to be fully created and ready for use, else we will get HTTP 502 errors

# Access Application
http://<DNS-DOMAIN-NAME>/app1/index.html
http://<DNS-DOMAIN-NAME>/app2/index.html
http://<DNS-DOMAIN-NAME>/
# Note: Replace with the Domain Name registered in Cloud DNS

# HTTP URLs
http://demo1.kalyanreddydaida.com/app1/index.html
http://demo1.kalyanreddydaida.com/app2/index.html
http://demo1.kalyanreddydaida.com/

# HTTPS URLs
https://demo1.kalyanreddydaida.com/app1/index.html
https://demo1.kalyanreddydaida.com/app2/index.html
https://demo1.kalyanreddydaida.com/
```

## References
- https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs
- https://cloud.google.com/load-balancing/docs/ssl-certificates/troubleshooting
- https://github.com/GoogleCloudPlatform/gke-managed-cert
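Since provisioning can take up to 60 minutes, it may be convenient to poll the ManagedCertificate until it reports ACTIVE instead of re-running describe. A small sketch using plain kubectl; the `.status.certificateStatus` field name follows the ManagedCertificate CRD, so verify it on your cluster:

```t
# Watch the ManagedCertificate until its status changes from PROVISIONING to ACTIVE
kubectl get managedcertificate managed-cert-for-ingress -w

# Print only the overall certificate status (field name per the ManagedCertificate CRD)
kubectl get managedcertificate managed-cert-for-ingress -o jsonpath='{.status.certificateStatus}'
```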
---
title: GKE Persistent Disks - Volume Clone
description: Use Google Disks Volume Clone for GKE Workloads
---
## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123
```
3. Feature: Compute Engine persistent disk CSI Driver
- Verify that the feature **Compute Engine persistent disk CSI Driver** is enabled in the GKE Cluster.
- This is required for mounting the Google Compute Engine Persistent Disks to Kubernetes Workloads in the GKE Cluster.

## Step-01: Introduction
- Understand how to implement cloned Disks in GKE

## Step-02: Kubernetes YAML Manifests
- **Project Folder:** 01-kube-manifests
- No changes to Kubernetes YAML Manifests, same as Section `21-GKE-PD-existing-SC-standard-rwo`
- 01-persistent-volume-claim.yaml
- 02-UserManagement-ConfigMap.yaml
- 03-mysql-deployment.yaml
- 04-mysql-clusterip-service.yaml
- 05-UserMgmtWebApp-Deployment.yaml
- 06-UserMgmtWebApp-LoadBalancer-Service.yaml

## Step-03: Deploy kube-manifests
```t
# Deploy Kubernetes Manifests
kubectl apply -f 01-kube-manifests/

# List Storage Class
kubectl get sc

# List PVC
kubectl get pvc

# List PV
kubectl get pv

# List ConfigMaps
kubectl get configmap

# List Deployments
kubectl get deploy

# List Pods
kubectl get pods

# List Services
kubectl get svc

# Verify Pod Logs
kubectl get pods
kubectl logs -f <USERMGMT-POD-NAME>
kubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5
```

## Step-04: Verify Persistent Disks
- Go to Compute Engine -> Storage -> Disks
- Search for `4GB` Persistent Disk
- **Observation:** Review the below items
- **Zones:** us-central1-c
- **Type:** Balanced persistent disk
- **In use by:** gke-standard-cluster-1-default-pool-db7b638f-j5lk

## Step-05: Access Application
```t
# List Services
kubectl get svc

# Access Application
http://<ExternalIP-from-get-service-output>
Username: admin101
Password: password101

# Create New User admin102
Username: admin102
Password: password102
First Name: fname102
Last Name: lname102
Email Address: admin102@stacksimplify.com
Social Security Address: ssn102

# Create New User admin103
Username: admin103
Password: password103
First Name: fname103
Last Name: lname103
Email Address: admin103@stacksimplify.com
Social Security Address: ssn103
```

## Step-06: Volume Clone: 01-podpvc-clone.yaml
- **Project Folder:** 02-Use-Cloned-Volume-kube-manifests
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: podpvc-clone
spec:
  dataSource:
    name: mysql-pv-claim # the name of the source PersistentVolumeClaim that you created as part of UMS Web App
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  storageClassName: standard-rwo # same as the StorageClass of the source PersistentVolumeClaim
  resources:
    requests:
      storage: 4Gi # the amount of storage to request, which must be at least the size of the source PersistentVolumeClaim
```

## Step-07: 03-mysql-deployment.yaml
- **Change-1:** Change the `claimName: mysql-pv-claim` to `claimName: podpvc-clone`
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql2
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql2
    spec:
      containers:
        - name: mysql2
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: dbpassword11
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
            - name: usermanagement-dbcreation-script
              mountPath: /docker-entrypoint-initdb.d # https://hub.docker.com/_/mysql Refer "Initializing a fresh instance"
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            #claimName: mysql-pv-claim
            claimName: podpvc-clone
        - name: usermanagement-dbcreation-script
          configMap:
            name: usermanagement-dbcreation-script2
```

## Step-08: Kubernetes YAML Manifests
- **Project Folder:** 02-Use-Cloned-Volume-kube-manifests
- No changes to Kubernetes YAML Manifests, same as Section `21-GKE-PD-existing-SC-standard-rwo`
- Append 2 to all the resource names and labels (Example: mysql to mysql2, usermgmt-webapp to usermgmt-webapp2)
- 02-UserManagement-ConfigMap.yaml
- 03-mysql-deployment.yaml
- 04-mysql-clusterip-service.yaml
- 05-UserMgmtWebApp-Deployment.yaml
- 06-UserMgmtWebApp-LoadBalancer-Service.yaml

## Step-09: Deploy kube-manifests
```t
# Deploy Kubernetes Manifests
kubectl apply -f 02-Use-Cloned-Volume-kube-manifests/

# List Storage Class
kubectl get sc

# List PVC
kubectl get pvc

# List PV
kubectl get pv

# List ConfigMaps
kubectl get configmap

# List Deployments
kubectl get deploy

# List Pods
kubectl get pods

# List Services
kubectl get svc

# Verify Pod Logs
kubectl get pods
kubectl logs -f <USERMGMT-POD-NAME>
kubectl logs -f usermgmt-webapp2-6ff7d7d849-7lrg5
```

## Step-10: Verify Persistent Disks
- Go to Compute Engine -> Storage -> Disks
- Search for `4GB` Persistent Disk
- **Observation:** Review the below items
- **Type:** Balanced persistent disk
- **In use by:** gke-standard-cluster-1-default-pool-db7b638f-j5lk

## Step-11: Access Application
```t
# List Services
kubectl get svc

# Access Application
http://<ExternalIP-from-get-service-output>
Username: admin101
Password: password101

Observation:
1. You should see both "admin102" and "admin103" users already present.
2. This is because we have used the cloned disk from "01-kube-manifests"
```

## Step-12: Clean-Up
```t
# Delete Kubernetes Objects
kubectl delete -f 01-kube-manifests -f 02-Use-Cloned-Volume-kube-manifests
```

```t
# Reference
https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/

# Get Nodes
kubectl get nodes

# Show Node Labels
kubectl get nodes --show-labels

# Label Node
kubectl label nodes <your-node-name> nodetype=db
kubectl label nodes gke-standard-cluster-pri-default-pool-4f7ab141-p0gz nodetype=db

# Show Node Labels
kubectl get nodes --show-labels
```
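Back on the volume clone demo itself, two optional checks can confirm the clone wiring; a small sketch with standard kubectl and gcloud commands (output formats may differ slightly on your cluster):

```t
# (Optional) Confirm the clone PVC references the source PVC via its dataSource
kubectl get pvc podpvc-clone -o jsonpath='{.spec.dataSource.name}'

# (Optional) List the Compute Engine disks backing the PVs (dynamically provisioned disk names usually start with pvc-)
gcloud compute disks list --filter="name~pvc"
```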
---
title: GCP Google Kubernetes Engine Kubernetes Resource Quota
description: Implement GCP Google Kubernetes Engine Kubernetes Resource Quota
---
## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123

# List Kubernetes Nodes
kubectl get nodes
```

## Step-01: Introduction
1. Kubernetes Namespaces - ResourceQuota
2. Kubernetes Namespaces - Declarative using YAML

## Step-02: Create Namespace manifest
- **Important Note:** The file name starts with `01-` so that the namespace gets created first when the k8s objects are applied and the other objects don't throw an error.
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: qa
```

## Step-03: Create Kubernetes ResourceQuota manifest
- **File Name:** 02-kubernetes-resourcequota.yaml
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: qa-namespace-resource-quota
  namespace: qa
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
    pods: "3"
    configmaps: "3"
    persistentvolumeclaims: "3"
    secrets: "3"
    services: "3"
```

## Step-04: Create Kubernetes objects & Test
```t
# Create All Objects
kubectl apply -f kube-manifests/

# List Pods
kubectl get pods -n qa -w

# View Pod Specification (CPU & Memory)
kubectl describe pod <pod-name> -n qa

# Get Resource Quota - default Namespace
kubectl get resourcequota
kubectl describe resourcequota gke-resource-quotas
Observation:
1. gke-resource-quotas is pre-created by the GKE Cluster for each namespace.
2. Any ResourceQuota we define ourselves is enforced in addition to this default GKE resource quota in the namespace.

# Get Resource Quota - qa Namespace
kubectl get resourcequota -n qa

# Describe Resource Quota - qa Namespace
kubectl describe resourcequota qa-namespace-resource-quota -n qa

# Test the quota by scaling to 4 pods when the resource quota allows only 3 pods
kubectl get deploy -n qa
kubectl get pods -n qa
kubectl scale --replicas=4 deployment/myapp1-deployment -n qa
kubectl get pods -n qa
kubectl get deploy -n qa

# Verify Deployment and ReplicaSet Events
kubectl describe deploy <Deployment-Name> -n qa
kubectl describe rs <ReplicaSet-Name> -n qa
Observation: In ReplicaSet Events we should find the error

## WARNING MESSAGE IN REPLICASET EVENTS ABOUT RESOURCE QUOTA
Warning  FailedCreate  77s  replicaset-controller  Error creating: pods "myapp1-deployment-5b4bdfc49d-92t9z" is forbidden: exceeded quota: qa-namespace-resource-quota, requested: pods=1, used: pods=3, limited: pods=3

# List Services
kubectl get svc -n qa

# Access Application
http://<SVC-EXTERNAL-IP>
```

## Step-05: Clean-Up
- Delete all Kubernetes objects created as part of this section
```t
# Delete All
kubectl delete -f kube-manifests/ -n qa

# List Namespaces
kubectl get ns
```

## References:
- https://kubernetes.io/docs/tasks/administer-cluster/namespaces-walkthrough/
- https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/

## Additional References:
- https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/
- https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/
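The scale test in Step-04 assumes a `myapp1-deployment` already running in the `qa` namespace; that manifest is not shown in this section. Below is a minimal illustrative sketch of such a Deployment: the image and the request/limit values are assumptions, chosen so that 3 replicas stay within the quota above. Note that once a quota constrains cpu/memory, every pod in the namespace must declare requests and limits or its creation will be rejected.

```yaml
# Illustrative sketch only - not the course manifest; names and values are assumptions
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp1-deployment
  namespace: qa
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp1
  template:
    metadata:
      labels:
        app: myapp1
    spec:
      containers:
      - name: myapp1
        image: nginx            # assumed image
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 200m           # 3 replicas x 200m = 600m, within requests.cpu: "1"
            memory: 128Mi       # 3 x 128Mi = 384Mi, within requests.memory: 1Gi
          limits:
            cpu: 400m           # 3 x 400m = 1200m, within limits.cpu: "2"
            memory: 256Mi       # 3 x 256Mi = 768Mi, within limits.memory: 2Gi
```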
---
title: GCP Google Kubernetes Engine Kubernetes Liveness Probes
description: Implement GCP Google Kubernetes Engine Kubernetes Liveness Probes
---
## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123

# List Kubernetes Nodes
kubectl get nodes
```

## Step-01: Introduction
- Implement `Liveness Probe` and Test it

## Step-02: Understand Liveness Probe
1. Liveness probes let Kubernetes know whether the application running in a container inside a pod is healthy or not.
2. If the application is healthy, Kubernetes does not interfere with the pod. If the application is unhealthy (the probe keeps failing), Kubernetes marks the container as unhealthy and restarts it.
3. In short, use liveness probes to detect and restart unhealthy containers.

## Step-03: Liveness Probe Type: Command
### Step-03-01: Review Liveness Probe Type: Command
- **File Name:** `01-liveness-probe-linux-command/05-UserMgmtWebApp-Deployment.yaml`
```yaml
# Liveness Probe Linux Command
livenessProbe:
  exec:
    command:
    - /bin/sh
    - -c
    - nc -z localhost 8080
  initialDelaySeconds: 60 # initialDelaySeconds field tells the kubelet that it should wait 60 seconds before performing the first probe.
  periodSeconds: 10 # periodSeconds field specifies that the kubelet should perform a liveness probe every 10 seconds.
  failureThreshold: 3 # Default Value
  successThreshold: 1 # Default value
```

### Step-03-02: Deploy Kubernetes Manifests
```t
# Deploy Kubernetes Manifests
kubectl apply -f 01-liveness-probe-linux-command

# List Pods
kubectl get pods
Observation:

# List Services
kubectl get svc

# Access Application
http://<LB-IP>
Username: admin101
Password: password101
```

### Step-03-03: Clean-Up
```t
# Delete Kubernetes Resources
kubectl delete -f 01-liveness-probe-linux-command
```

## Step-04: Liveness Probe Type: HTTP Request
### Step-04-01: Review Liveness Probe Type: HTTP Request
- **File Name:** `02-liveness-probe-HTTP-Request/05-UserMgmtWebApp-Deployment.yaml`
```yaml
# Liveness Probe HTTP Request
livenessProbe:
  httpGet:
    path: /login
    port: 8080
    httpHeaders:
    - name: Custom-Header
      value: Awesome
  initialDelaySeconds: 60 # initialDelaySeconds field tells the kubelet that it should wait 60 seconds before performing the first probe.
  periodSeconds: 10 # periodSeconds field specifies that the kubelet should perform a liveness probe every 10 seconds.
  failureThreshold: 3 # Default Value
  successThreshold: 1 # Default value
```

### Step-04-02: Deploy Kubernetes Manifests
```t
# Deploy Kubernetes Manifests
kubectl apply -f 02-liveness-probe-HTTP-Request

# List Pods
kubectl get pods
Observation:

# List Services
kubectl get svc

# Access Application
http://<LB-IP>
Username: admin101
Password: password101
```

### Step-04-03: Clean-Up
```t
# Delete Kubernetes Resources
kubectl delete -f 02-liveness-probe-HTTP-Request
```

## Step-05: Liveness Probe Type: TCP Request
### Step-05-01: Review Liveness Probe Type: TCP Request
- **File Name:** `03-liveness-probe-TCP-Request/05-UserMgmtWebApp-Deployment.yaml`
```yaml
# Liveness Probe TCP Request
livenessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 60 # initialDelaySeconds field tells the kubelet that it should wait 60 seconds before performing the first probe.
  periodSeconds: 10 # periodSeconds field specifies that the kubelet should perform a liveness probe every 10 seconds.
  failureThreshold: 3 # Default Value
  successThreshold: 1 # Default value
```

### Step-05-02: Deploy Kubernetes Manifests
```t
# Deploy Kubernetes Manifests
kubectl apply -f 03-liveness-probe-TCP-Request

# List Pods
kubectl get pods
Observation:

# List Services
kubectl get svc

# Access Application
http://<LB-IP>
Username: admin101
Password: password101
```

### Step-05-03: Clean-Up
```t
# Delete Kubernetes Resources
kubectl delete -f 03-liveness-probe-TCP-Request
```
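Whichever probe type is used, a failing liveness probe shows up the same way: the kubelet restarts the container and the pod's restart count climbs. A couple of generic commands (not tied to this section's manifests) to observe that:

```t
# Watch the RESTARTS column - it increments each time the kubelet restarts a container after repeated liveness probe failures
kubectl get pods -w

# Inspect events for "Liveness probe failed" warnings and the subsequent container restart
kubectl describe pod <USERMGMT-POD-NAME>
```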
---
title: GCP Google Kubernetes Engine GKE Ingress SSL with Self-Signed Certificates
description: Implement GCP Google Kubernetes Engine GKE Ingress SSL with Self-Signed Certificates
---
## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123

# List Kubernetes Nodes
kubectl get nodes
```
3. ExternalDNS Controller should be installed and ready to use
```t
# List Namespaces (external-dns-ns namespace should be present)
kubectl get ns

# List External DNS Pods
kubectl -n external-dns-ns get pods
```

## Step-01: Introduction
1. Implement Self-Signed SSL Certificates with GKE Ingress Service
2. Create SSL Certificates using OpenSSL.
3. Create Kubernetes Secrets with the SSL Certificates and Private Keys
4. Reference these Kubernetes Secrets in the Ingress Service (**Ingress spec.tls**)

## Step-02: App1 - Create Self-Signed SSL Certificates and Kubernetes Secrets
```t
# Change Directory
cd SSL-SelfSigned-Certs

# Create your app1 key:
openssl genrsa -out app1-ingress.key 2048

# Create your app1 certificate signing request:
openssl req -new -key app1-ingress.key -out app1-ingress.csr -subj "/CN=app1.kalyanreddydaida.com"

# Create your app1 certificate:
openssl x509 -req -days 7300 -in app1-ingress.csr -signkey app1-ingress.key -out app1-ingress.crt

# Create a Secret that holds your app1 certificate and key:
kubectl create secret tls app1-secret --cert app1-ingress.crt --key app1-ingress.key

# List Secrets
kubectl get secrets
```

## Step-03: App2 - Create Self-Signed SSL Certificates and Kubernetes Secrets
```t
# Change Directory
cd SSL-SelfSigned-Certs

# Create your app2 key:
openssl genrsa -out app2-ingress.key 2048

# Create your app2 certificate signing request:
openssl req -new -key app2-ingress.key -out app2-ingress.csr -subj "/CN=app2.kalyanreddydaida.com"

# Create your app2 certificate:
openssl x509 -req -days 7300 -in app2-ingress.csr -signkey app2-ingress.key -out app2-ingress.crt

# Create a Secret that holds your app2 certificate and key:
kubectl create secret tls app2-secret --cert app2-ingress.crt --key app2-ingress.key

# List Secrets
kubectl get secrets
```

## Step-04: App3 - Create Self-Signed SSL Certificates and Kubernetes Secrets
```t
# Change Directory
cd SSL-SelfSigned-Certs

# Create your app3 key:
openssl genrsa -out app3-ingress.key 2048

# Create your app3 certificate signing request:
openssl req -new -key app3-ingress.key -out app3-ingress.csr -subj "/CN=app3-default.kalyanreddydaida.com"

# Create your app3 certificate:
openssl x509 -req -days 7300 -in app3-ingress.csr -signkey app3-ingress.key -out app3-ingress.crt

# Create a Secret that holds your app3 certificate and key:
kubectl create secret tls app3-secret --cert app3-ingress.crt --key app3-ingress.key

# List Secrets
kubectl get secrets
```

## Step-05: No changes to following kube-manifests from previous Ingress Name Based Virtual Host Routing Demo
1. 01-Nginx-App1-Deployment-and-NodePortService.yaml
2. 02-Nginx-App2-Deployment-and-NodePortService.yaml
3. 03-Nginx-App3-Deployment-and-NodePortService.yaml
4. 05-frontendconfig.yaml

## Step-06: Review 04-ingress-self-signed-ssl.yaml
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-selfsigned-ssl
  annotations:
    # External Load Balancer
    kubernetes.io/ingress.class: "gce"
    # Static IP for Ingress Service
    kubernetes.io/ingress.global-static-ip-name: "gke-ingress-extip1"
    # SSL Redirect HTTP to HTTPS
    networking.gke.io/v1beta1.FrontendConfig: "my-frontend-config"
    # External DNS - For creating a Record Set in Google Cloud DNS
    external-dns.alpha.kubernetes.io/hostname: app3-default.kalyanreddydaida.com
spec:
  # SSL Certs - Associate using Kubernetes Secrets
  tls:
  - secretName: app1-secret
  - secretName: app2-secret
  - secretName: app3-secret
  defaultBackend:
    service:
      name: app3-nginx-nodeport-service
      port:
        number: 80
  rules:
    - host: app1.kalyanreddydaida.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1-nginx-nodeport-service
                port:
                  number: 80
    - host: app2.kalyanreddydaida.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2-nginx-nodeport-service
                port:
                  number: 80
```

## Step-07: Deploy Kubernetes Manifests
```t
# Deploy Kubernetes Manifests
kubectl apply -f kube-manifests

# List Deployments
kubectl get deploy

# List Pods
kubectl get pods

# List Services
kubectl get svc

# List Ingress Services
kubectl get ingress

# Describe Ingress Service
kubectl describe ingress ingress-selfsigned-ssl

# Verify external-dns Controller logs
kubectl -n external-dns-ns logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+')
[or]
kubectl -n external-dns-ns get pods
kubectl -n external-dns-ns logs -f <External-DNS-Pod-Name>

# Verify Cloud DNS
1. Go to Network Services -> Cloud DNS -> kalyanreddydaida-com
2. Verify Record sets; the DNS Name we added in the Ingress Service should be present

# List FrontendConfigs
kubectl get frontendconfig

# Verify SSL Certificates
Go to Load Balancers
1. Load Balancers View -> In Frontends
2. Load Balancers Components View -> Certificates Tab
```

## Step-08: Access Application
```t
# Access Application
http://app1.kalyanreddydaida.com/app1/index.html
http://app2.kalyanreddydaida.com/app2/index.html
http://app3-default.kalyanreddydaida.com
Observation:
1. All 3 URLs should work as expected. In your case, replace with YOUR_DOMAIN name for testing
2. HTTP to HTTPS redirect should work
3. You will get a warning "The certificate is not trusted because it is self-signed.". Click on "Accept the risk and continue"
```

## Step-09: Clean-Up
```t
# Delete Kubernetes Resources
kubectl delete -f kube-manifests

# List Kubernetes Secrets
kubectl get secrets

# Delete Kubernetes Secrets
kubectl delete secret app1-secret
kubectl delete secret app2-secret
kubectl delete secret app3-secret
```

## References
- [User Managed Certificates](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-multi-ssl#user-managed-certs)
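For command-line checks of the self-signed setup, certificate verification has to be skipped explicitly; a small sketch with standard curl flags:

```t
# -k (--insecure) skips certificate verification, which the self-signed certificates would otherwise fail
curl -k https://app1.kalyanreddydaida.com/app1/index.html
curl -k https://app2.kalyanreddydaida.com/app2/index.html

# -I fetches only the response headers - useful to confirm the HTTP to HTTPS redirect from the FrontendConfig
curl -kI http://app1.kalyanreddydaida.com/app1/index.html
```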
--- title: GCP Google Kubernetes Engine GKE External DNS Install description: Implement GCP Google Kubernetes Engine GKE External DNS Install --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT> # Replace Values CLUSTER-NAME, REGION, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 ``` ## Step-01: Introduction 1. Create GCP IAM Service Account: external-dns-gsa 2. Add IAM Roles to GCP IAM Service Account (add-iam-policy-binding) 3. Create Kubernetes Namespace: external-dns-ns 4. Create Kubernetes Service Account: external-dns-ksa 5. Associate GCP IAM Service Account with Kubernetes Service Account (gcloud iam service-accounts add-iam-policy-binding) 6. Annotate Kubernetes Service Account with GCP IAM SA email Address (kubectl annotate serviceaccount) 7. Install Helm CLI on your local desktop (if not installed) 8. Install External-DNS using Helm 9. Verify External-DNS Logs 10. Additional Reference: Install [ExternalDNS Controller using Helm](https://github.com/kubernetes-sigs/external-dns) ## Step-03: Create GCP IAM Service Account ```t # List IAM Service Accounts gcloud iam service-accounts list # Create GCP IAM Service Account gcloud iam service-accounts create GSA_NAME --project=GSA_PROJECT GSA_NAME: the name of the new IAM service account. GSA_PROJECT: the project ID of the Google Cloud project for your IAM service account. # Replace GSA_NAME and GSA_PROJECT gcloud iam service-accounts create external-dns-gsa --project=kdaida123 # List IAM Service Accounts gcloud iam service-accounts list ``` ## Step-04: Add IAM Roles to GCP IAM Service Account ```t # Add IAM Roles to GCP IAM Service Account gcloud projects add-iam-policy-binding PROJECT_ID \ --member "serviceAccount:GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com" \ --role "ROLE_NAME" PROJECT_ID: your Google Cloud project ID. GSA_NAME: the name of your IAM service account. GSA_PROJECT: the project ID of the Google Cloud project of your IAM service account. ROLE_NAME: the IAM role to assign to your service account, like roles/spanner.viewer. # Replace PROJECT_ID, GSA_NAME, GSA_PROJECT, ROLE_NAME gcloud projects add-iam-policy-binding kdaida123 \ --member "serviceAccount:external-dns-gsa@kdaida123.iam.gserviceaccount.com" \ --role "roles/dns.admin" ``` ## Step-05: Create Kubernetes Namespace and Kubernetes Service Account ```t # Create Kubernetes Namespace kubectl create namespace <NAMESPACE> kubectl create namespace external-dns-ns # List Namespaces kubectl get ns # Create Service Account kubectl create serviceaccount <KSA_NAME> --namespace <NAMESPACE> kubectl create serviceaccount external-dns-ksa --namespace external-dns-ns # List Service Accounts kubectl -n external-dns-ns get sa ``` ## Step-06: Associate GCP IAM Service Account with Kubernetes Service Account - Allow the Kubernetes service account to impersonate the IAM service account by adding an IAM policy binding between the two service accounts. - This binding allows the Kubernetes service account to act as the IAM service account.
```t # Associate GCP IAM Service Account with Kubernetes Service Account gcloud iam service-accounts add-iam-policy-binding GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com \ --role roles/iam.workloadIdentityUser \ --member "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]" # Replace GSA_NAME, GSA_PROJECT, PROJECT_ID, NAMESPACE, KSA_NAME gcloud iam service-accounts add-iam-policy-binding external-dns-gsa@kdaida123.iam.gserviceaccount.com \ --role roles/iam.workloadIdentityUser \ --member "serviceAccount:kdaida123.svc.id.goog[external-dns-ns/external-dns-ksa]" ``` ## Step-07: Annotate Kubernetes Service Account with GCP IAM SA email Address - Annotate the Kubernetes service account with the email address of the IAM service account. ```t # Annotate Kubernetes Service Account with GCP IAM SA email Address kubectl annotate serviceaccount KSA_NAME \ --namespace NAMESPACE \ iam.gke.io/gcp-service-account=GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com # Replace KSA_NAME, NAMESPACE, GSA_NAME, GSA_PROJECT kubectl annotate serviceaccount external-dns-ksa \ --namespace external-dns-ns \ iam.gke.io/gcp-service-account=external-dns-gsa@kdaida123.iam.gserviceaccount.com # Describe Kubernetes Service Account kubectl -n external-dns-ns describe sa external-dns-ksa ``` ## Step-08: Install Helm Client on Local Desktop - [Install Helm](https://helm.sh/docs/intro/install/) ```t # Install Helm brew install helm # Verify Helm version helm version ``` ## Step-09: Review external-dns values.yaml - [external-dns values.yaml](https://github.com/kubernetes-sigs/external-dns/blob/master/charts/external-dns/values.yaml) - [external-dns Configuration](https://github.com/kubernetes-sigs/external-dns/tree/master/charts/external-dns#configuration) ## Step-10: Review external-dns Deployment Configs ```yaml apiVersion: v1 kind: ServiceAccount metadata: name: external-dns --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: external-dns rules: - apiGroups: [""] resources: ["services","endpoints","pods"] verbs: ["get","watch","list"] - apiGroups: ["extensions","networking.k8s.io"] resources: ["ingresses"] verbs: ["get","watch","list"] - apiGroups: [""] resources: ["nodes"] verbs: ["get", "watch", "list"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: external-dns-viewer roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: external-dns subjects: - kind: ServiceAccount name: external-dns namespace: default --- apiVersion: apps/v1 kind: Deployment metadata: name: external-dns spec: strategy: type: Recreate selector: matchLabels: app: external-dns template: metadata: labels: app: external-dns spec: serviceAccountName: external-dns containers: - name: external-dns image: k8s.gcr.io/external-dns/external-dns:v0.8.0 args: - --source=service - --source=ingress - --domain-filter=external-dns-test.gcp.zalan.do # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones - --provider=google # - --google-project=zalando-external-dns-test # Use this to specify a project different from the one external-dns is running inside - --google-zone-visibility=private # Use this to filter to only zones with this visibility. Set to either 'public' or 'private'. 
Omitting will match public and private zones - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization - --registry=txt - --txt-owner-id=my-identifier ``` ## Step-11: Install external-dns using Helm ```t # Add external-dns repo to Helm helm repo add external-dns https://kubernetes-sigs.github.io/external-dns/ # Install Helm Chart helm upgrade --install external-dns external-dns/external-dns \ --set provider=google \ --set policy=sync \ --set google-zone-visibility=public \ --set txt-owner-id=k8s \ --set serviceAccount.create=false \ --set serviceAccount.name=external-dns-ksa \ -n external-dns-ns # Optional Setting (Important Note: will make ExternalDNS see only the Cloud DNS zones matching provided domain, omit to process all available Cloud DNS zones) --set domain-filter=kalyanreddydaida.com \ ``` ## Step-12: Verify external-dns deployment ```t # List Helm helm list -n external-dns-ns # List Kubernetes Service Account kubectl -n external-dns-ns get sa # Describe Kubernetes Service Account kubectl -n external-dns-ns describe sa external-dns-ksa # List All resources from default Namespace kubectl -n external-dns-ns get all # List pods (external-dns pod should be in running state) kubectl -n external-dns-ns get pods # Verify Deployment by checking logs kubectl -n external-dns-ns logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+') [or] kubectl -n external-dns-ns get pods kubectl -n external-dns-ns logs -f <External-DNS-Pod-Name> ``` ## References - https://github.com/kubernetes-sigs/external-dns/tree/master/charts/external-dns - https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/gke.md ## External-DNS Logs from Reference ```log W0624 07:14:15.829747 14199 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead. To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke Error from server (BadRequest): container "external-dns" in pod "external-dns-6f49549d96-2jd5q" is waiting to start: ContainerCreating Kalyans-Mac-mini:48-GKE-Ingress-IAP kalyanreddy$ kubectl -n external-dns-ns logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+') W0624 07:14:23.520269 14201 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead. To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke W0624 07:14:24.512312 14203 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead. 
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke time="2022-06-24T01:44:18Z" level=info msg="config: {APIServerURL: KubeConfig: RequestTimeout:30s DefaultTargets:[] ContourLoadBalancerService:heptio-contour/contour GlooNamespace:gloo-system SkipperRouteGroupVersion:zalando.org/v1 Sources:[service ingress] Namespace: AnnotationFilter: LabelFilter: FQDNTemplate: CombineFQDNAndAnnotation:false IgnoreHostnameAnnotation:false IgnoreIngressTLSSpec:false IgnoreIngressRulesSpec:false Compatibility: PublishInternal:false PublishHostIP:false AlwaysPublishNotReadyAddresses:false ConnectorSourceServer:localhost:8080 Provider:google GoogleProject: GoogleBatchChangeSize:1000 GoogleBatchChangeInterval:1s GoogleZoneVisibility: DomainFilter:[] ExcludeDomains:[] RegexDomainFilter: RegexDomainExclusion: ZoneNameFilter:[] ZoneIDFilter:[] AlibabaCloudConfigFile:/etc/kubernetes/alibaba-cloud.json AlibabaCloudZoneType: AWSZoneType: AWSZoneTagFilter:[] AWSAssumeRole: AWSBatchChangeSize:1000 AWSBatchChangeInterval:1s AWSEvaluateTargetHealth:true AWSAPIRetries:3 AWSPreferCNAME:false AWSZoneCacheDuration:0s AWSSDServiceCleanup:false AzureConfigFile:/etc/kubernetes/azure.json AzureResourceGroup: AzureSubscriptionID: AzureUserAssignedIdentityClientID: BluecatDNSConfiguration: BluecatConfigFile:/etc/kubernetes/bluecat.json BluecatDNSView: BluecatGatewayHost: BluecatRootZone: BluecatDNSServerName: BluecatDNSDeployType:no-deploy BluecatSkipTLSVerify:false CloudflareProxied:false CloudflareZonesPerPage:50 CoreDNSPrefix:/skydns/ RcodezeroTXTEncrypt:false AkamaiServiceConsumerDomain: AkamaiClientToken: AkamaiClientSecret: AkamaiAccessToken: AkamaiEdgercPath: AkamaiEdgercSection: InfobloxGridHost: InfobloxWapiPort:443 InfobloxWapiUsername:admin InfobloxWapiPassword: InfobloxWapiVersion:2.3.1 InfobloxSSLVerify:true InfobloxView: InfobloxMaxResults:0 InfobloxFQDNRegEx: InfobloxCreatePTR:false InfobloxCacheDuration:0 DynCustomerName: DynUsername: DynPassword: DynMinTTLSeconds:0 OCIConfigFile:/etc/kubernetes/oci.yaml InMemoryZones:[] OVHEndpoint:ovh-eu OVHApiRateLimit:20 PDNSServer:http://localhost:8081 PDNSAPIKey: PDNSTLSEnabled:false TLSCA: TLSClientCert: TLSClientCertKey: Policy:sync Registry:txt TXTOwnerID:default TXTPrefix: TXTSuffix: Interval:1m0s MinEventSyncInterval:5s Once:false DryRun:false UpdateEvents:false LogFormat:text MetricsAddress::7979 LogLevel:info TXTCacheInterval:0s TXTWildcardReplacement: ExoscaleEndpoint:https://api.exoscale.ch/dns ExoscaleAPIKey: ExoscaleAPISecret: CRDSourceAPIVersion:externaldns.k8s.io/v1alpha1 CRDSourceKind:DNSEndpoint ServiceTypeFilter:[] CFAPIEndpoint: CFUsername: CFPassword: RFC2136Host: RFC2136Port:0 RFC2136Zone: RFC2136Insecure:false RFC2136GSSTSIG:false RFC2136KerberosRealm: RFC2136KerberosUsername: RFC2136KerberosPassword: RFC2136TSIGKeyName: RFC2136TSIGSecret: RFC2136TSIGSecretAlg: RFC2136TAXFR:false RFC2136MinTTL:0s RFC2136BatchChangeSize:50 NS1Endpoint: NS1IgnoreSSL:false NS1MinTTLSeconds:0 TransIPAccountName: TransIPPrivateKeyFile: DigitalOceanAPIPageSize:50 ManagedDNSRecordTypes:[A CNAME] GoDaddyAPIKey: GoDaddySecretKey: GoDaddyTTL:0 GoDaddyOTE:false OCPRouterName:}" time="2022-06-24T01:44:18Z" level=info msg="Instantiating new Kubernetes client" time="2022-06-24T01:44:18Z" level=info msg="Using inCluster-config based on serviceaccount-token" time="2022-06-24T01:44:18Z" level=info msg="Created Kubernetes client https://10.104.0.1:443" time="2022-06-24T01:44:18Z" level=info msg="Google project auto-detected: 
kdaida123" time="2022-06-24T01:44:23Z" level=error msg="Get \"https://dns.googleapis.com/dns/v1/projects/kdaida123/managedZones?alt=json&prettyPrint=false\": compute: Received 403 `Unable to generate access token; IAM returned 403 Forbidden: The caller does not have permission\nThis error could be caused by a missing IAM policy binding on the target IAM service account.\nFor more information, refer to the Workload Identity documentation:\n\thttps://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#authenticating_to\n\n`" ``
--- title: GCP Google Kubernetes Engine GKE Private Cluster description: Implement GCP Google Kubernetes Engine GKE Private Cluster --- ## Step-01: Introduction - Create GKE Private Cluster - Create Cloud NAT - Deploy Sample App and Test - Perform Authorized Network Tests ## Step-02: Create Standard GKE Cluster - Go to Kubernetes Engine -> Clusters -> CREATE - Select **GKE Standard -> CONFIGURE** - **Cluster Basics** - **Name:** standard-cluster-private-1 - **Location type:** Regional - **Zone:** us-central1-a, us-central1-b, us-central1-c - **Release Channel** - **Release Channel:** Rapid Channel - **Version:** LATEST AVAILABLE ON THAT DAY - REST ALL LEAVE TO DEFAULTS - **NODE POOLS: default-pool** - **Node pool details** - **Name:** default-pool - **Number of Nodes (per Zone):** 1 - **Nodes: Configure node settings** - **Image type:** Container-Optimized OS - **Machine configuration** - **GENERAL PURPOSE SERIES:** e2 - **Machine Type:** e2-small - **Boot disk type:** standard persistent disk - **Boot disk size(GB):** 20 - **Enable Nodes on Spot VMs:** CHECKED - **Node Networking:** REVIEW AND LEAVE TO DEFAULTS - **Node Security:** - **Access scopes:** Allow full access to all Cloud APIs - REST ALL REVIEW AND LEAVE TO DEFAULTS - **Node Metadata:** REVIEW AND LEAVE TO DEFAULTS - **CLUSTER** - **Automation:** REVIEW AND LEAVE TO DEFAULTS - **Networking:** - **Network Access:** Private Cluster - **Access control plane using its external IP address:** BY DEFAULT CHECKED - **Important Note:** Disabling this option locks down external access to the cluster control plane. There is still an external IP address used by Google for cluster management purposes, but the IP address is not accessible to anyone. This setting is permanent. - **Enable Control Plane Global Access:** CHECKED - **Control Plane IP Range:** 172.16.0.0/28 - **CHECK THIS BOX: Enable Dataplane V2** CHECK IT - IN FUTURE VERSIONS IT WILL BE BY DEFAULT ENABLED - **Security:** REVIEW AND LEAVE TO DEFAULTS - **CHECK THIS BOX: Enable Workload Identity** IN FUTURE VERSIONS IT WILL BE BY DEFAULT ENABLED - **Metadata:** REVIEW AND LEAVE TO DEFAULTS - **Features:** REVIEW AND LEAVE TO DEFAULTS - **Enable Compute Engine Persistent Disk CSI Driver:** SHOULD BE CHECKED BY DEFAULT - VERIFY - **Enable Filestore CSI Driver:** CHECKED - CLICK ON **CREATE** ## Step-03: Review kube-manifests: 01-kubernetes-deployment.yaml ```yaml apiVersion: apps/v1 kind: Deployment metadata: #Dictionary name: myapp1-deployment spec: # Dictionary replicas: 2 selector: matchLabels: app: myapp1 template: metadata: # Dictionary name: myapp1-pod labels: # Dictionary app: myapp1 # Key value pairs spec: containers: # List - name: myapp1-container image: stacksimplify/kubenginx:1.0.0 ports: - containerPort: 80 imagePullPolicy: Always ``` ## Step-04: Review kube-manifest: 02-kubernetes-loadbalancer-service.yaml ```yaml apiVersion: v1 kind: Service metadata: name: myapp1-lb-service spec: type: LoadBalancer # ClusterIp, # NodePort selector: app: myapp1 ports: - name: http port: 80 # Service Port targetPort: 80 # Container Port ``` ## Step-05: Deploy Kubernetes Manifests ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT> gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 # Change Directory cd 20-GKE-Private-Cluster # Deploy Kubernetes Manifests kubectl apply -f kube-manifests/ # Verify Pods kubectl get pods Observation: SHOULD FAIL
- UNABLE TO DOWNLOAD DOCKER IMAGE FROM DOCKER HUB # Describe Pod kubectl describe pod <POD-NAME> # Clean-Up kubectl delete -f kube-manifests/ ``` ## Step-06: Create Cloud NAT - Go to Network Services -> CREATE CLOUD NAT GATEWAY - **Gateway Name:** gke-us-central1-default-cloudnat-gw - **Select Cloud Router:** - **Network:** default - **Region:** us-central1 - **Cloud Router:** CREATE NEW ROUTER - **Name:** gke-us-central1-cloud-router - **Description:** GKE Cloud Router Region us-central1 - **Network:** default (POPULATED by default) - **Region:** us-central1 (POPULATED by default) - **BGP Peer keepalive interval:** 20 seconds (LEAVE TO DEFAULT) - Click on **CREATE** - **Cloud NAT Mapping:** LEAVE TO DEFAULTS - **Destination (external):** LEAVE TO DEFAULTS - **Stackdriver logging:** LEAVE TO DEFAULTS - **Port allocation:** - CHECK **Enable Dynamic Port Allocation** - **Timeouts for protocol connections:** LEAVE TO DEFAULTS - CLICK on **CREATE** ## Step-07: Deploy Kubernetes Manifests ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT> gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 # Deploy Kubernetes Manifests kubectl apply -f kube-manifests # Verify Pods kubectl get pods Observation: SHOULD BE ABLE TO DOWNLOAD THE DOCKER IMAGE # List Services kubectl get svc # Access Application http://<External-IP> # Clean-Up kubectl delete -f kube-manifests ``` ## Step-08: Authorized Network Test1: My Network - Goto -> standard-cluster-private-1 -> DETAILS -> NETWORKING - Control plane authorized networks -> EDIT - **Enable control plane authorized networks:** CHECKED - CLICK ON **ADD AUTHORIZED NETWORK** - **NAME:** MY-NETWORK-1 - **NETWORK:** 10.10.10.0/24 - Click on **DONE** - Click on **SAVE CHANGES** ```t # List Kubernetes Nodes kubectl get nodes Observation: 1. Access to GKE API Service from our local desktop kubectl cli is lost 2. Access to GKE API Service is now allowed only from "10.10.10.0/24" network 3. In short, even though our GKE API Server has an Internet-enabled endpoint, its access is restricted to a specific network of IPs ## Sample Output Kalyan-Mac-mini:google-kubernetes-engine kalyan$ kubectl get nodes Unable to connect to the server: dial tcp 34.70.169.161:443: i/o timeout Kalyan-Mac-mini:google-kubernetes-engine kalyan$ ``` ## Step-09: Authorized Network Test2: My Desktop - Go to link [whatismyip](https://www.whatismyip.com/) and get desktop public IP - Goto -> standard-cluster-private-1 -> DETAILS -> NETWORKING - Control plane authorized networks -> EDIT - **Enable control plane authorized networks:** CHECKED - CLICK ON **ADD AUTHORIZED NETWORK** - **NAME:** MY-DESKTOP-1 - **NETWORK:** <YOUR-DESKTOP-PUBLIC-IP>/32 - Click on **DONE** - Click on **SAVE CHANGES** ```t # List Kubernetes Nodes kubectl get nodes Observation: 1.
Access to GKE API Service from our local desktop kubectl cli should succeed ## Sample Output Kalyans-Mac-mini:google-kubernetes-engine kalyan$ kubectl get nodes NAME STATUS ROLES AGE VERSION gke-standard-cluster-pri-default-pool-90b1f67b-4z71 Ready <none> 55m v1.24.3-gke.900 gke-standard-cluster-pri-default-pool-90b1f67b-6xn6 Ready <none> 55m v1.24.3-gke.900 gke-standard-cluster-pri-default-pool-90b1f67b-dggg Ready <none> 55m v1.24.3-gke.900 Kalyans-Mac-mini:google-kubernetes-engine kalyan$ ``` ## Step-10: Authorized Network Test3: Delete both network rules (Roll back to old state) - Goto -> standard-cluster-private-1 -> DETAILS -> NETWORKING - Control plane authorized networks -> EDIT - **Enable control plane authorized networks:** UN-CHECKED - AUTHORIZED NETWORKS -> DELETE -> MY-NETWORK-1, MY-DESKTOP-1 - Click on **SAVE CHANGES** ```t # List Kubernetes Nodes kubectl get nodes Observation: 1. Access to GKE API Service from our local desktop kubectl cli should succeed ## Sample Output Kalyans-Mac-mini:google-kubernetes-engine kalyan$ kubectl get nodes NAME STATUS ROLES AGE VERSION gke-standard-cluster-pri-default-pool-90b1f67b-4z71 Ready <none> 55m v1.24.3-gke.900 gke-standard-cluster-pri-default-pool-90b1f67b-6xn6 Ready <none> 55m v1.24.3-gke.900 gke-standard-cluster-pri-default-pool-90b1f67b-dggg Ready <none> 55m v1.24.3-gke.900 Kalyans-Mac-mini:google-kubernetes-engine kalyan$ ``` ## Additional Reference - [GKE Private Cluster with Terraform](https://github.com/GoogleCloudPlatform/gke-private-cluster-demo)
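For readers who prefer the CLI over the console, the sketch below is a rough gcloud equivalent of the cluster created in Step-02 (private, regional, VPC-native cluster with Workload Identity). The flags and values are assumptions mapped from the console choices above, not an exact reproduction of every setting; review them before running.

```t
# Approximate CLI equivalent of Step-02 (values assumed from the console walkthrough)
gcloud container clusters create standard-cluster-private-1 \
  --project kdaida123 \
  --region us-central1 \
  --num-nodes 1 \
  --machine-type e2-small \
  --enable-ip-alias \
  --enable-private-nodes \
  --master-ipv4-cidr 172.16.0.0/28 \
  --workload-pool=kdaida123.svc.id.goog
```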
--- title: GKE Storage with GCP Cloud SQL - MySQL Public Instance description: Use GCP Cloud SQL MySQL DB for GKE Workloads --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials <CLUSTER-NAME> --zone <ZONE> --project <PROJECT> # Replace Values CLUSTER-NAME, ZONE, PROJECT gcloud container clusters get-credentials standard-cluster-1 --zone us-central1-c --project kdaida123 ``` ## Step-01: Introduction - GKE Private Cluster - GCP Cloud SQL with Public IP and Authorized Network for DB as entire internet (0.0.0.0/0) ## Step-02: Create Google Cloud SQL MySQL Instance - Go to SQL -> Choose MySQL - **Instance ID:** ums-db-public-instance - **Password:** KalyanReddy13 - **Database Version:** MYSQL 8.0 - **Choose a configuration to start with:** Development - **Choose region and zonal availability** - **Region:** US-central1(IOWA) - **Zonal availability:** Single Zone - **Primary Zone:** us-central1-a - **Customize your instance** - **Machine Type** - **Machine Type:** LightWeight (1 vCPU, 3.75GB) - **STORAGE** - **Storage Type:** HDD - **Storage Capacity:** 10GB - **Enable automatic storage increases:** CHECKED - **CONNECTIONS** - **Instance IP Assignment:** - **Private IP:** UNCHECKED - **Public IP:** CHECKED - **Authorized networks** - **Name:** All-Internet - **Network:** 0.0.0.0/0 - Click on **DONE** - **DATA PROTECTION** - **Automatic Backups:** UNCHECKED - **Enable Deletion protection:** UNCHECKED - **Maintenance:** Leave to defaults - **Flags:** Leave to defaults - **Labels:** Leave to defaults - Click on **CREATE INSTANCE** ## Step-03: Perform Telnet Test from local desktop ```t # Telnet Test telnet <MYSQL-DB-PUBLIC-IP> 3306 # Replace Public IP telnet 35.184.228.151 3306 ## SAMPLE OUTPUT Kalyans-Mac-mini:25-GKE-Storage-with-GCP-Cloud-SQL kalyanreddy$ telnet 35.184.228.151 3306 Trying 35.184.228.151... Connected to 151.228.184.35.bc.googleusercontent.com. Escape character is '^]'. Q 8.0.26-google?h'Sxcr+?nd'h<a(X`z=mysql_native_password2#08S01Got timeout reading communication packetsConnection closed by foreign host. Kalyans-Mac-mini:25-GKE-Storage-with-GCP-Cloud-SQL kalyanreddy$ ``` ## Step-04: Create DB Schema webappdb - Go to SQL -> ums-db-public-instance -> Databases -> **CREATE DATABASE** - **Database Name:** webappdb - **Character set:** utf8 - **Collation:** Default collation - Click on **CREATE** ## Step-05: 01-MySQL-externalName-Service.yaml - Update Cloud SQL MySQL DB `Public IP` in ExternalName Service ```yaml apiVersion: v1 kind: Service metadata: name: mysql-externalname-service spec: type: ExternalName externalName: 35.184.228.151 ``` ## Step-06: 02-Kubernetes-Secrets.yaml ```yaml apiVersion: v1 kind: Secret metadata: name: mysql-db-password type: Opaque data: db-password: S2FseWFuUmVkZHkxMw== # Base64 of KalyanReddy13 # https://www.base64encode.org/ # Base64 of KalyanReddy13 is S2FseWFuUmVkZHkxMw== ``` ## Step-07: 03-UserMgmtWebApp-Deployment.yaml ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: usermgmt-webapp labels: app: usermgmt-webapp spec: replicas: 1 selector: matchLabels: app: usermgmt-webapp template: metadata: labels: app: usermgmt-webapp spec: initContainers: - name: init-db image: busybox:1.31 command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL Server deployment"; while ! 
nc -z mysql-externalname-service 3306; do sleep 1; printf "-"; done; echo -e " >> MySQL DB Server has started";'] containers: - name: usermgmt-webapp image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB imagePullPolicy: Always ports: - containerPort: 8080 env: - name: DB_HOSTNAME value: "mysql-externalname-service" - name: DB_PORT value: "3306" - name: DB_NAME value: "webappdb" - name: DB_USERNAME value: "root" - name: DB_PASSWORD valueFrom: secretKeyRef: name: mysql-db-password key: db-password ``` ## Step-08: 04-UserMgmtWebApp-LoadBalancer-Service.yaml ```yaml apiVersion: v1 kind: Service metadata: name: usermgmt-webapp-lb-service labels: app: usermgmt-webapp spec: type: LoadBalancer selector: app: usermgmt-webapp ports: - port: 80 # Service Port targetPort: 8080 # Container Port ``` ## Step-09: Deploy kube-manifests ```t # Deploy Kubernetes Manifests kubectl apply -f kube-manifests/ # List Deployments kubectl get deploy # List Pods kubectl get pods # List Services kubectl get svc # Verify Pod Logs kubectl get pods kubectl logs -f <USERMGMT-POD-NAME> kubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5 ``` ## Step-10: Access Application ```t # List Services kubectl get svc # Access Application http://<ExternalIP-from-get-service-output> Username: admin101 Password: password101 ``` ## Step-11: Connect to MySQL DB (Cloud SQL) from GKE Cluster using kubectl ```t ## Verify from Kubernetes Cluster, we are able to connect to MySQL DB # Template kubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h <Kubernetes-ExternalName-Service> -u <USER_NAME> -p<PASSWORD> # MySQL Client 8.0: Replace External Name Service, Username and Password kubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h mysql-externalname-service -u root -pKalyanReddy13 mysql> show schemas; mysql> use webappdb; mysql> show tables; mysql> select * from user; mysql> exit ``` ## Step-12: Create New user admin102 and verify in Cloud SQL MySQL webappdb ```t # Access Application http://<ExternalIP-from-get-service-output> Username: admin101 Password: password101 # Create New User (Used for testing `allowVolumeExpansion: true` Option) Username: admin102 Password: password102 First Name: fname102 Last Name: lname102 Email Address: admin102@stacksimplify.com Social Security Address: ssn102 # MySQL Client 8.0: Replace External Name Service, Username and Password kubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h mysql-externalname-service -u root -pKalyanReddy13 mysql> show schemas; mysql> use webappdb; mysql> show tables; mysql> select * from user; mysql> exit ``` ## Step-13: Clean-Up ```t # Delete Kubernetes Objects kubectl delete -f kube-manifests/ # Delete Cloud SQL MySQL Instance 1. Go to SQL -> ums-db-public-instance -> DELETE 2. Instance ID: ums-db-public-instance 3. Click on DELETE ```
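Step-02 and Step-13 create and delete the Cloud SQL instance through the console. A hedged gcloud sketch of the same flow follows; the tier and storage flags are assumptions mapped from the console options (Lightweight 1 vCPU / 3.75 GB, HDD, 10 GB, authorized network 0.0.0.0/0), so adjust them for your project before use.

```t
# Approximate CLI equivalent of Step-02: create the public Cloud SQL MySQL instance
gcloud sql instances create ums-db-public-instance \
  --database-version=MYSQL_8_0 \
  --region=us-central1 \
  --tier=db-custom-1-3840 \
  --storage-type=HDD \
  --storage-size=10GB \
  --storage-auto-increase \
  --authorized-networks=0.0.0.0/0 \
  --root-password=KalyanReddy13

# Approximate CLI equivalent of Step-13: delete the instance when you are done
gcloud sql instances delete ums-db-public-instance
```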
--- title: GCP Google Kubernetes Engine GKE Ingress Context Path Routing description: Implement GCP Google Kubernetes Engine GKE Ingress Context Path Routing --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT> # Replace Values CLUSTER-NAME, ZONE, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 ``` ## Step-01: Introduction - Ingress Context Path based Routing - Discuss about the Architecture we are going to build as part of this Section - We are going to deploy all these 3 apps in kubernetes with context path based routing enabled in Ingress Controller - /app1/* - should go to app1-nginx-nodeport-service - /app2/* - should go to app2-nginx-nodeport-service - /* - should go to app3-nginx-nodeport-service ## Step-02: Review Nginx App1, App2 & App3 Deployment & Service - Differences for all 3 apps will be only one field from kubernetes manifests perspective and additionally their naming conventions - **Kubernetes Deployment:** Container Image name - **App1 Nginx: 01-Nginx-App1-Deployment-and-NodePortService.yaml** - **image:** stacksimplify/kube-nginxapp1:1.0.0 - **App2 Nginx: 02-Nginx-App2-Deployment-and-NodePortService.yaml** - **image:** stacksimplify/kube-nginxapp2:1.0.0 - **App3 Nginx: 03-Nginx-App3-Deployment-and-NodePortService.yaml** - **image:** stacksimplify/kubenginx:1.0.0 ## Step-03: 04-Ingress-ContextPath-Based-Routing.yaml ```yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-cpr annotations: # External Load Balancer kubernetes.io/ingress.class: "gce" spec: defaultBackend: service: name: app3-nginx-nodeport-service port: number: 80 rules: - http: paths: - path: /app1 pathType: Prefix backend: service: name: app1-nginx-nodeport-service port: number: 80 - path: /app2 pathType: Prefix backend: service: name: app2-nginx-nodeport-service port: number: 80 # - path: / # pathType: Prefix # backend: # service: # name: app3-nginx-nodeport-service # port: # number: 80 ``` ## Step-04: Deploy kube-manifests and test ```t # Deploy Kubernetes manifests kubectl apply -f kube-manifests # List Pods kubectl get pods # List Services kubectl get svc # List Ingress Load Balancers kubectl get ingress # Describe Ingress and view Rules kubectl describe ingress ingress-cpr ``` ## Step-05: Access Application ```t # Important Note Wait for 2 to 3 minutes for the Load Balancer to completely create and ready for use else we will get HTTP 502 errors # Access Application http://<ADDRESS-FIELD-FROM-GET-INGRESS-OUTPUT>/app1/index.html http://<ADDRESS-FIELD-FROM-GET-INGRESS-OUTPUT>/app2/index.html http://<ADDRESS-FIELD-FROM-GET-INGRESS-OUTPUT>/ ``` ## Step-06: Verify Load Balancer - Go to Load Balancing -> Click on Load balancer ### Load Balancer View - DETAILS Tab - Frontend - Host and Path Rules - Backend Services - Health Checks - MONITORING TAB - CACHING TAB ### Load Balancer Components View - FORWARDING RULES - TARGET PROXIES - BACKEND SERVICES - BACKEND BUCKETS - CERTIFICATES - TARGET POOLS ## Step-07: Clean Up ```t # Delete Kubernetes Resources kubectl delete -f kube-manifests ```
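Before running the clean-up in Step-07, the path routing from Step-05 can also be checked from the command line. A small sketch, assuming the Ingress has finished provisioning and exposes an IPv4 address:

```t
# Fetch the Ingress IP and exercise each context path (run before the Step-07 clean-up)
INGRESS_IP=$(kubectl get ingress ingress-cpr -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s http://$INGRESS_IP/app1/index.html   # expected: app1-nginx-nodeport-service
curl -s http://$INGRESS_IP/app2/index.html   # expected: app2-nginx-nodeport-service
curl -s http://$INGRESS_IP/                  # expected: default backend (app3-nginx-nodeport-service)
```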
--- title: GKE Persistent Disks - Use Regional PD description: Use Google Disks Regional PD for Kubernetes Workloads --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT> # Replace Values CLUSTER-NAME, ZONE, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 ``` 3. Feature: Compute Engine persistent disk CSI Driver - Verify the Feature **Compute Engine persistent disk CSI Driver** enabled in GKE Cluster. - This is required for mounting the Google Compute Engine Persistent Disks to Kubernetes Workloads in GKE Cluster. ## Step-01: Introduction - Use Regional Persistent Disks ## Step-02: List Kubernetes Storage Classes in GKE Cluster ```t # List Storage Classes kubectl get sc ``` ## Step-03: 00-storage-class.yaml ```yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: regionalpd-storageclass provisioner: pd.csi.storage.gke.io parameters: #type: pd-standard # Note: To use regional persistent disks of type pd-standard, set the PersistentVolumeClaim.storage attribute to 200Gi or higher. If you need a smaller persistent disk, use pd-ssd instead of pd-standard. type: pd-ssd replication-type: regional-pd volumeBindingMode: WaitForFirstConsumer allowedTopologies: - matchLabelExpressions: - key: topology.gke.io/zone values: - us-central1-c - us-central1-b ## Important Note - Regional PD # If using a regional cluster, you can leave allowedTopologies unspecified. If you do this, when you create a Pod that consumes a PersistentVolumeClaim which uses this StorageClass a regional persistent disk is provisioned with two zones. One zone is the same as the zone that the Pod is scheduled in. The other zone is randomly picked from the zones available to the cluster. # When using a zonal cluster, allowedTopologies must be set. # STORAGE CLASS # 1. A StorageClass provides a way for administrators # to describe the "classes" of storage they offer. # 2. 
Here we are offering GCP PD Storage for GKE Cluster
```

## Step-04: 01-persistent-volume-claim.yaml
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: regionalpd-storageclass
  resources:
    requests:
      storage: 4Gi
```

## Step-05: Other Kubernetes YAML Manifests
- No changes to the other Kubernetes YAML Manifests
- They are the same as in the previous section
  - 02-UserManagement-ConfigMap.yaml
  - 03-mysql-deployment.yaml
  - 04-mysql-clusterip-service.yaml
  - 05-UserMgmtWebApp-Deployment.yaml
  - 06-UserMgmtWebApp-LoadBalancer-Service.yaml

## Step-06: Deploy kube-manifests
```t
# Deploy Kubernetes Manifests
kubectl apply -f kube-manifests/

# List Storage Class
kubectl get sc

# List PVC
kubectl get pvc

# List PV
kubectl get pv

# List ConfigMaps
kubectl get configmap

# List Deployments
kubectl get deploy

# List Pods
kubectl get pods

# List Services
kubectl get svc

# Verify Pod Logs
kubectl get pods
kubectl logs -f <USERMGMT-POD-NAME>
kubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5
```

## Step-07: Verify Persistent Disks
- Go to Compute Engine -> Storage -> Disks
- Search for the `4GB` Persistent Disk
- **Observation:** Review the below items
  - **Zones:** us-central1-b, us-central1-c
  - **Type:** Regional SSD persistent disk
  - **In use by:** gke-standard-cluster-1-default-pool-db7b638f-j5lk

## Step-08: Access Application
```t
# List Services
kubectl get svc

# Access Application
http://<ExternalIP-from-get-service-output>
Username: admin101
Password: password101
```

## Step-09: Clean-Up
```t
# Delete Kubernetes Objects
kubectl delete -f kube-manifests/

# Verify if PD is deleted
Go to Compute Engine -> Disks -> Search for the 4GB Regional SSD persistent disk. It should be deleted.
```

## References
- [Regional PD](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/regional-pd)
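To complement the console check in Step-07, the same information can be pulled from the CLI. A minimal sketch (the PV name placeholder comes from the `kubectl get pv` output; the gcloud comments describe what you should typically see, so confirm against your own project):

```t
# List the PV that was dynamically provisioned for mysql-pv-claim
kubectl get pv

# Inspect the PV; for a regional PD the node affinity should list both zones
# (us-central1-b and us-central1-c) under topology.gke.io/zone
kubectl describe pv <PV-NAME-FROM-GET-PV-OUTPUT>

# The backing disk is also visible from gcloud; a regional disk should show
# a region (not a single zone) as its location
gcloud compute disks list
```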
--- title: GKE Persistent Disks Existing StorageClass standard-rwo description: Use existing storageclass standard-rwo in Kubernetes Workloads --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT> # Replace Values CLUSTER-NAME, ZONE, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 ``` 3. Feature: Compute Engine persistent disk CSI Driver - Verify the Feature **Compute Engine persistent disk CSI Driver** enabled in GKE Cluster. - This is required for mounting the Google Compute Engine Persistent Disks to Kubernetes Workloads in GKE Cluster. ## Step-01: Introduction - Understand Kubernetes Objects 01. Kubernetes PersistentVolumeClaim 02. Kubernetes ConfigMap 03. Kubernetes Deployment 04. Kubernetes Volumes 05. Kubernetes Volume Mounts 06. Kubernetes Environment Variables 07. Kubernetes ClusterIP Service 08. Kubernetes Init Containers 09. Kubernetes Service of Type LoadBalancer 10. Kubernetes StorageClass - Use predefined Storage Class `standard-rwo` - `standard-rwo` uses balanced persistent disk ## Step-02: List Kubernetes Storage Classes in GKE Cluster ```t # List Storage Classes kubectl get sc ``` ## Step-03: 01-persistent-volume-claim.yaml ```yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mysql-pv-claim spec: accessModes: - ReadWriteOnce storageClassName: standard-rwo resources: requests: storage: 4Gi # NEED FOR PVC # 1. Dynamic volume provisioning allows storage volumes to be created # on-demand. # 2. Without dynamic provisioning, cluster administrators have to manually # make calls to their cloud or storage provider to create new storage # volumes, and then create PersistentVolume objects to represent them in k8s # 3. The dynamic provisioning feature eliminates the need for cluster # administrators to pre-provision storage. Instead, it automatically # provisions storage when it is requested by users. # 4. PVC: Users request dynamically provisioned storage by including # a storage class in their PersistentVolumeClaim ``` ## Step-04: 02-UserManagement-ConfigMap.yaml ```yaml apiVersion: v1 kind: ConfigMap metadata: name: usermanagement-dbcreation-script data: mysql_usermgmt.sql: |- DROP DATABASE IF EXISTS webappdb; CREATE DATABASE webappdb; ``` ## Step-05: 03-mysql-deployment.yaml ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: mysql spec: replicas: 1 selector: matchLabels: app: mysql strategy: type: Recreate # terminates all the pods and replaces them with the new version. template: metadata: labels: app: mysql spec: containers: - name: mysql image: mysql:8.0 env: - name: MYSQL_ROOT_PASSWORD value: dbpassword11 ports: - containerPort: 3306 name: mysql volumeMounts: - name: mysql-persistent-storage mountPath: /var/lib/mysql - name: usermanagement-dbcreation-script mountPath: /docker-entrypoint-initdb.d #https://hub.docker.com/_/mysql Refer Initializing a fresh instance volumes: - name: mysql-persistent-storage persistentVolumeClaim: claimName: mysql-pv-claim - name: usermanagement-dbcreation-script configMap: name: usermanagement-dbcreation-script # VERY IMPORTANT POINTS ABOUT CONTAINERS AND POD VOLUMES: ## 1. On-disk files in a container are ephemeral ## 2. One problem is the loss of files when a container crashes. ## 3. 
Kubernetes Volumes solves above two as these volumes are configured to POD and not container. ## Only they can be mounted in Container ## 4. Using Compute Enginer Persistent Disk CSI Driver is a super generalized approach ## for having Persistent Volumes for workloads in Kubernetes ## ENVIRONMENT VARIABLES # 1. When you create a Pod, you can set environment variables for the # containers that run in the Pod. # 2. To set environment variables, include the env or envFrom field in # the configuration file. ## DEPLOYMENT STRATEGIES # 1. Rolling deployment: This strategy replaces pods running the old version of the application with the new version, one by one, without downtime to the cluster. # 2. Recreate: This strategy terminates all the pods and replaces them with the new version. # 3. Ramped slow rollout: This strategy rolls out replicas of the new version, while in parallel, shutting down old replicas. # 4. Best-effort controlled rollout: This strategy specifies a β€œmax unavailable” parameter which indicates what percentage of existing pods can be unavailable during the upgrade, enabling the rollout to happen much more quickly. # 5. Canary Deployment: This strategy uses a progressive delivery approach, with one version of the application serving maximum users, and another, newer version serving a small set of test users. The test deployment is rolled out to more users if it is successful. ``` ## Step-06: 04-mysql-clusterip-service.yaml ```yaml apiVersion: v1 kind: Service metadata: name: mysql spec: selector: app: mysql ports: - port: 3306 clusterIP: None # This means we are going to use Pod IP ``` ## Step-07: 05-UserMgmtWebApp-Deployment.yaml ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: usermgmt-webapp labels: app: usermgmt-webapp spec: replicas: 1 selector: matchLabels: app: usermgmt-webapp template: metadata: labels: app: usermgmt-webapp spec: initContainers: - name: init-db image: busybox:1.31 command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL Server deployment"; while ! 
nc -z mysql 3306; do sleep 1; printf "-"; done; echo -e " >> MySQL DB Server has started";'] containers: - name: usermgmt-webapp image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB imagePullPolicy: Always ports: - containerPort: 8080 env: - name: DB_HOSTNAME value: "mysql" - name: DB_PORT value: "3306" - name: DB_NAME value: "webappdb" - name: DB_USERNAME value: "root" - name: DB_PASSWORD value: "dbpassword11" ``` ## Step-08: 06-UserMgmtWebApp-LoadBalancer-Service.yaml ```yaml apiVersion: v1 kind: Service metadata: name: usermgmt-webapp-lb-service labels: app: usermgmt-webapp spec: type: LoadBalancer selector: app: usermgmt-webapp ports: - port: 80 # Service Port targetPort: 8080 # Container Port ``` ## Step-09: Deploy kube-manifests ```t # Deploy Kubernetes Manifests kubectl apply -f kube-manifests/ # List Storage Classes kubectl get sc # List PVC kubectl get pvc # List PV kubectl get pv # List ConfigMaps kubectl get configmap # List Deployments kubectl get deploy # List Pods kubectl get pods # List Services kubectl get svc # Verify Pod Logs kubectl get pods kubectl logs -f <USERMGMT-POD-NAME> kubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5 # Sample Message for Successful Start of JVM 2022-06-20 09:34:32.519 INFO 1 --- [ost-startStop-1] .r.SpringbootSecurityInternalApplication : Started SpringbootSecurityInternalApplication in 14.891 seconds (JVM running for 23.283) 20-Jun-2022 09:34:32.593 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployWAR Deployment of web application archive /usr/local/tomcat/webapps/ROOT.war has finished in 21,016 ms 20-Jun-2022 09:34:32.623 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-apr-8080"] 20-Jun-2022 09:34:32.688 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-apr-8009"] 20-Jun-2022 09:34:32.713 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 21275 ms ``` ## Step-10: Verify Persistent Disks - Go to Compute Engine -> Storage -> Disks - Search for `4GB` Persistent Disk ## Step-11: Verify Kubernetes Workloads, Services ConfigMaps on Kubernetes Engine Dashboard ```t # Verify Workloads Go to Kubernetes Engine -> Workloads Observation: 1. You should see "mysql" and "usermgmt-webapp" deployments # Verify Services Go to Kubernetes Engine -> Services & Ingress Observation: 1. You should "mysql ClusterIP Service" and "usermgmt-webapp-lb-service" # Verify ConfigMaps Go to Kubernetes Engine -> Secrets & ConfigMaps Observation: 1. You should find the ConfigMap "usermanagement-dbcreation-script" # Verify Persistent Volume Claim Go to Kubernetes Engine -> Storage -> PERSISTENT VOLUME CLAIMS TAB Observation: 1. You should see PVC "mysql-pv-claim" # Verify StorageClass Go to Kubernetes Engine -> Storage -> STORAGE CLASSES TAB Observation: 1. You should see 3 Storage Classes out of which "standard-rwo" and "premium-rwo" are part of Compute Engine Persistent Disks (latest and greatest - Recommended for use) 2. 
Not recommended to use Storage Class with name "standard" (Older version)
```

## Step-12: Connect to MySQL Database
```t
# Template: Connect to MySQL Database using kubectl
kubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h <Kubernetes-ClusterIP-Service> -u <USER_NAME> -p<PASSWORD>

# MySQL Client 8.0: Replace ClusterIP Service, Username and Password
kubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h mysql -u root -pdbpassword11
mysql> show schemas;
mysql> use webappdb;
mysql> show tables;
mysql> select * from user;
mysql> exit
```

## Step-13: Access Application
```t
# List Services
kubectl get svc

# Access Application
http://<ExternalIP-from-get-service-output>
Username: admin101
Password: password101

# Create New User
Username: admin102
Password: password102
First Name: fname102
Last Name: lname102
Email Address: admin102@stacksimplify.com
Social Security Address: ssn102

# Verify this user in MySQL DB
# Template: Connect to MySQL Database using kubectl
kubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h <Kubernetes-ClusterIP-Service> -u <USER_NAME> -p<PASSWORD>

# MySQL Client 8.0: Replace ClusterIP Service, Username and Password
kubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h mysql -u root -pdbpassword11
mysql> show schemas;
mysql> use webappdb;
mysql> show tables;
mysql> select * from user;

Observation:
1. You should find the user you just created in the browser successfully stored in the MySQL DB.
2. In simple terms, we have done the following:
   a. Created a MySQL k8s Deployment in the GKE Cluster
   b. Created a Java WebApplication k8s Deployment in the GKE Cluster
   c. Accessed the Application in a browser using the GKE Load Balancer IP
   d. Created a new user in this application, and that user was successfully stored in the MySQL DB
   e. Verified the end-to-end flow from Browser to DB using the GKE Cluster
```

## Step-14: Verify GCE PD CSI Driver Logging
- https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/gce-pd-csi-driver
```t
# Cloud Logging Query
resource.type="k8s_container"
resource.labels.project_id="PROJECT_ID"
resource.labels.cluster_name="CLUSTER_NAME"
resource.labels.namespace_name="kube-system"
resource.labels.container_name="gce-pd-driver"

# Cloud Logging Query (Replace Values)
resource.type="k8s_container"
resource.labels.project_id="kdaida123"
resource.labels.cluster_name="standard-cluster-private-1"
resource.labels.namespace_name="kube-system"
resource.labels.container_name="gce-pd-driver"
```

## Step-15: Clean-Up
```t
# Delete kube-manifests
kubectl delete -f kube-manifests/
```

## Reference
- [Using the Compute Engine persistent disk CSI Driver](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/gce-pd-csi-driver)

## Additional-Data-01
1. It enables the automatic deployment and management of the persistent disk driver without having to manually set it up.
2. You can use customer-managed encryption keys (CMEKs). These keys are used to encrypt the data encryption keys that encrypt your data.
3. You can use volume snapshots with the Compute Engine persistent disk CSI Driver. Volume snapshots let you create a copy of your volume at a specific point in time. You can use this copy to bring a volume back to a prior state or to provision a new volume.
4. Bug fixes and feature updates are rolled out independently from minor Kubernetes releases. This release schedule typically results in a faster release cadence.
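Point 3 in the list above mentions volume snapshots. Purely as an illustration (not part of this section's kube-manifests, and assuming the VolumeSnapshot CRDs and snapshot controller are available in your GKE version), a snapshot of the `mysql-pv-claim` PVC taken through the PD CSI driver could look roughly like this; the resource names `pd-snapshot-class` and `mysql-pv-claim-snapshot` are made up for the example:

```yaml
# Illustrative sketch only - adjust names and policies for your environment
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: pd-snapshot-class
driver: pd.csi.storage.gke.io   # GKE Compute Engine PD CSI driver
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mysql-pv-claim-snapshot
spec:
  volumeSnapshotClassName: pd-snapshot-class
  source:
    persistentVolumeClaimName: mysql-pv-claim   # PVC created in Step-03
```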
## Additional-Data-02
- For Standard Clusters: The Compute Engine persistent disk CSI Driver is enabled by default on newly created clusters
  - Linux clusters: GKE version 1.18.10-gke.2100 or later, or 1.19.3-gke.2100 or later.
  - Windows clusters: GKE version 1.22.6-gke.300 or later, or 1.23.2-gke.300 or later.
- For Autopilot clusters: The Compute Engine persistent disk CSI Driver is enabled by default and cannot be disabled or edited.

## Additional-Data-03
- GKE automatically installs the following StorageClasses:
  - standard-rwo: using balanced persistent disk
  - premium-rwo: using SSD persistent disk
- For Autopilot clusters: The default StorageClass is standard-rwo, which uses the Compute Engine persistent disk CSI Driver.
- For Standard clusters: The default StorageClass uses the Kubernetes in-tree gcePersistentDisk volume plugin.
```t
# You can find the names of your installed StorageClasses by running the following command:
kubectl get sc
or
kubectl get storageclass
```
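As a follow-up to Additional-Data-03, you can confirm exactly how `standard-rwo` is configured. A minimal sketch; the field values in the comments are typical GKE defaults, so verify them against your own cluster's output:

```t
# Inspect the pre-installed standard-rwo StorageClass
kubectl get sc standard-rwo -o yaml

# Fields worth checking (typical GKE defaults; confirm in your cluster):
# provisioner: pd.csi.storage.gke.io
# parameters.type: pd-balanced
# volumeBindingMode: WaitForFirstConsumer
# reclaimPolicy: Delete
```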
--- title: GCP Google Kubernetes Engine GKE Workload Identity description: Implement GCP Google Kubernetes Engine GKE Workload Identity --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT> # Replace Values CLUSTER-NAME, REGION, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 ``` ## Step-01: Introduction 1. Create GCP IAM Service Account 2. Add IAM Roles to GCP IAM Service Account (add-iam-policy-binding) 3. Create Kubernetes Namespace 4. Create Kubernetes Service Account 5. Associate GCP IAM Service Account with Kubernetes Service Account (gcloud iam service-accounts add-iam-policy-binding) 6. Annotate Kubernetes Service Account with GCP IAM SA email Address (kubectl annotate serviceaccount) 7. Create a Sample App with and without Kubernetes Service Account 8. Test Workload Identity in GKE Cluster ## Step-02: Verify if Workload Identity Setting is enabled for GKE Cluster - Go to Kubernetes Engine -> Clusters -> standard-cluster-private-1 -> DETAILS Tab - In Security -> Workload Identity -> SHOULD BE IN ENABLED STATE ## Step-03: Create GCP IAM Service Account ```t # List IAM Service Accounts gcloud iam service-accounts list # List Google Cloud Projects gcloud projects list Observation: 1. Get the PROJECT_ID for your current project 2. Replace GSA_PROJECT_ID with PROJECT_ID for your current project # Create GCP IAM Service Account gcloud iam service-accounts create GSA_NAME --project=GSA_PROJECT_ID GSA_NAME: the name of the new IAM service account. GSA_PROJECT_ID: the project ID of the Google Cloud project for your IAM service account. GSA_PROJECT==PROJECT_ID # Replace GSA_NAME and GSA_PROJECT gcloud iam service-accounts create wid-gcpiam-sa --project=kdaida123 # List IAM Service Accounts gcloud iam service-accounts list ``` ## Step-04: Add IAM Roles to GCP IAM Service Account - We are giving `"roles/compute.viewer"` permissions to IAM Service Account. - From Kubernetes Pod, we are going to list the compute instances. - With the help of the `Google IAM Service account` and `Kubernetes Service Account`, access for Kubernetes Pod from GKE cluster should be successful for listing the google computing instances. ```t # Add IAM Roles to GCP IAM Service Account gcloud projects add-iam-policy-binding PROJECT_ID \ --member "serviceAccount:GSA_NAME@GSA_PROJECT_ID.iam.gserviceaccount.com" \ --role "ROLE_NAME" PROJECT_ID: your Google Cloud project ID. GSA_NAME: the name of your IAM service account. GSA_PROJECT_ID: the project ID of the Google Cloud project of your IAM service account. GSA_PROJECT_ID==PROJECT_ID ROLE_NAME: the IAM role to assign to your service account, like roles/spanner.viewer. 
# Replace PROJECT_ID, GSA_NAME, GSA_PROJECT_ID, ROLE_NAME gcloud projects add-iam-policy-binding kdaida123 \ --member "serviceAccount:wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com" \ --role "roles/compute.viewer" ``` ## Step-05: Create Kubernetes Namepsace and Service Account ```t # Create Kubernetes Namespace kubectl create namespace <NAMESPACE> kubectl create namespace wid-kns # Create Service Account kubectl create serviceaccount <KSA_NAME> --namespace <NAMESPACE> kubectl create serviceaccount wid-ksa --namespace wid-kns ``` ## Step-06: Associate GCP IAM Service Account with Kubernetes Service Account - Allow the Kubernetes service account to impersonate the IAM service account by adding an IAM policy binding between the two service accounts. - This binding allows the Kubernetes service account to act as the IAM service account. ```t # Associate GCP IAM Service Account with Kubernetes Service Account gcloud iam service-accounts add-iam-policy-binding GSA_NAME@GSA_PROJECT_ID.iam.gserviceaccount.com \ --role roles/iam.workloadIdentityUser \ --member "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]" # Replace GSA_NAME, GSA_PROJECT_ID, PROJECT_ID, NAMESPACE, KSA_NAME gcloud iam service-accounts add-iam-policy-binding wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com \ --role roles/iam.workloadIdentityUser \ --member "serviceAccount:kdaida123.svc.id.goog[wid-kns/wid-ksa]" ``` ## Step-07: Annotate Kubernetes Service Account with GCP IAM SA email Address - Annotate the Kubernetes service account with the email address of the IAM service account. ```t # Annotate Kubernetes Service Account with GCP IAM SA email Address kubectl annotate serviceaccount KSA_NAME \ --namespace NAMESPACE \ iam.gke.io/gcp-service-account=GSA_NAME@GSA_PROJECT_ID.iam.gserviceaccount.com # Replace KSA_NAME, NAMESPACE, GSA_NAME, GSA_PROJECT_ID kubectl annotate serviceaccount wid-ksa \ --namespace wid-kns \ iam.gke.io/gcp-service-account=wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com # Describe Kubernetes Service Account kubectl describe sa wid-ksa -n wid-kns ``` ## Step-08: 01-wid-demo-pod-without-sa.yaml ```yaml apiVersion: v1 kind: Pod metadata: name: wid-demo-without-sa namespace: wid-kns spec: containers: - image: google/cloud-sdk:slim name: wid-demo-without-sa command: ["sleep","infinity"] #serviceAccountName: wid-ksa nodeSelector: iam.gke.io/gke-metadata-server-enabled: "true" ``` ## Step-09: 02-wid-demo-pod-with-sa.yaml - **Important Note:** For Autopilot clusters, omit the nodeSelector field. Autopilot rejects this nodeSelector because all nodes use Workload Identity. 
```yaml apiVersion: v1 kind: Pod metadata: name: wid-demo-with-sa namespace: wid-kns spec: containers: - image: google/cloud-sdk:slim name: wid-demo-with-sa command: ["sleep","infinity"] serviceAccountName: wid-ksa nodeSelector: iam.gke.io/gke-metadata-server-enabled: "true" ``` ## Step-10: Deploy Kubernetes Manifests and Verify ```t # Deploy kube-manifests kubectl apply -f kube-manifests # List Pods kubectl -n wid-kns get pods ``` ## Step-11: Verify from Workload Identity without Service Account Pod ```t # Connect to Pod kubectl -n wid-kns exec -it wid-demo-without-sa -- /bin/bash # Default Service Account Pod is using currently gcloud auth list Observation: It chose the default account ## Sample Output root@wid-demo-without-sa:/# gcloud auth list Credentialed Accounts ACTIVE ACCOUNT * kdaida123.svc.id.goog To set the active account, run: $ gcloud config set account `ACCOUNT` root@wid-demo-without-sa:/# # List Compute Instances from workload-identity-demo pod gcloud compute instances list ## Sample Output root@wid-demo-without-sa:/# gcloud compute instances list ERROR: (gcloud.compute.instances.list) Some requests did not succeed: - Request had invalid authentication credentials. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project. root@wid-demo-without-sa:/# # Exit the container terminal exit ``` ## Step-12: Verify from Workload Identity with Service Account Pod ```t # Connect to Pod kubectl -n wid-kns exec -it wid-demo-with-sa -- /bin/bash # Default Service Account Pod is using currently gcloud auth list ## Sample Output root@wid-demo-with-sa:/# gcloud auth list Credentialed Accounts ACTIVE ACCOUNT * wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com To set the active account, run: $ gcloud config set account `ACCOUNT` root@wid-demo-with-sa:/# # List Compute Instances from workload-identity-demo pod gcloud compute instances list ## Sample Output root@wid-demo-with-sa:/# gcloud compute instances list NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS gke-standard-cluster-priva-new-pool-2-7c9415e8-5cds us-central1-c g1-small true 10.128.15.235 RUNNING gke-standard-cluster-priva-new-pool-2-7c9415e8-5mpz us-central1-c g1-small true 10.128.0.8 RUNNING gke-standard-cluster-priva-new-pool-2-7c9415e8-8qg6 us-central1-c g1-small true 10.128.0.2 RUNNING root@wid-demo-with-sa:/# ``` ## Step-13: Negative Usecase: Test access to Cloud DNS Record Sets ```t # gcloud list DNS Records gcloud dns record-sets list --zone=kalyanreddydaida-com Observation: 1. GCP IAM Service Account "wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com" doesnt have roles assigned related to Cloud DNS so we got HTTP 403 ## Sample Output root@wid-demo-with-sa:/# gcloud dns record-sets list --zone=kalyanreddydaida-com ERROR: (gcloud.dns.record-sets.list) HTTPError 403: Forbidden root@wid-demo-with-sa:/# # Exit the container terminal exit ``` ## Step-14: Give Cloud DNS Admin Role for GCP IAM Servic Account wid-gcpiam-sa ```t # Add IAM Roles to GCP IAM Service Account gcloud projects add-iam-policy-binding PROJECT_ID \ --member "serviceAccount:GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com" \ --role "ROLE_NAME" PROJECT_ID: your Google Cloud project ID. GSA_NAME: the name of your IAM service account. GSA_PROJECT: the project ID of the Google Cloud project of your IAM service account. ROLE_NAME: the IAM role to assign to your service account, like roles/spanner.viewer. 
GSA_PROJECT==PROJECT_ID # Replace PROJECT_ID, GSA_NAME, GSA_PROJECT, ROLE_NAME gcloud projects add-iam-policy-binding kdaida123 \ --member "serviceAccount:wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com" \ --role "roles/dns.admin" ``` ## Step-15: Verify from Workload Identity with Service Account Pod ```t # Connect to Pod kubectl -n wid-kns exec -it wid-demo-with-sa -- /bin/bash # List Cloud DNS Record Sets gcloud dns record-sets list --zone=kalyanreddydaida-com ### Sample Output root@wid-demo-with-sa:/# gcloud dns record-sets list --zone=kalyanreddydaida-com NAME TYPE TTL DATA kalyanreddydaida.com. NS 21600 ns-cloud-a1.googledomains.com.,ns-cloud-a2.googledomains.com.,ns-cloud-a3.googledomains.com.,ns-cloud-a4.googledomains.com. kalyanreddydaida.com. SOA 21600 ns-cloud-a1.googledomains.com. cloud-dns-hostmaster.google.com. 1 21600 3600 259200 300 demo1.kalyanreddydaida.com. A 300 34.120.32.120 root@wid-demo-with-sa:/# # List Compute Instances from workload-identity-demo pod gcloud compute instances list ## Sample Output root@wid-demo-with-sa:/# gcloud compute instances list NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS gke-standard-cluster-priva-new-pool-2-7c9415e8-5cds us-central1-c g1-small true 10.128.15.235 RUNNING gke-standard-cluster-priva-new-pool-2-7c9415e8-5mpz us-central1-c g1-small true 10.128.0.8 RUNNING gke-standard-cluster-priva-new-pool-2-7c9415e8-8qg6 us-central1-c g1-small true 10.128.0.2 RUNNING root@wid-demo-with-sa:/# # Exit the container terminal exit ``` ## Step-16: Clean-Up Kubernetes Resources ```t # Delete Kubernetes Pods kubectl delete -f kube-manifests # List Namespaces kubectl get ns # Delete Kubernetes Namespace kubectl delete ns wid-kns Observation: 1. Kubernetes Service Account "wid-ksa" will get automatically deleted when that namespace is deleted ``` ## Step-17: Clean-Up GCP IAM Resources ```t # List GCP IAM Service Accounts gcloud iam service-accounts list # Remove IAM Roles to GCP IAM Service Account gcloud projects remove-iam-policy-binding PROJECT_ID \ --member "serviceAccount:GSA_NAME@GSA_PROJECT_ID.iam.gserviceaccount.com" \ --role "ROLE_NAME" PROJECT_ID: your Google Cloud project ID. GSA_NAME: the name of your IAM service account. GSA_PROJECT_ID: the project ID of the Google Cloud project of your IAM service account. GSA_PROJECT_ID==PROJECT_ID ROLE_NAME: the IAM role to assign to your service account, like roles/spanner.viewer. # REMOVE ROLE: COMPUTE VIEWER: Replace PROJECT_ID, GSA_NAME, GSA_PROJECT, ROLE_NAME gcloud projects remove-iam-policy-binding kdaida123 \ --member "serviceAccount:wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com" \ --role "roles/compute.viewer" # REMOVE ROLE: DNS ADMIN: Replace PROJECT_ID, GSA_NAME, GSA_PROJECT, ROLE_NAME gcloud projects remove-iam-policy-binding kdaida123 \ --member "serviceAccount:wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com" \ --role "roles/dns.admin" # Delete the GCP IAM Service Account we have created gcloud iam service-accounts delete wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com --project=kdaida123 ``` ## References - [GKE - Use Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
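If the with-SA pod still cannot list instances, it can help to cross-check the Workload Identity wiring from your workstation before running the Step-16/Step-17 clean-up. A minimal sketch using the resource names from the steps above; the `--format` expression assumes the field names shown in current gcloud output:

```t
# Confirm Workload Identity is enabled on the cluster (expected value: kdaida123.svc.id.goog)
gcloud container clusters describe standard-cluster-private-1 --region us-central1 \
  --format="value(workloadIdentityConfig.workloadPool)"

# Confirm the annotation added to the Kubernetes Service Account in Step-07
kubectl get serviceaccount wid-ksa -n wid-kns -o yaml

# Confirm the workloadIdentityUser binding added in Step-06
gcloud iam service-accounts get-iam-policy wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com
```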
gcp gke docs
title GCP Google Kubernetes Engine GKE Workload Identity description Implement GCP Google Kubernetes Engine GKE Workload Identity Step 00 Pre requisites 1 Verify if GKE Cluster is created 2 Verify if kubeconfig for kubectl is configured in your local terminal t Configure kubeconfig for kubectl gcloud container clusters get credentials CLUSTER NAME region REGION project PROJECT Replace Values CLUSTER NAME REGION PROJECT gcloud container clusters get credentials standard cluster private 1 region us central1 project kdaida123 Step 01 Introduction 1 Create GCP IAM Service Account 2 Add IAM Roles to GCP IAM Service Account add iam policy binding 3 Create Kubernetes Namespace 4 Create Kubernetes Service Account 5 Associate GCP IAM Service Account with Kubernetes Service Account gcloud iam service accounts add iam policy binding 6 Annotate Kubernetes Service Account with GCP IAM SA email Address kubectl annotate serviceaccount 7 Create a Sample App with and without Kubernetes Service Account 8 Test Workload Identity in GKE Cluster Step 02 Verify if Workload Identity Setting is enabled for GKE Cluster Go to Kubernetes Engine Clusters standard cluster private 1 DETAILS Tab In Security Workload Identity SHOULD BE IN ENABLED STATE Step 03 Create GCP IAM Service Account t List IAM Service Accounts gcloud iam service accounts list List Google Cloud Projects gcloud projects list Observation 1 Get the PROJECT ID for your current project 2 Replace GSA PROJECT ID with PROJECT ID for your current project Create GCP IAM Service Account gcloud iam service accounts create GSA NAME project GSA PROJECT ID GSA NAME the name of the new IAM service account GSA PROJECT ID the project ID of the Google Cloud project for your IAM service account GSA PROJECT PROJECT ID Replace GSA NAME and GSA PROJECT gcloud iam service accounts create wid gcpiam sa project kdaida123 List IAM Service Accounts gcloud iam service accounts list Step 04 Add IAM Roles to GCP IAM Service Account We are giving roles compute viewer permissions to IAM Service Account From Kubernetes Pod we are going to list the compute instances With the help of the Google IAM Service account and Kubernetes Service Account access for Kubernetes Pod from GKE cluster should be successful for listing the google computing instances t Add IAM Roles to GCP IAM Service Account gcloud projects add iam policy binding PROJECT ID member serviceAccount GSA NAME GSA PROJECT ID iam gserviceaccount com role ROLE NAME PROJECT ID your Google Cloud project ID GSA NAME the name of your IAM service account GSA PROJECT ID the project ID of the Google Cloud project of your IAM service account GSA PROJECT ID PROJECT ID ROLE NAME the IAM role to assign to your service account like roles spanner viewer Replace PROJECT ID GSA NAME GSA PROJECT ID ROLE NAME gcloud projects add iam policy binding kdaida123 member serviceAccount wid gcpiam sa kdaida123 iam gserviceaccount com role roles compute viewer Step 05 Create Kubernetes Namepsace and Service Account t Create Kubernetes Namespace kubectl create namespace NAMESPACE kubectl create namespace wid kns Create Service Account kubectl create serviceaccount KSA NAME namespace NAMESPACE kubectl create serviceaccount wid ksa namespace wid kns Step 06 Associate GCP IAM Service Account with Kubernetes Service Account Allow the Kubernetes service account to impersonate the IAM service account by adding an IAM policy binding between the two service accounts This binding allows the Kubernetes service account to act as the IAM service account t 
```t
# Associate GCP IAM Service Account with Kubernetes Service Account
gcloud iam service-accounts add-iam-policy-binding GSA_NAME@GSA_PROJECT_ID.iam.gserviceaccount.com \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]"

# Replace GSA_NAME, GSA_PROJECT_ID, PROJECT_ID, NAMESPACE, KSA_NAME
gcloud iam service-accounts add-iam-policy-binding wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:kdaida123.svc.id.goog[wid-kns/wid-ksa]"
```
## Step-07: Annotate Kubernetes Service Account with GCP IAM SA email Address
- Annotate the Kubernetes service account with the email address of the IAM service account.
```t
# Annotate Kubernetes Service Account with GCP IAM SA email Address
kubectl annotate serviceaccount KSA_NAME \
    --namespace NAMESPACE \
    iam.gke.io/gcp-service-account=GSA_NAME@GSA_PROJECT_ID.iam.gserviceaccount.com

# Replace KSA_NAME, NAMESPACE, GSA_NAME, GSA_PROJECT_ID
kubectl annotate serviceaccount wid-ksa \
    --namespace wid-kns \
    iam.gke.io/gcp-service-account=wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com

# Describe Kubernetes Service Account
kubectl describe sa wid-ksa -n wid-kns
```
## Step-08: 01-wid-demo-pod-without-sa.yaml
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: wid-demo-without-sa
  namespace: wid-kns
spec:
  containers:
  - image: google/cloud-sdk:slim
    name: wid-demo-without-sa
    command: ["sleep","infinity"]
  nodeSelector:
    iam.gke.io/gke-metadata-server-enabled: "true"
```
## Step-09: 02-wid-demo-pod-with-sa.yaml
- **Important Note:** For Autopilot clusters, omit the `nodeSelector` field. Autopilot rejects this nodeSelector because all nodes use Workload Identity.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: wid-demo-with-sa
  namespace: wid-kns
spec:
  containers:
  - image: google/cloud-sdk:slim
    name: wid-demo-with-sa
    command: ["sleep","infinity"]
  serviceAccountName: wid-ksa
  nodeSelector:
    iam.gke.io/gke-metadata-server-enabled: "true"
```
## Step-10: Deploy Kubernetes Manifests and Verify
```t
# Deploy kube-manifests
kubectl apply -f kube-manifests/

# List Pods
kubectl -n wid-kns get pods
```
## Step-11: Verify from Workload Identity without Service Account Pod
```t
# Connect to Pod
kubectl -n wid-kns exec -it wid-demo-without-sa -- /bin/bash

# Default Service Account Pod is using currently
gcloud auth list
Observation: It chose the default account

## Sample Output
root@wid-demo-without-sa:/# gcloud auth list
         Credentialed Accounts
ACTIVE  ACCOUNT
*       kdaida123.svc.id.goog

To set the active account, run:
    $ gcloud config set account `ACCOUNT`
root@wid-demo-without-sa:/#

# List Compute Instances from workload identity demo pod
gcloud compute instances list

## Sample Output
root@wid-demo-without-sa:/# gcloud compute instances list
ERROR: (gcloud.compute.instances.list) Some requests did not succeed:
 - Request had invalid authentication credentials. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.
root@wid-demo-without-sa:/#

# Exit the container terminal
exit
```
## Step-12: Verify from Workload Identity with Service Account Pod
```t
# Connect to Pod
kubectl -n wid-kns exec -it wid-demo-with-sa -- /bin/bash

# Default Service Account Pod is using currently
gcloud auth list

## Sample Output
root@wid-demo-with-sa:/# gcloud auth list
                  Credentialed Accounts
ACTIVE  ACCOUNT
*       wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com

To set the active account, run:
    $ gcloud config set account `ACCOUNT`
root@wid-demo-with-sa:/#

# List Compute Instances from workload identity demo pod
gcloud compute instances list

## Sample Output
root@wid-demo-with-sa:/# gcloud compute instances list
NAME                                                 ZONE           MACHINE_TYPE  PREEMPTIBLE  INTERNAL_IP    EXTERNAL_IP  STATUS
gke-standard-cluster-priva-new-pool-2-7c9415e8-5cds  us-central1-c  g1-small      true         10.128.15.235               RUNNING
gke-standard-cluster-priva-new-pool-2-7c9415e8-5mpz  us-central1-c  g1-small      true         10.128.0.8                  RUNNING
gke-standard-cluster-priva-new-pool-2-7c9415e8-8qg6  us-central1-c  g1-small      true         10.128.0.2                  RUNNING
root@wid-demo-with-sa:/#
```
## Step-13: Negative Usecase: Test access to Cloud DNS Record Sets
```t
# gcloud list DNS Records
gcloud dns record-sets list --zone=kalyanreddydaida-com

Observation:
1. GCP IAM Service Account wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com doesn't have any roles assigned related to Cloud DNS, so we get HTTP 403.

## Sample Output
root@wid-demo-with-sa:/# gcloud dns record-sets list --zone=kalyanreddydaida-com
ERROR: (gcloud.dns.record-sets.list) HTTPError 403: Forbidden
root@wid-demo-with-sa:/#

# Exit the container terminal
exit
```
## Step-14: Give Cloud DNS Admin Role to GCP IAM Service Account wid-gcpiam-sa
```t
# Add IAM Roles to GCP IAM Service Account
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member "serviceAccount:GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com" \
    --role "ROLE_NAME"

# PROJECT_ID: your Google Cloud project ID
# GSA_NAME: the name of your IAM service account
# GSA_PROJECT: the project ID of the Google Cloud project of your IAM service account (GSA_PROJECT = PROJECT_ID)
# ROLE_NAME: the IAM role to assign to your service account, like roles/spanner.viewer

# Replace PROJECT_ID, GSA_NAME, GSA_PROJECT, ROLE_NAME
gcloud projects add-iam-policy-binding kdaida123 \
    --member "serviceAccount:wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com" \
    --role "roles/dns.admin"
```
## Step-15: Verify from Workload Identity with Service Account Pod
```t
# Connect to Pod
kubectl -n wid-kns exec -it wid-demo-with-sa -- /bin/bash

# List Cloud DNS Record Sets
gcloud dns record-sets list --zone=kalyanreddydaida-com

## Sample Output
root@wid-demo-with-sa:/# gcloud dns record-sets list --zone=kalyanreddydaida-com
NAME                         TYPE  TTL    DATA
kalyanreddydaida.com.        NS    21600  ns-cloud-a1.googledomains.com.,ns-cloud-a2.googledomains.com.,ns-cloud-a3.googledomains.com.,ns-cloud-a4.googledomains.com.
kalyanreddydaida.com.        SOA   21600  ns-cloud-a1.googledomains.com. cloud-dns-hostmaster.google.com. 1 21600 3600 259200 300
demo1.kalyanreddydaida.com.  A     300    34.120.32.120
root@wid-demo-with-sa:/#

# List Compute Instances from workload identity demo pod
gcloud compute instances list

## Sample Output
root@wid-demo-with-sa:/# gcloud compute instances list
NAME                                                 ZONE           MACHINE_TYPE  PREEMPTIBLE  INTERNAL_IP    EXTERNAL_IP  STATUS
gke-standard-cluster-priva-new-pool-2-7c9415e8-5cds  us-central1-c  g1-small      true         10.128.15.235               RUNNING
gke-standard-cluster-priva-new-pool-2-7c9415e8-5mpz  us-central1-c  g1-small      true         10.128.0.8                  RUNNING
gke-standard-cluster-priva-new-pool-2-7c9415e8-8qg6  us-central1-c  g1-small      true         10.128.0.2                  RUNNING
root@wid-demo-with-sa:/#

# Exit the container terminal
exit
```
## Step-16: Clean-Up Kubernetes Resources
```t
# Delete Kubernetes Pods
kubectl delete -f kube-manifests/

# List Namespaces
kubectl get ns

# Delete Kubernetes Namespace
kubectl delete ns wid-kns

Observation:
1. Kubernetes Service Account wid-ksa will get automatically deleted when that namespace is deleted.
```
## Step-17: Clean-Up GCP IAM Resources
```t
# List GCP IAM Service Accounts
gcloud iam service-accounts list

# Remove IAM Roles from GCP IAM Service Account
gcloud projects remove-iam-policy-binding PROJECT_ID \
    --member "serviceAccount:GSA_NAME@GSA_PROJECT_ID.iam.gserviceaccount.com" \
    --role "ROLE_NAME"

# PROJECT_ID: your Google Cloud project ID
# GSA_NAME: the name of your IAM service account
# GSA_PROJECT_ID: the project ID of the Google Cloud project of your IAM service account (GSA_PROJECT_ID = PROJECT_ID)
# ROLE_NAME: the IAM role to remove from your service account, like roles/spanner.viewer

# REMOVE ROLE: COMPUTE VIEWER (Replace PROJECT_ID, GSA_NAME, GSA_PROJECT, ROLE_NAME)
gcloud projects remove-iam-policy-binding kdaida123 \
    --member "serviceAccount:wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com" \
    --role "roles/compute.viewer"

# REMOVE ROLE: DNS ADMIN (Replace PROJECT_ID, GSA_NAME, GSA_PROJECT, ROLE_NAME)
gcloud projects remove-iam-policy-binding kdaida123 \
    --member "serviceAccount:wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com" \
    --role "roles/dns.admin"

# Delete the GCP IAM Service Account we have created
gcloud iam service-accounts delete wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com --project kdaida123
```
## References
- [GKE: Use Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity)
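## Optional: Cross-check Workload Identity via the Metadata Server
- Not part of the original steps: from inside either demo pod, the GKE metadata server can be queried directly to see which identity it serves. The endpoint below is the standard GCE/GKE metadata path; the expected email values are assumptions based on the setup above.
```t
# Run inside wid-demo-with-sa or wid-demo-without-sa
curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email

# Expected (assumption): wid-gcpiam-sa@kdaida123.iam.gserviceaccount.com in the with-sa pod,
# and the <PROJECT-ID>.svc.id.goog identity in the without-sa pod
```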
--- title: GCP Google Kubernetes Engine GKE Ingress with Identity Aware Proxy description: Implement GCP Google Kubernetes Engine GKE Ingress with Identity Aware Proxy --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT> # Replace Values CLUSTER-NAME, REGION, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 # List Kubernetes Nodes kubectl get nodes ``` 3. External DNS Controller should be installed and ready to use ## Step-01: Introduction 1. Configuring the OAuth consent screen 2. Creating OAuth credentials 3. Setting up IAP access 4. Creating a Kubernetes Secret with OAuth Client ID Credentials 5. Adding an iap block to the BackendConfig ## Step-02: Create basic google gmail users (if not present) - I have created below two users for this IAP Demo - gcpuser901@gmail.com - gcpuser902@gmail.com ## Step-03: Enabling IAP for GKE - [Enabling IAP for GKE](https://cloud.google.com/iap/docs/enabling-kubernetes-howto) - We will follow steps from above documentation link to create below 2 items 1. [Configuring the OAuth consent screen](https://cloud.google.com/iap/docs/enabling-kubernetes-howto#oauth-configure) 2. [Creating OAuth credentials](https://cloud.google.com/iap/docs/enabling-kubernetes-howto#oauth-credentials) ```t # Make a note of Client ID and Client Secret Client ID: 1057267725005-0icbqnab9rsvodgmq7dicfvs1f56sj5p.apps.googleusercontent.com Client Secret: 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 # Template https://iap.googleapis.com/v1/oauth/clientIds/CLIENT_ID:handleRedirect # Replace CLIENT_ID (Update URL in OAuth 2.0 Client IDs -> gke-ingress-iap-demo-oauth-creds) https://iap.googleapis.com/v1/oauth/clientIds/1057267725005-0icbqnab9rsvodgmq7dicfvs1f56sj5p.apps.googleusercontent.com:handleRedirect ``` ## Step-04: Creating a Kubernetes Secret ```t # Make a note of Client ID and Client Secret Client ID: 1057267725005-0icbqnab9rsvodgmq7dicfvs1f56sj5p.apps.googleusercontent.com Client Secret: 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 # List Kubernetes Secrets (Default Namespace) kubectl get secrets # Create Kubernetes Secret kubectl create secret generic my-secret --from-literal=client_id=client_id_key \ --from-literal=client_secret=client_secret_key # Replace client_id_key, client_secret_key kubectl create secret generic my-secret --from-literal=client_id=1057267725005-0icbqnab9rsvodgmq7dicfvs1f56sj5p.apps.googleusercontent.com \ --from-literal=client_secret=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 # List Kubernetes Secrets (Default Namespace) kubectl get secrets ``` ## Step-05: Adding an iap block to the BackendConfig - **File Name:** 07-backendconfig.yaml ```yaml apiVersion: cloud.google.com/v1 kind: BackendConfig metadata: name: my-backendconfig spec: iap: enabled: true oauthclientCredentials: secretName: my-secret ``` ## Step-06: Review Kubenertes Manifests - All 3 Node Port Services will have annotation added `cloud.google.com/backend-config` - 01-Nginx-App1-Deployment-and-NodePortService.yaml - 02-Nginx-App2-Deployment-and-NodePortService.yaml - 03-Nginx-App3-Deployment-and-NodePortService.yaml ```yaml apiVersion: v1 kind: Service metadata: name: app1-nginx-nodeport-service labels: app: app1-nginx annotations: cloud.google.com/backend-config: '{"default": "my-backendconfig"}' spec: type: NodePort selector: app: 
app1-nginx ports: - port: 80 targetPort: 80 ``` ## Step-07: Review Kubenertes Manifests - No changes to below YAML files from previous section - 04-Ingress-NameBasedVHost-Routing.yaml - 05-Managed-Certificate.yaml - 06-frontendconfig.yaml ## Step-08: Deploy Kubernetes Manifests ```t # Deploy Kubernetes Manifests kubectl apply -f kube-manifests Observation: 1. All other configs already created as part of previous demo, only backendconfig change will be applied now. # List Deployments kubectl get deploy # List Pods kubectl get pods # List Services kubectl get svc # List Ingress Services kubectl get ingress # List Frontend Configs kubectl get frontendconfig # List Backend Configs kubectl get backendconfig ``` ## Step-09: Setting up IAP access - [Setting up IAP access](https://cloud.google.com/iap/docs/enabling-kubernetes-howto#iap-access) - Add User `gcpuser901@gmail.com` as Principal. ## Step-10: Access Application ```t # Access Application http://app1-ingress.kalyanreddydaida.com/app1/index.html http://app2-ingress.kalyanreddydaida.com/app2/index.html http://default-ingress.kalyanreddydaida.com Username: gcpuser901@gmail.com (In your case it might be a different user you added as part of Step-09) Password: XXXXXXXXXX Observation: 1. All 3 URLS will redirect to Google Authentication. Provide credentials to login 2. All 3 URLS should work as expected. In your case, replace YOUR_DOMAIN name for testing 3. HTTP to HTTPS redirect should work ``` ## Step-11: Negative Usecase: Access using User which is not added in Principal ```t # Access Application http://app1-ingress.kalyanreddydaida.com/app1/index.html http://app2-ingress.kalyanreddydaida.com/app2/index.html http://default-ingress.kalyanreddydaida.com Username: gcpuser902@gmail.com (user which is not added in principal as part of Step-09) Password: XXXXXXXXXX Observation: 1. It should fail, Application should not be accessible. ``` ## Step-12: Clean-Up ```t # Delete Kubernetes Resources kubectl delete -f kube-manifests # Delete Kubernetes Secret kubectl delete secret my-secret # Delete OAuth Credentials Go to API & Services -> Credentials -> OAuth 2.0 Client IDs -> gke-ingress-iap-demo-oauth-creds -> DELETE ``` ## References - [Ingress Features](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features) - [Enabling IAP for GKE](https://cloud.google.com/iap/docs/enabling-kubernetes-howto)
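## Optional: Verify IAP from a Terminal
- An extra sanity check beyond the browser tests in Step-10 and Step-11 (not part of the original demo): an unauthenticated request should be intercepted by IAP before it reaches the app. The hostname reuses the one from Step-10; the exact status code and redirect target can vary by client and by the HTTP-to-HTTPS redirect.
```t
# Expect a redirect (301/302) towards HTTPS and Google sign-in, or a 401/403 for non-browser clients
curl -s -o /dev/null -w "%{http_code}\n" http://app1-ingress.kalyanreddydaida.com/app1/index.html

# Show the redirect target, if any
curl -sI http://app1-ingress.kalyanreddydaida.com/app1/index.html | grep -i '^location:'
```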
--- title: GCP Google Kubernetes Engine GKE Ingress with External DNS description: Implement GCP Google Kubernetes Engine GKE Ingress with External DNS --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT> # Replace Values CLUSTER-NAME, REGION, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 ``` 3. External DNS Controller Installed ## Step-01: Introduction - Ingress with External DNS - We are going to use the Annotation `external-dns.alpha.kubernetes.io/hostname` in Ingress Service. - DNS Recordsets will be automatically added to Google Cloud DNS using external-dns controller when Ingress Service deployed ## Step-02: 01-Nginx-App3-Deployment-and-NodePortService.yaml ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: app3-nginx-deployment labels: app: app3-nginx spec: replicas: 1 selector: matchLabels: app: app3-nginx template: metadata: labels: app: app3-nginx spec: containers: - name: app3-nginx image: stacksimplify/kubenginx:1.0.0 ports: - containerPort: 80 # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc) readinessProbe: httpGet: scheme: HTTP path: /index.html port: 80 initialDelaySeconds: 10 periodSeconds: 15 timeoutSeconds: 5 --- apiVersion: v1 kind: Service metadata: name: app3-nginx-nodeport-service spec: type: NodePort selector: app: app3-nginx ports: - port: 80 targetPort: 80 ``` ## Step-03: 02-ingress-external-dns.yaml ```yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-externaldns-demo annotations: # If the class annotation is not specified it defaults to "gce". # gce: external load balancer # gce-internal: internal load balancer kubernetes.io/ingress.class: "gce" # External DNS - For creating a Record Set in Google Cloud Cloud DNS external-dns.alpha.kubernetes.io/hostname: ingressextdns101.kalyanreddydaida.com spec: defaultBackend: service: name: app3-nginx-nodeport-service port: number: 80 ``` ## Step-04: Deploy Kubernetes Manifests and Verify ```t # Deploy Kubernetes Manifests kubectl apply -f kube-manifests # List Pods kubectl get pods # List Services kubectl get svc # List Ingress Services kubectl get ingress # Describe Ingress Service kubectl describe ingress ingress-externaldns-demo # Verify external-dns Controller logs kubectl -n external-dns-ns logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+') [or] kubectl -n external-dns-ns get pods kubectl -n external-dns-ns logs -f <External-DNS-Pod-Name> # Verify Cloud DNS 1. Go to Network Services -> Cloud DNS -> kalyanreddydaida-com 2. Verify Record sets, DNS Name we added in Ingress Service should be present # Access Application http://<DNS-Name> http://ingressextdns101.kalyanreddydaida.com ``` ## Step-05: Delete kube-manifests ```t # Delete Kubernetes Objects kubectl delete -f kube-manifests/ # Verify external-dns Controller logs kubectl -n external-dns-ns logs -f $(kubectl -n external-dns-ns get po | egrep -o 'external-dns[A-Za-z0-9-]+') [or] kubectl -n external-dns-ns get pods kubectl -n external-dns-ns logs -f <External-DNS-Pod-Name> # Verify Cloud DNS 1. Go to Network Services -> Cloud DNS -> kalyanreddydaida-com 2. Verify Record sets, DNS Name we added in Ingress Service should be not preset (already deleted) ```
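## Optional: Verify the Record Set from the CLI
- If you prefer the CLI to the console check in Step-04 (an addition, not in the original steps), the commands below list the record created by external-dns and resolve it from your workstation. The zone name `kalyanreddydaida-com` matches the Cloud DNS zone referenced above.
```t
# List the record set created by external-dns
gcloud dns record-sets list --zone=kalyanreddydaida-com --name=ingressextdns101.kalyanreddydaida.com.

# Resolve the hostname from your workstation (propagation can take a few minutes)
nslookup ingressextdns101.kalyanreddydaida.com
```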
--- title: GKE Storage with GCP Cloud SQL - MySQL Private Instance description: Use GCP Cloud SQL MySQL DB for GKE Workloads --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT> # Replace Values CLUSTER-NAME, ZONE, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 ``` ## Step-01: Introduction - GKE Private Cluster - GCP Cloud SQL with Private IP ## Step-02: Create Private Service Connection to Google Managed Services from our VPC Network ## Step-02-01: Create ALLOCATED IP RANGES FOR SERVICES - Go to VPC Networks -> default -> PRIVATE SERVICE CONNECTION -> ALLOCATED IP RANGES FOR SERVICES - Click on **ALLOCATE IP RANGE** - **Name:** google-managed-services-default (google-managed-services-<VPC-NAME>) - **Description:** google-managed-services-default - **IP Range:** Automatic - **Prefix Length:** 16 - Click on **ALLOCATE** ## Step-02-02: Create PRIVATE CONNECTIONS TO SERVICES - Delete existing connection if any present `servicenetworking-googleapis-com` - Click on **CREATE CONNECTION** - **Connected Service Provider:** Google Cloud Platform - **Connection Name:** servicenetworking-googleapis-com (DEFAULT POPULATED CANNOT CHANGE) - **Assigned IP Allocation:** google-managed-services-default - Click on **CONNECT** ## Step-03: Create Google Cloud SQL MySQL Instance - Go to SQL -> Choose MySQL - **Instance ID:** ums-db-private-instance - **Password:** KalyanReddy13 - **Database Version:** MYSQL 8.0 - **Choose a configuration to start with:** Development - **Choose region and zonal availability** - **Region:** US-central1(IOWA) - **Zonal availability:** Single Zone - **Primary Zone:** us-central1-c - **Customize your instance** - **Machine Type:** LightWeight (1 vCPU, 3.75GB) - **Storage Type:** HDD - **Storage Capacity:** 10GB - **Enable automatic storage increases:** CHECKED - **Instance IP Assignment:** - **Private IP:** CHECKED - **Associated networking:** default - **MESSAGE:** Private services access connection for network default has been successfully created. You will now be able to use the same network across all your project's managed services. If you would like to change this connection, please visit the Networking page. 
- **Allocated IP range (optional):** google-managed-services-default - **Public IP:** UNCHECKED - **Authorized networks:** NOT ADDED ANYTHING - **Data Protection** - **Automatic Backups:** UNCHECKED - **Instance deletion protection:** UNCHECKED - **Maintenance:** Leave to defaults - **Flags:** Leave to defaults - **Labels:** Leave to defaults - Click on **CREATE INSTANCE** ## Step-04: Create DB Schema webappdb - Go to SQL -> ums-db-public-instance -> Databases -> **CREATE DATABASE** - **Database Name:** webappdb - **Character set:** utf8 - **Collation:** Default collation - Click on **CREATE** ## Step-05: 01-MySQL-externalName-Service.yaml - Update Cloud SQL MySQL DB `Private IP` in ExternalName Service ```yaml apiVersion: v1 kind: Service metadata: name: mysql-externalname-service spec: type: ExternalName externalName: 10.64.18.3 ``` ## Step-06: 02-Kubernetes-Secrets.yaml ```yaml apiVersion: v1 kind: Secret metadata: name: mysql-db-password type: Opaque data: db-password: S2FseWFuUmVkZHkxMw== # Base64 of KalyanReddy13 # https://www.base64encode.org/ # Base64 of KalyanReddy13 is S2FseWFuUmVkZHkxMw== ``` ## Step-07: 03-UserMgmtWebApp-Deployment.yaml ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: usermgmt-webapp labels: app: usermgmt-webapp spec: replicas: 1 selector: matchLabels: app: usermgmt-webapp template: metadata: labels: app: usermgmt-webapp spec: initContainers: - name: init-db image: busybox:1.31 command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL Server deployment"; while ! nc -z mysql-externalname-service 3306; do sleep 1; printf "-"; done; echo -e " >> MySQL DB Server has started";'] containers: - name: usermgmt-webapp image: stacksimplify/kube-usermgmt-webapp:1.0.0-MySQLDB imagePullPolicy: Always ports: - containerPort: 8080 env: - name: DB_HOSTNAME value: "mysql-externalname-service" - name: DB_PORT value: "3306" - name: DB_NAME value: "webappdb" - name: DB_USERNAME value: "root" - name: DB_PASSWORD valueFrom: secretKeyRef: name: mysql-db-password key: db-password ``` ## Step-08: 04-UserMgmtWebApp-LoadBalancer-Service.yaml ```yaml apiVersion: v1 kind: Service metadata: name: usermgmt-webapp-lb-service labels: app: usermgmt-webapp spec: type: LoadBalancer selector: app: usermgmt-webapp ports: - port: 80 # Service Port targetPort: 8080 # Container Port ``` ## Step-09: Deploy kube-manifests ```t # Deploy Kubernetes Manifests kubectl apply -f kube-manifests/ # List Deployments kubectl get deploy # List Pods kubectl get pods # List Services kubectl get svc # Verify Pod Logs kubectl get pods kubectl logs -f <USERMGMT-POD-NAME> kubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5 ``` ## Step-10: Access Application ```t # List Services kubectl get svc # Access Application http://<ExternalIP-from-get-service-output> Username: admin101 Password: password101 ``` ## Step-11: Connect to MySQL DB (Cloud SQL) from GKE Cluster using kubectl ```t ## Verify from Kubernetes Cluster, we are able to connect to MySQL DB # Template kubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h <Kubernetes-ExternalName-Service> -u <USER_NAME> -p<PASSWORD> # MySQL Client 8.0: Replace External Name Service, Username and Password kubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h mysql-externalname-service -u root -pKalyanReddy13 mysql> show schemas; mysql> use webappdb; mysql> show tables; mysql> select * from user; mysql> exit ``` ## Step-12: Create New user admin102 and verify in Cloud SQL MySQL webappdb ```t # Access 
Application http://<ExternalIP-from-get-service-output> Username: admin101 Password: password101 # Create New User (Used for testing `allowVolumeExpansion: true` Option) Username: admin102 Password: password102 First Name: fname102 Last Name: lname102 Email Address: admin102@stacksimplify.com Social Security Address: ssn102 # MySQL Client 8.0: Replace External Name Service, Username and Password kubectl run -it --rm --image=mysql:8.0 --restart=Never mysql-client -- mysql -h mysql-externalname-service -u root -pKalyanReddy13 mysql> show schemas; mysql> use webappdb; mysql> show tables; mysql> select * from user; mysql> exit ``` ## Step-13: Clean-Up ```t # Delete Kubernetes Objects kubectl delete -f kube-manifests/ # Important Note: DONT DELETE GCP Cloud SQL Instance. We will use it in next demo and clean-up ``` ## References - [Private Service Access with MySQL](https://cloud.google.com/sql/docs/mysql/configure-private-services-access#console) - [Private Service Access](https://cloud.google.com/vpc/docs/private-services-access) - [VPC Network Peering Limits](https://cloud.google.com/vpc/docs/quota#vpc-peering) - [Configuring Private Service Access](https://cloud.google.com/vpc/docs/configure-private-services-access) - [Additional Reference Only - Enabling private services access](https://cloud.google.com/service-infrastructure/docs/enabling-private-services-access)
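## Optional: Create the Secret with kubectl Instead of Base64 by Hand
- As an alternative to hand-encoding the password in Step-06 (a sketch, not part of the original manifests), kubectl can create an equivalent Secret directly; the secret and key names match `02-Kubernetes-Secrets.yaml`. The decode command assumes a Linux/Cloud Shell environment.
```t
# Create the same Secret without manually base64-encoding the password
kubectl create secret generic mysql-db-password --from-literal=db-password='KalyanReddy13'

# Verify the stored value decodes back to the original password
kubectl get secret mysql-db-password -o jsonpath='{.data.db-password}' | base64 --decode; echo
```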
--- title: GCP Google Kubernetes Engine GKE Ingress with Cloud Armor description: Implement GCP Google Kubernetes Engine GKE Ingress with Cloud Armor --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT> # Replace Values CLUSTER-NAME, REGION, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 # List Kubernetes Nodes kubectl get nodes ``` 3. Registered Domain using Google Cloud Domains 4. External DNS Controller installed and ready to use ```t # List External DNS Pods kubectl -n external-dns-ns get pods ``` 5. Verify if External IP Address is created ```t # List External IP Address gcloud compute addresses list # Describe External IP Address gcloud compute addresses describe gke-ingress-extip1 --global ``` ## Step-01: Introduction - Ingress Service with Cloud Armor ## Step-02: Create Cloud Armor Policy - Go to Network Security -> Cloud Armor -> CREATE POLICY ### Configure Policy - **Name:** cloud-armor-policy-1 - **Description:** Cloud Armor Demo with GKE Ingress - **Policy type:** Backend security policy - **Default rule action:** Deny - **Deny Status:** 403(Forbidden) - Click on **NEXT STEP** ### Add More Rules (Optional) - Leave to default - NO NEW RULES OTHER THAN EXISTING DEFAULT RULE - ALL IP ADDRESS -> DENY -> With 403 ERROR -> Priority 2,147,483,647 - Click on **NEXT STEP** ### Add Policy to Targets (Optional) - Leave to default - Click on **NEXT STEP** ### Advanced configurations (Adaptive Protection) (optional) - Click on **Enable** checkbox - Click on **DONE** - Click on **CREATE POLICY** ## Step-03: 01-kubernetes-deployment.yaml ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: cloud-armor-demo-deployment spec: replicas: 2 selector: matchLabels: app: cloud-armor-demo template: metadata: labels: app: cloud-armor-demo spec: containers: - name: cloud-armor-demo image: stacksimplify/kubenginx:1.0.0 ports: - containerPort: 80 # Readiness Probe (https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#def_inf_hc) readinessProbe: httpGet: scheme: HTTP path: /index.html port: 80 initialDelaySeconds: 10 periodSeconds: 15 timeoutSeconds: 5 ``` ## Step-04: 02-kubernetes-NodePort-service.yaml ```yaml apiVersion: v1 kind: Service metadata: name: cloud-armor-demo-nodeport-service annotations: cloud.google.com/backend-config: '{"ports": {"80":"my-backendconfig"}}' spec: type: NodePort selector: app: cloud-armor-demo ports: - port: 80 targetPort: 80 ``` ## Step-05: 03-ingress.yaml ```yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-cloud-armor-demo annotations: # External Load Balancer kubernetes.io/ingress.class: "gce" # Static IP for Ingress Service kubernetes.io/ingress.global-static-ip-name: "gke-ingress-extip1" # External DNS - For creating a Record Set in Google Cloud Cloud DNS external-dns.alpha.kubernetes.io/hostname: cloudarmor-ingress.kalyanreddydaida.com spec: defaultBackend: service: name: cloud-armor-demo-nodeport-service port: number: 80 ``` ## Step-06: 04-backendconfig.yaml ```yaml apiVersion: cloud.google.com/v1 kind: BackendConfig metadata: name: my-backendconfig spec: timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout connectionDraining: # Connection draining timeout: 
https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout drainingTimeoutSec: 62 logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging enable: true sampleRate: 1.0 securityPolicy: name: "cloud-armor-policy-1" ``` ## Step-07: Deploy Kubernetes Manifests and Verify ```t # Deploy Kubernetes Manifests kubectl apply -f kube-manifests # List Deployments kubectl get deploy # List Pods kubectl get po # List Services kubectl get svc # List Ingress Services kubectl get ingress # List Backendconfig kubectl get backendconfig # Access Application http://<DNS-NAME> http://cloudarmor-ingress.kalyanreddydaida.com Observation: 1. We should get 403 Forbidden error. 2. This is expected because we have configured a Cloud Armor Policy to block All IP Addresses with 403 Error ``` ## Step-08: Make a note of Public IP for your Internet Connection - Go to [URL: www.whatismyip.com](https://www.whatismyip.com/) and make a note of your local desktop Public IP - If you are behind Company / Organizations proxies, not sure if it works. - I am using my Home Internet Connection ## Step-09: Add new rule in Cloud Armor Policy - Go to Network Security -> Cloud Armor -> POLICIES -> cloud-armor-policy-1 -> RULES -> ADD RULE - **Description:** Allow-from-my-desktop - **Mode:** Basic Mode(IP Address / Ranges only) - **Match:** 49.206.52.84 (My internet connection public ip) - **Action:** Allow - **Priority:** 1 - Click on **ADD** - WAIT FOR 5 MINUTES for new policy to go live ## Step-10: Access Application ```t # Access Application from local desktop http://<DNS-NAME> http://cloudarmor-ingress.kalyanreddydaida.com curl http://cloudarmor-ingress.kalyanreddydaida.com Observation: 1. Application access should be successful ``` ## Step-11: Clean-Up ```t # Delete Kubernetes Resources kubectl delete -f kube-manifests # Delete Cloud Armor Policy Go to Network Security -> Cloud Armor -> POLICIES -> cloud-armor-policy-1 -> DELETE ``` ## References - https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#cloud_armor - https://cloud.google.com/armor/docs/security-policy-overview - https://cloud.google.com/armor/docs/integrating-cloud-armor - https://cloud.google.com/armor/docs/configure-security-policie
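## Optional: Equivalent gcloud Commands for the Cloud Armor Policy
- If you want to script Step-02 and Step-09 instead of using the console (an addition, not part of the original demo), roughly equivalent gcloud commands are sketched below; the policy name and IP match the values above, and flags should be double-checked against the current Cloud Armor documentation.
```t
# Create the backend security policy (console Step-02)
gcloud compute security-policies create cloud-armor-policy-1 \
    --description "Cloud Armor Demo with GKE Ingress"

# Switch the default rule (priority 2147483647) to deny with 403
gcloud compute security-policies rules update 2147483647 \
    --security-policy cloud-armor-policy-1 \
    --action deny-403

# Allow only your desktop's public IP (console Step-09)
gcloud compute security-policies rules create 1 \
    --security-policy cloud-armor-policy-1 \
    --src-ip-ranges "49.206.52.84/32" \
    --action allow
```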
--- title: Kubernetes PODs description: Learn about Kubernetes Pods --- ## Step-01: PODs Introduction - What is a POD ? - What is a Multi-Container POD? ## Step-02: PODs Demo ### Step-02-01: Get Worker Nodes Status - Verify if kubernetes worker nodes are ready. ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT-NAME> gcloud container clusters get-credentials standard-public-cluster-1 --region us-central1 --project kdaida123 # Get Worker Node Status kubectl get nodes # Get Worker Node Status with wide option kubectl get nodes -o wide ``` ### Step-02-02: Create a Pod - Create a Pod ```t # Template kubectl run <desired-pod-name> --image <Container-Image> # Replace Pod Name, Container Image kubectl run my-first-pod --image stacksimplify/kubenginx:1.0.0 ``` ### Step-02-03: List Pods - Get the list of pods ```t # List Pods kubectl get pods # Alias name for pods is po kubectl get po ``` ### Step-02-04: List Pods with wide option - List pods with wide option which also provide Node information on which Pod is running ```t # List Pods with Wide Option kubectl get pods -o wide ``` ### Step-02-05: What happened in the backgroup when above command is run? 1. Kubernetes created a pod 2. Pulled the docker image from docker hub 3. Created the container in the pod 4. Started the container present in the pod ### Step-02-06: Describe Pod - Describe the POD, primarily required during troubleshooting. - Events shown will be of a great help during troubleshooting. ```t # To get list of pod names kubectl get pods # Describe the Pod kubectl describe pod <Pod-Name> kubectl describe pod my-first-pod Observation: 1. Review Events - thats the key for troubleshooting, understanding what happened ``` ### Step-02-07: Access Application - Currently we can access this application only inside worker nodes. - To access it externally, we need to create a **NodePort or Load Balancer Service**. - **Services** is one very very important concept in Kubernetes. ### Step-02-08: Delete Pod ```t # To get list of pod names kubectl get pods # Delete Pod kubectl delete pod <Pod-Name> kubectl delete pod my-first-pod ``` ## Step-03: Load Balancer Service Introduction - What are Services in k8s? - What is a Load Balancer Service? - How it works? ## Step-04: Demo - Expose Pod with a Service - Expose pod with a service (Load Balancer Service) to access the application externally (from internet) - **Ports** - **port:** Port on which node port service listens in Kubernetes cluster internally - **targetPort:** We define container port here on which our application is running. - Verify the following before LB Service creation - Azure Standard Load Balancer created for Azure AKS Cluster - Frontend IP Configuration - Load Balancing Rules - Azure Public IP ```t # Create a Pod kubectl run <desired-pod-name> --image <Container-Image> kubectl run my-first-pod --image stacksimplify/kubenginx:1.0.0 # Expose Pod as a Service kubectl expose pod <Pod-Name> --type=LoadBalancer --port=80 --name=<Service-Name> kubectl expose pod my-first-pod --type=LoadBalancer --port=80 --name=my-first-service # Get Service Info kubectl get service kubectl get svc Observation: 1. Initially External-IP will show as pending and slowly it will get the external-ip assigned and displayed. 2. 
It will take 2 to 3 minutes to get the external-ip listed # Describe Service kubectl describe service my-first-service # Access Application http://<External-IP-from-get-service-output> curl http://<External-IP-from-get-service-output> ``` - Verify the following after LB Service creation - Google Load Balancer created, verify it. - Verify Backends - Verify Frontends - Verify **Workloads and Services** on Google GKE Dashboard GCP Console ## Step-05: Interact with a Pod ### Step-05-01: Verify Pod Logs ```t # Get Pod Name kubectl get po # Dump Pod logs kubectl logs <pod-name> kubectl logs my-first-pod # Stream pod logs with -f option and access application to see logs kubectl logs <pod-name> kubectl logs -f my-first-pod ``` - **Important Notes** - Refer below link and search for **Interacting with running Pods** for additional log options - Troubleshooting skills are very important. So please go through all logging options available and master them. - **Reference:** https://kubernetes.io/docs/reference/kubectl/cheatsheet/ ### Step-05-02: Connect to a Container in POD and execute command ```t # Connect to Nginx Container in a POD kubectl exec -it <pod-name> -- /bin/bash kubectl exec -it my-first-pod -- /bin/bash # Execute some commands in Nginx container ls cd /usr/share/nginx/html cat index.html exit ``` ### Step-05-03: Running individual commands in a Container ```t # Template kubectl exec -it <pod-name> -- <COMMAND> # Sample Commands kubectl exec -it my-first-pod -- env kubectl exec -it my-first-pod -- ls kubectl exec -it my-first-pod -- cat /usr/share/nginx/html/index.html ``` ## Step-06: Get YAML Output of Pod & Service ### Get YAML Output ```t # Get pod definition YAML output kubectl get pod my-first-pod -o yaml # Get service definition YAML output kubectl get service my-first-service -o yaml ``` ## Step-07: Clean-Up ```t # Get all Objects in default namespace kubectl get all # Delete Services kubectl delete svc my-first-service # Delete Pod kubectl delete pod my-first-pod # Get all Objects in default namespace kubectl get all ``` ## LOGS - More Options ```t # Return snapshot logs from pod nginx with only one container kubectl logs nginx # Return snapshot of previous terminated ruby container logs from pod web-1 kubectl logs -p -c ruby web-1 # Begin streaming the logs of the ruby container in pod web-1 kubectl logs -f -c ruby web-1 # Display only the most recent 20 lines of output in pod nginx kubectl logs --tail=20 nginx # Show all logs from pod nginx written in the last hour kubectl logs --since=1h nginx ```
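## Optional: Generate the Equivalent YAML with --dry-run
- A useful follow-on to Step-02 and Step-04 (not part of the original demo): the same imperative commands can emit YAML with `--dry-run=client`, which is a good bridge to writing declarative manifests.
```t
# Generate a Pod manifest without creating the Pod
kubectl run my-first-pod --image stacksimplify/kubenginx:1.0.0 --dry-run=client -o yaml > my-first-pod.yaml

# Generate the LoadBalancer Service manifest (the Pod must exist, since expose reads its labels)
kubectl expose pod my-first-pod --type=LoadBalancer --port=80 --name=my-first-service --dry-run=client -o yaml > my-first-service.yaml

# Review the generated manifests
cat my-first-pod.yaml my-first-service.yaml
```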
gcp gke docs
title Kubernetes PODs description Learn about Kubernetes Pods Step 01 PODs Introduction What is a POD What is a Multi Container POD Step 02 PODs Demo Step 02 01 Get Worker Nodes Status Verify if kubernetes worker nodes are ready t Configure kubeconfig for kubectl gcloud container clusters get credentials CLUSTER NAME region REGION project PROJECT NAME gcloud container clusters get credentials standard public cluster 1 region us central1 project kdaida123 Get Worker Node Status kubectl get nodes Get Worker Node Status with wide option kubectl get nodes o wide Step 02 02 Create a Pod Create a Pod t Template kubectl run desired pod name image Container Image Replace Pod Name Container Image kubectl run my first pod image stacksimplify kubenginx 1 0 0 Step 02 03 List Pods Get the list of pods t List Pods kubectl get pods Alias name for pods is po kubectl get po Step 02 04 List Pods with wide option List pods with wide option which also provide Node information on which Pod is running t List Pods with Wide Option kubectl get pods o wide Step 02 05 What happened in the backgroup when above command is run 1 Kubernetes created a pod 2 Pulled the docker image from docker hub 3 Created the container in the pod 4 Started the container present in the pod Step 02 06 Describe Pod Describe the POD primarily required during troubleshooting Events shown will be of a great help during troubleshooting t To get list of pod names kubectl get pods Describe the Pod kubectl describe pod Pod Name kubectl describe pod my first pod Observation 1 Review Events thats the key for troubleshooting understanding what happened Step 02 07 Access Application Currently we can access this application only inside worker nodes To access it externally we need to create a NodePort or Load Balancer Service Services is one very very important concept in Kubernetes Step 02 08 Delete Pod t To get list of pod names kubectl get pods Delete Pod kubectl delete pod Pod Name kubectl delete pod my first pod Step 03 Load Balancer Service Introduction What are Services in k8s What is a Load Balancer Service How it works Step 04 Demo Expose Pod with a Service Expose pod with a service Load Balancer Service to access the application externally from internet Ports port Port on which node port service listens in Kubernetes cluster internally targetPort We define container port here on which our application is running Verify the following before LB Service creation Azure Standard Load Balancer created for Azure AKS Cluster Frontend IP Configuration Load Balancing Rules Azure Public IP t Create a Pod kubectl run desired pod name image Container Image kubectl run my first pod image stacksimplify kubenginx 1 0 0 Expose Pod as a Service kubectl expose pod Pod Name type LoadBalancer port 80 name Service Name kubectl expose pod my first pod type LoadBalancer port 80 name my first service Get Service Info kubectl get service kubectl get svc Observation 1 Initially External IP will show as pending and slowly it will get the external ip assigned and displayed 2 It will take 2 to 3 minutes to get the external ip listed Describe Service kubectl describe service my first service Access Application http External IP from get service output curl http External IP from get service output Verify the following after LB Service creation Google Load Balancer created verify it Verify Backends Verify Frontends Verify Workloads and Services on Google GKE Dashboard GCP Console Step 05 Interact with a Pod Step 05 01 Verify Pod Logs t Get Pod Name kubectl get po Dump Pod 
logs kubectl logs pod name kubectl logs my first pod Stream pod logs with f option and access application to see logs kubectl logs pod name kubectl logs f my first pod Important Notes Refer below link and search for Interacting with running Pods for additional log options Troubleshooting skills are very important So please go through all logging options available and master them Reference https kubernetes io docs reference kubectl cheatsheet Step 05 02 Connect to a Container in POD and execute command t Connect to Nginx Container in a POD kubectl exec it pod name bin bash kubectl exec it my first pod bin bash Execute some commands in Nginx container ls cd usr share nginx html cat index html exit Step 05 03 Running individual commands in a Container t Template kubectl exec it pod name COMMAND Sample Commands kubectl exec it my first pod env kubectl exec it my first pod ls kubectl exec it my first pod cat usr share nginx html index html Step 06 Get YAML Output of Pod Service Get YAML Output t Get pod definition YAML output kubectl get pod my first pod o yaml Get service definition YAML output kubectl get service my first service o yaml Step 07 Clean Up t Get all Objects in default namespace kubectl get all Delete Services kubectl delete svc my first service Delete Pod kubectl delete pod my first pod Get all Objects in default namespace kubectl get all LOGS More Options t Return snapshot logs from pod nginx with only one container kubectl logs nginx Return snapshot of previous terminated ruby container logs from pod web 1 kubectl logs p c ruby web 1 Begin streaming the logs of the ruby container in pod web 1 kubectl logs f c ruby web 1 Display only the most recent 20 lines of output in pod nginx kubectl logs tail 20 nginx Show all logs from pod nginx written in the last hour kubectl logs since 1h nginx
---
title: GCP Google Kubernetes Engine - Create GKE Cluster
description: Learn to create Google Kubernetes Engine GKE Cluster
---

## Step-01: Introduction
- Create GKE Standard GKE Cluster
- Configure Google CloudShell to access GKE Cluster
- Deploy simple Kubernetes Deployment and Kubernetes Load Balancer Service and Test
- Clean-Up

## Step-02: Create Standard GKE Cluster
- Go to Kubernetes Engine -> Clusters -> CREATE
- Select **GKE Standard -> CONFIGURE**
- **Cluster Basics**
  - **Name:** standard-public-cluster-1
  - **Location type:** Regional
  - **Region:** us-central1
  - **Specify default node locations:** us-central1-a, us-central1-b, us-central1-c
- **Release Channel**
  - **Release Channel:** Rapid Channel
  - **Version:** LATEST AVAILABLE ON THAT DAY
  - REST ALL LEAVE TO DEFAULTS
- **NODE POOLS: default-pool**
  - **Node pool details**
    - **Name:** default-pool
    - **Number of Nodes (per zone):** 1
    - **Node Pool Upgrade Strategy:** Surge Upgrade
  - **Nodes: Configure node settings**
    - **Image type:** Container-Optimized OS
    - **Machine configuration**
      - **GENERAL PURPOSE SERIES:** E2
      - **Machine Type:** e2-small
    - **Boot disk type:** Balanced persistent disk
    - **Boot disk size (GB):** 20
    - **Boot disk encryption:** Google-managed encryption key (default)
    - **Enable nodes on Spot VMs:** CHECKED
  - **Node Networking:** LEAVE TO DEFAULTS
  - **Node Security:**
    - **Access scopes:** Allow default access (LEAVE TO DEFAULT)
    - REST ALL REVIEW AND LEAVE TO DEFAULTS
  - **Node Metadata:** REVIEW AND LEAVE TO DEFAULTS
- **CLUSTER**
  - **Automation:** REVIEW AND LEAVE TO DEFAULTS
  - **Networking:** REVIEW AND LEAVE TO DEFAULTS
    - **CHECK THIS BOX: Enable Dataplane V2** (IN FUTURE VERSIONS IT WILL BE ENABLED BY DEFAULT)
  - **Security:** REVIEW AND LEAVE TO DEFAULTS
    - **CHECK THIS BOX: Enable Workload Identity** (IN FUTURE VERSIONS IT WILL BE ENABLED BY DEFAULT)
  - **Metadata:** REVIEW AND LEAVE TO DEFAULTS
  - **Features:** REVIEW AND LEAVE TO DEFAULTS
- CLICK ON **CREATE**
- (A rough gcloud CLI equivalent of these settings is sketched at the end of this document)

## Step-03: Verify Cluster Details
- Go to Kubernetes Engine -> Clusters -> **standard-public-cluster-1**
- Review
  - Details Tab
  - Nodes Tab - Review the same nodes in **Compute Engine**
  - Storage Tab - Review Storage Classes
  - Logs Tab - Review Cluster Logs, Review Cluster Logs **Filter By Severity**

## Step-04: Verify Additional Features in GKE on a High-Level
### Step-04-01: Verify Workloads Tab
- Go to Kubernetes Engine -> Clusters -> **standard-public-cluster-1**
- Workloads -> **SHOW SYSTEM WORKLOADS**

### Step-04-02: Verify Services & Ingress
- Go to Kubernetes Engine -> Clusters -> **standard-public-cluster-1**
- Services & Ingress -> **SHOW SYSTEM OBJECTS**

### Step-04-03: Verify Applications, Secrets & ConfigMaps
- Go to Kubernetes Engine -> Clusters -> **standard-public-cluster-1**
- Applications
- Secrets & ConfigMaps

### Step-04-04: Verify Storage
- Go to Kubernetes Engine -> Clusters -> **standard-public-cluster-1**
- Storage Classes
  - premium-rwo
  - standard
  - standard-rwo

### Step-04-05: Verify the below
1. Object Browser
2. Migrate to Containers
3. Backup for GKE
4. Config Management
5. Protect

## Step-05: Google CloudShell: Connect to GKE Cluster using kubectl
- [kubectl Authentication in GKE](https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke)
```t
# Verify gke-gcloud-auth-plugin Installation (if not installed, install it)
gke-gcloud-auth-plugin --version

# Install kubectl authentication plugin for GKE
sudo apt-get install google-cloud-sdk-gke-gcloud-auth-plugin

# Verify gke-gcloud-auth-plugin Installation
gke-gcloud-auth-plugin --version

# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT-NAME>
gcloud container clusters get-credentials standard-public-cluster-1 --region us-central1 --project kdaida123

# Run kubectl with the new plugin prior to the release of v1.25
vi ~/.bashrc
USE_GKE_GCLOUD_AUTH_PLUGIN=True

# Reload the environment value
source ~/.bashrc

# Check if Environment variable loaded in Terminal
echo $USE_GKE_GCLOUD_AUTH_PLUGIN

# Verify kubectl version
kubectl version --short

# Install kubectl (if not installed)
gcloud components install kubectl

# Configure kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --zone <ZONE> --project <PROJECT-ID>
gcloud container clusters get-credentials standard-cluster-1 --zone us-central1-c --project kdaida123

# Verify Kubernetes Worker Nodes
kubectl get nodes

# Verify System Pods in kube-system Namespace
kubectl -n kube-system get pods

# Verify kubeconfig file
cat $HOME/.kube/config
kubectl config view
```

## Step-06: Review Sample Application: 01-kubernetes-deployment.yaml
- **Folder:** kube-manifests
```yaml
apiVersion: apps/v1
kind: Deployment
metadata: # Dictionary
  name: myapp1-deployment
spec: # Dictionary
  replicas: 2
  selector:
    matchLabels:
      app: myapp1
  template:
    metadata: # Dictionary
      name: myapp1-pod
      labels: # Dictionary
        app: myapp1 # Key value pairs
    spec:
      containers: # List
        - name: myapp1-container
          image: stacksimplify/kubenginx:1.0.0
          ports:
            - containerPort: 80
```

## Step-07: Review Sample Application: 02-kubernetes-loadbalancer-service.yaml
- **Folder:** kube-manifests
```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp1-lb-service
spec:
  type: LoadBalancer # ClusterIP, # NodePort
  selector:
    app: myapp1
  ports:
    - name: http
      port: 80 # Service Port
      targetPort: 80 # Container Port
```

## Step-08: Upload Sample App to Google CloudShell
```t
# Upload Sample App to Google CloudShell
Go to Google CloudShell -> 3 Dots -> Upload -> Folder -> google-kubernetes-engine

# Change Directory
cd google-kubernetes-engine/02-Create-GKE-Cluster

# Verify folder uploaded
ls kube-manifests/

# Verify Files
cat kube-manifests/01-kubernetes-deployment.yaml
cat kube-manifests/02-kubernetes-loadbalancer-service.yaml
```

## Step-09: Deploy Sample Application and Verify
```t
# Change Directory
cd google-kubernetes-engine/02-Create-GKE-Cluster

# Deploy Sample App using kubectl
kubectl apply -f kube-manifests/

# List Deployments
kubectl get deploy

# List Pods
kubectl get pod

# List Services
kubectl get svc

# Access Sample Application
http://<EXTERNAL-IP>
```

## Step-10: Verify Workloads in GKE Dashboard
- Go to GCP Console -> Kubernetes Engine -> Workloads
- Click on **myapp1-deployment**
- Review all tabs

## Step-11: Verify Services in GKE Dashboard
- Go to GCP Console -> Kubernetes Engine -> Services & Ingress
- Click on **myapp1-lb-service**
- Review all tabs

## Step-12: Verify Load Balancer
- Go to GCP Console -> Networking Services -> Load Balancing
- Review all tabs

## Step-13: Clean-Up
- Go to Google Cloud Shell
```t
# Change Directory
cd google-kubernetes-engine/02-Create-GKE-Cluster

# Delete Kubernetes Deployment and Service
kubectl delete -f kube-manifests/

# List Deployments
kubectl get deploy

# List Pods
kubectl get pod

# List Services
kubectl get svc
```
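For reference only, the Step-02 console settings can be approximated from the gcloud CLI. The sketch below is an assumption-based equivalent and not part of the original demo; flag names and defaults can differ between gcloud releases, so verify it with `gcloud container clusters create --help` before running:
```t
# (Sketch) Approximate CLI equivalent of the Step-02 console settings
gcloud container clusters create standard-public-cluster-1 \
  --region us-central1 \
  --node-locations us-central1-a,us-central1-b,us-central1-c \
  --release-channel rapid \
  --num-nodes 1 \
  --machine-type e2-small \
  --image-type COS_CONTAINERD \
  --disk-type pd-balanced \
  --disk-size 20 \
  --spot \
  --enable-dataplane-v2 \
  --workload-pool=kdaida123.svc.id.goog
```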
---
title: GCP Google Kubernetes Engine Kubernetes Limit Range
description: Implement GCP Google Kubernetes Engine Kubernetes Limit Range
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123

# List Kubernetes Nodes
kubectl get nodes
```

## Step-01: Introduction
1. Kubernetes Namespaces - LimitRange
2. Kubernetes Namespaces - Declarative using YAML

## Step-02: Create Namespace manifest
- **Important Note:** The file name starts with `01-` so that the namespace gets created first when the k8s objects are applied and no error is thrown.
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: qa
```

## Step-03: Create LimitRange manifest
- Instead of specifying `resources like cpu and memory` in every container spec of a pod definition, we can provide the default CPU & Memory for all containers in a namespace using `LimitRange`
```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-cpu-mem-limit-range
  namespace: qa
spec:
  limits:
    - default:
        cpu: "400m" # If not specified, the default limit is 1 vCPU per container
        memory: "256Mi" # If not specified, the Container's memory limit is set to 512Mi, which is the default memory limit for the namespace.
      defaultRequest:
        cpu: "200m" # If not specified, it defaults to whatever is specified in limits.default.cpu
        memory: "128Mi" # If not specified, it defaults to whatever is specified in limits.default.memory
      max:
        cpu: "500m"
        memory: "500Mi"
      min:
        cpu: "100m"
        memory: "100Mi"
      type: Container
```

## Step-04: Demo-01: Create Kubernetes Resources & Test
```t
# Create Kubernetes Resources
kubectl apply -f 01-kube-manifests-LimitRange-defaults

# List Pods
kubectl get pods -n qa -w

# View Pod Specification (CPU & Memory)
kubectl describe pod <pod-name> -n qa
Observation:
1. We will find the "Limits" in the pod container equal to "default" from the LimitRange
2. We will find the "Requests" in the pod container equal to "defaultRequest"

# Sample from Pod description
    Limits:
      cpu:     400m
      memory:  256Mi
    Requests:
      cpu:     200m
      memory:  128Mi

# Get & Describe Limits
kubectl get limits -n qa
kubectl describe limits default-cpu-mem-limit-range -n qa

# List Services
kubectl get svc -n qa

# Access Application
http://<SVC-External-IP>/
```

## Step-05: Demo-01: Clean-Up
- Delete all Kubernetes objects created as part of this section
```t
# Delete All
kubectl delete -f 01-kube-manifests-LimitRange-defaults/
```

## Step-06: Demo-02: Update Demo-02 Deployment Manifest with Requests & Limits
- Negative case testing
- When deployed with these `Requests & Limits`, where `cpu=600m in limits` is above the `max cpu = 500m` in the LimitRange `default-cpu-mem-limit-range`, the pods should not be scheduled and an error should show up in the `ReplicaSet Events`.
- **File Name:** 03-kubernetes-deployment.yaml
```t
# Update Demo-02 Deployment Manifest with Requests & Limits
          resources:
            requests:
              memory: "128Mi"
              cpu: "450m"
            limits:
              memory: "256Mi"
              cpu: "600m"
```

## Step-07: Demo-02: Create Kubernetes Resources & Test
```t
# Create Kubernetes Resources
kubectl apply -f 02-kube-manifests-LimitRange-MinMax

# List Pods
kubectl get pods -n qa
Observation:
1. No Pod should be scheduled

# List Deployments
kubectl get deploy -n qa
Observation: 0/2 ready, which means no pods scheduled. Verify ReplicaSet Events

# List & Describe ReplicaSets
kubectl get rs -n qa
kubectl describe rs <ReplicaSet-Name> -n qa
Observation: Below error will be displayed
Warning  FailedCreate  18s (x5 over 56s)  replicaset-controller  (combined from similar events): Error creating: pods "myapp1-deployment-5dd9f78fd8-k5th6" is forbidden: maximum cpu usage per Container is 500m, but limit is 600m

# Get & Describe Limits
kubectl get limits -n qa
kubectl describe limits default-cpu-mem-limit-range -n qa

# List Services
kubectl get svc -n qa

# Access Application
http://<SVC-External-IP>/
```

## Step-08: Demo-02: Update Deployment resources.limit=500m
- **File Name:** 03-kubernetes-deployment.yaml
```t
# Demo-02: Update Deployment resources.limit=500m
          resources:
            requests:
              memory: "128Mi"
              cpu: "450m"
            limits:
              memory: "256Mi"
              cpu: "500m" # This is equal to the Max value defined in LimitRange, so Pods will be scheduled.
```

## Step-09: Demo-02: Deploy the updated Deployment
```t
# Deploy the Updated Deployment
kubectl apply -f 02-kube-manifests-LimitRange-MinMax/03-kubernetes-deployment.yaml

# List Pods
kubectl get pods -n qa
Observation:
1. Pods should be scheduled now.
```

## Step-10: Demo-02: Clean-Up
```t
# Delete Demo-02 Kubernetes Resources
kubectl delete -f 02-kube-manifests-LimitRange-MinMax
```

## References:
- https://kubernetes.io/docs/tasks/administer-cluster/namespaces-walkthrough/
- https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/
- https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/
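As a quick complement to the Step-04 verification, the requests and limits that the LimitRange injected can also be printed directly with a JSONPath query. This is a minimal sketch (not part of the original demo) and assumes the Demo-01 pods are still running in the `qa` namespace:
```t
# (Sketch) Print the first container's resources for every pod in the qa namespace
kubectl get pods -n qa -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].resources}{"\n"}{end}'
```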
---
title: GCP Google Kubernetes Engine Access to Multiple Clusters
description: Implement GCP Google Kubernetes Engine Access to Multiple Clusters
---

## Step-00: Pre-requisites
- We should have the two clusters created and ready
  - standard-cluster-private-1
  - autopilot-cluster-private-1

## Step-01: Introduction
- Configure access to Multiple Clusters
- Understand the kube config file $HOME/.kube/config
- Understand the kubectl config command
  - kubectl config view
  - kubectl config current-context
  - kubectl config use-context <context-name>
  - kubectl config get-contexts
  - kubectl config get-clusters

## Step-02: Pre-requisite
- Verify if you have any two GKE Clusters created and ready for use
  - standard-cluster-private-1
  - autopilot-cluster-private-1

## Step-03: Clean-Up kube config file
```t
# Clean existing kube configs
cd $HOME/.kube
>config
cat config
```

## Step-04: Configure Standard Cluster Access for kubectl
- Understand commands
  - kubectl config view
  - kubectl config current-context
```t
# View kubeconfig
kubectl config view

# Configure kubeconfig for kubectl: standard-cluster-private-1
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123

# View kubeconfig
kubectl config view

# View Cluster Information
kubectl cluster-info

# View the current context for kubectl
kubectl config current-context
```

## Step-05: Configure Autopilot Cluster Access for kubectl
```t
# Configure kubeconfig for kubectl: autopilot-cluster-private-1
gcloud container clusters get-credentials autopilot-cluster-private-1 --region us-central1 --project kdaida123

# View the current context for kubectl
kubectl config current-context

# View Cluster Information
kubectl cluster-info

# View kubeconfig
kubectl config view
```

## Step-06: Switch Contexts between clusters
- Understand the kubectl config command **use-context**
```t
# View the current context for kubectl
kubectl config current-context

# View kubeconfig
kubectl config view
Get contexts.context.name to which you want to switch

# Switch Context
kubectl config use-context gke_kdaida123_us-central1_standard-cluster-private-1

# View the current context for kubectl
kubectl config current-context

# View Cluster Information
kubectl cluster-info
```

## Step-07: List Contexts configured in kubeconfig
```t
# List Contexts
kubectl config get-contexts
```

## Step-08: List Clusters configured in kubeconfig
```t
# List Clusters
kubectl config get-clusters
```
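Optionally, the long auto-generated GKE context names can be given shorter aliases. A small sketch, assuming the two contexts shown above already exist in your kubeconfig (the alias names are arbitrary):
```t
# (Sketch) Rename contexts to shorter aliases
kubectl config rename-context gke_kdaida123_us-central1_standard-cluster-private-1 standard-private-1
kubectl config rename-context gke_kdaida123_us-central1_autopilot-cluster-private-1 autopilot-private-1

# Switch using the shorter alias
kubectl config use-context standard-private-1
```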
---
title: gcloud cli install on macOS
description: Learn to install gcloud cli on MacOS
---

## Step-01: Introduction
- Install gcloud CLI on MacOS
- Configure kubeconfig for kubectl on your local terminal
- Verify if you are able to reach GKE Cluster using kubectl from your local terminal

## Step-02: Install gcloud cli on MacOS
- [Install gcloud cli](https://cloud.google.com/sdk/docs/install-sdk#mac)
```t
# Verify Python Version (Supported versions are Python 3 (3.5 to 3.8), 3.7 recommended)
python3 -V

# Determine your machine hardware
uname -m

# Create Folder
mkdir gcloud-cli-software

# Download gcloud cli based on machine hardware
## Important Note: Download the latest version available on that respective day
Download Link: https://cloud.google.com/sdk/docs/install-sdk#mac
## As on today the below is the latest version (x86_64 bit)
curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-cli-418.0.0-darwin-x86_64.tar.gz

# Unzip binary
ls -lrta
tar -zxf google-cloud-cli-418.0.0-darwin-x86_64.tar.gz

# Run the install script with screen reader mode on:
./google-cloud-sdk/install.sh --screen-reader=true
```

## Step-03: Verify gcloud cli version
```t
# Open new terminal as PATH is updated
# gcloud cli version
gcloud version

## Sample Output
Kalyans-Mac-mini:gcloud-cli-software kalyanreddy$ gcloud version
Google Cloud SDK 418.0.0
bq 2.0.85
core 2023.02.13
gcloud-crc32c 1.0.0
gsutil 5.20
Kalyans-Mac-mini:gcloud-cli-software kalyanreddy$
```

## Step-04: Initialize gcloud CLI in local Terminal
```t
# Initialize gcloud CLI
./google-cloud-sdk/bin/gcloud init

# gcloud config Configurations Commands (For Reference)
gcloud config list
gcloud config configurations list
gcloud config configurations activate
gcloud config configurations create
gcloud config configurations delete
gcloud config configurations describe
gcloud config configurations rename
```

## Step-05: Verify gke-gcloud-auth-plugin
```t
# Change Directory: gcloud-cli-software

## Important Note about gke-gcloud-auth-plugin:
1. Kubernetes clients require an authentication plugin, gke-gcloud-auth-plugin, which uses the Client-go Credential Plugins framework to provide authentication tokens to communicate with GKE clusters

# Verify if gke-gcloud-auth-plugin installed
gke-gcloud-auth-plugin --version

# Install gke-gcloud-auth-plugin
gcloud components install gke-gcloud-auth-plugin

# Verify if gke-gcloud-auth-plugin installed
gke-gcloud-auth-plugin --version
```

## Step-06: Remove any existing kubectl clients
```t
# Verify kubectl version
kubectl version --short
which kubectl
Observation:
1. We are not using kubectl from gcloud CLI and we need to fix that.

# Removing existing kubectl
which kubectl
rm /usr/local/bin/kubectl
```

## Step-07: Install kubectl client from gcloud CLI
```t
# List gcloud components
gcloud components list

## SAMPLE OUTPUT
Status: Not Installed
Name: kubectl
ID: kubectl
Size: < 1 MiB

# Install kubectl client
gcloud components install kubectl

# Verify kubectl version (OPEN NEW TERMINAL AS PATH IS UPDATED)
kubectl version --short
which kubectl
```

## Step-08: Fix kubectl client version equal to GKE Cluster version
- **Important Note:** You must use a kubectl version that is within one minor version difference of your Kubernetes cluster control plane.
- For example, a 1.24 kubectl client works with Kubernetes Cluster 1.23, 1.24 and 1.25 clusters.
- As our GKE cluster version is 1.26, we will also upgrade our kubectl to 1.26
```t
# Verify kubectl version (OPEN NEW TERMINAL AS PATH IS UPDATED)
kubectl version --short
which kubectl

# Change Directory
cd /Users/kalyanreddy/Documents/course-repos/gcloud-cli-software/google-cloud-sdk/bin/

# List files
ls -lrta

# Backup existing kubectl
cp kubectl kubectl_bkup_1.24

# Copy latest kubectl
cp kubectl.1.26 kubectl

# Verify kubectl version
kubectl version --short
which kubectl
```

## Step-09: Configure kubeconfig for kubectl in local desktop terminal
```t
# Clean-Up kubeconfig file (if any older configs exist)
rm $HOME/.kube/config

# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <GKE-CLUSTER-NAME> --region <REGION> --project <PROJECT>
gcloud container clusters get-credentials standard-public-cluster-1 --region us-central1 --project kdaida123

# Verify Kubernetes Worker Nodes
kubectl get nodes

# Verify System Pods in kube-system Namespace
kubectl -n kube-system get pods

# Verify kubeconfig file
cat $HOME/.kube/config
kubectl config view
```

## References
- [gcloud CLI](https://cloud.google.com/sdk/gcloud)
- [Install the Google Cloud CLI](https://cloud.google.com/sdk/docs/install-sdk#mac)
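Related to Step-08, the client/server version skew can be confirmed in a single command once the kubeconfig from Step-09 is in place. A minimal sketch (this check is not part of the original course material):
```t
# (Sketch) Compare clientVersion and serverVersion; they should be within one minor version
kubectl version --output=yaml
```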
---
title: GCP Google Kubernetes Engine GKE Headless Service
description: Implement GCP Google Kubernetes Engine GKE Headless Service
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-public-cluster-1 --region us-central1 --project kdaida123

# List GKE Kubernetes Worker Nodes
kubectl get nodes
```

## Step-01: Introduction
- Implement Kubernetes ClusterIP and Headless Service
- Understand Headless Service in detail

## Step-02: 01-kubernetes-deployment.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata: # Dictionary
  name: myapp1-deployment
spec: # Dictionary
  replicas: 4
  selector:
    matchLabels:
      app: myapp1
  template:
    metadata: # Dictionary
      name: myapp1-pod
      labels: # Dictionary
        app: myapp1 # Key value pairs
    spec:
      containers: # List
        - name: myapp1-container
          #image: stacksimplify/kubenginx:1.0.0
          image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:2.0
          ports:
            - containerPort: 8080
```

## Step-03: 02-kubernetes-clusterip-service.yaml
```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp1-cip-service
spec:
  type: ClusterIP # ClusterIP, # NodePort, # LoadBalancer, # ExternalName
  selector:
    app: myapp1
  ports:
    - name: http
      port: 80 # Service Port
      targetPort: 8080 # Container Port
```

## Step-04: 03-kubernetes-headless-service.yaml
- Add `spec.clusterIP: None`
### VERY IMPORTANT NOTE
1. When using a Headless Service, we should keep the "Service Port and Target Port" the same.
2. A Headless Service sends traffic directly to the Pod using the Pod IP and Container Port.
3. DNS resolution happens directly from the headless service to the Pod IP.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp1-headless-service
spec:
  #type: ClusterIP # ClusterIP, # NodePort, # LoadBalancer, # ExternalName
  clusterIP: None
  selector:
    app: myapp1
  ports:
    - name: http
      port: 8080 # Service Port
      targetPort: 8080 # Container Port
```

## Step-05: Deploy Kubernetes Manifests
```t
# Deploy Kubernetes Manifests
kubectl apply -f 01-kube-manifests

# List Deployments
kubectl get deploy

# List Pods
kubectl get pods
kubectl get pods -o wide
Observation: make a note of the Pod IPs

# List Services
kubectl get svc
Observation:
1. "CLUSTER-IP" will be "NONE" for the Headless Service

## Sample
Kalyans-Mac-mini:19-GKE-Headless-Service kalyanreddy$ kubectl get svc
NAME                      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes                ClusterIP   10.24.0.1    <none>        443/TCP   135m
myapp1-cip-service        ClusterIP   10.24.2.34   <none>        80/TCP    4m9s
myapp1-headless-service   ClusterIP   None         <none>        80/TCP    4m9s
Kalyans-Mac-mini:19-GKE-Headless-Service kalyanreddy$
```

## Step-06: Review Curl Kubernetes Manifests
- **Project Folder:** 02-kube-manifests-curl
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: curl-pod
spec:
  containers:
    - name: curl
      image: curlimages/curl
      command: [ "sleep", "600" ]
```

## Step-07: Deploy curl-pod and Verify ClusterIP and Headless Services
```t
# Deploy curl-pod
kubectl apply -f 02-kube-manifests-curl

# List Services
kubectl get svc

# GKE Cluster Kubernetes Service Full DNS Name format
<svc>.<ns>.svc.cluster.local

# Will open up a terminal session into the container
kubectl exec -it curl-pod -- sh

# ClusterIP Service: nslookup and curl Test
nslookup myapp1-cip-service.default.svc.cluster.local
curl myapp1-cip-service.default.svc.cluster.local

### ClusterIP Service nslookup Output
$ nslookup myapp1-cip-service.default.svc.cluster.local
Server:         10.24.0.10
Address:        10.24.0.10:53

Name:   myapp1-cip-service.default.svc.cluster.local
Address: 10.24.2.34

# Headless Service: nslookup and curl Test
nslookup myapp1-headless-service.default.svc.cluster.local
curl myapp1-headless-service.default.svc.cluster.local:8080
Observation:
1. There is no specific IP for the Headless Service
2. It is directly DNS-resolved to the Pod IPs
3. That said, we should use the same port as the Container Port for the Headless Service (VERY VERY IMPORTANT)

### Headless Service nslookup Output
$ nslookup myapp1-headless-service.default.svc.cluster.local
Server:         10.24.0.10
Address:        10.24.0.10:53

Name:   myapp1-headless-service.default.svc.cluster.local
Address: 10.20.0.25
Name:   myapp1-headless-service.default.svc.cluster.local
Address: 10.20.0.26
Name:   myapp1-headless-service.default.svc.cluster.local
Address: 10.20.1.28
Name:   myapp1-headless-service.default.svc.cluster.local
Address: 10.20.1.29
```

## Step-08: Clean-Up
```t
# Delete Kubernetes Resources
kubectl delete -f 01-kube-manifests

# Delete Kubernetes Resources - Curl Pod
kubectl delete -f 02-kube-manifests-curl
```
---
title: GKE Persistent Disks - Volume Snapshots and Restore
description: Use Google Disks Volume Snapshots and Restore Concepts applied for Kubernetes Workloads
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123
```
3. Feature: Compute Engine persistent disk CSI Driver
   - Verify the feature **Compute Engine persistent disk CSI Driver** is enabled in the GKE Cluster.
   - This is required for mounting Google Compute Engine Persistent Disks to Kubernetes Workloads in the GKE Cluster.

## Step-01: Introduction
1. Deploy UMS WebApp with `01-kube-manifests`
2. Create new users (admin102, admin103)
3. Create Volume Snapshot Kubernetes Objects and Deploy them
4. Delete users (admin102, admin103)
5. Deploy PVC Restore `03-Volume-Restore`
6. Verify that after the restore the 2 users we deleted are back in our UMS App
7. Clean Up (kubectl delete -R -f <Folder>)

## Step-02: Kubernetes YAML Manifests
- **Project Folder:** 01-kube-manifests
- No changes to Kubernetes YAML Manifests, same as Section `21-GKE-PD-existing-SC-standard-rwo`
  - 01-persistent-volume-claim.yaml
  - 02-UserManagement-ConfigMap.yaml
  - 03-mysql-deployment.yaml
  - 04-mysql-clusterip-service.yaml
  - 05-UserMgmtWebApp-Deployment.yaml
  - 06-UserMgmtWebApp-LoadBalancer-Service.yaml

## Step-03: Deploy kube-manifests
```t
# Deploy Kubernetes Manifests
kubectl apply -f 01-kube-manifests/

# List Storage Class
kubectl get sc

# List PVC
kubectl get pvc

# List PV
kubectl get pv

# List ConfigMaps
kubectl get configmap

# List Deployments
kubectl get deploy

# List Pods
kubectl get pods

# List Services
kubectl get svc

# Verify Pod Logs
kubectl get pods
kubectl logs -f <USERMGMT-POD-NAME>
kubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5
```

## Step-04: Verify Persistent Disks
- Go to Compute Engine -> Storage -> Disks
- Search for the `4GB` Persistent Disk
- **Observation:** Review the below items
  - **Zones:** us-central1-c
  - **Type:** Balanced persistent disk
  - **In use by:** gke-standard-cluster-1-default-pool-db7b638f-j5lk

## Step-05: Access Application
```t
# List Services
kubectl get svc

# Access Application
http://<ExternalIP-from-get-service-output>
Username: admin101
Password: password101

# Create New User admin102
Username: admin102
Password: password102
First Name: fname102
Last Name: lname102
Email Address: admin102@stacksimplify.com
Social Security Address: ssn102

# Create New User admin103
Username: admin103
Password: password103
First Name: fname103
Last Name: lname103
Email Address: admin103@stacksimplify.com
Social Security Address: ssn103
```

## Step-06: 02-Volume-Snapshot: Create Volume Snapshots
- **Project Folder:** 02-Volume-Snapshot
### Step-06-01: 01-VolumeSnapshotClass.yaml
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: my-snapshotclass
driver: pd.csi.storage.gke.io
deletionPolicy: Delete
#parameters:
#  storage-locations: us-east2

# Optional Note:
# To use a custom storage location, add a storage-locations parameter to the snapshot class.
# To use this parameter, your clusters must use version 1.21 or later.
```
### Step-06-02: 02-VolumeSnapshot.yaml
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-snapshot1
spec:
  volumeSnapshotClassName: my-snapshotclass
  source:
    persistentVolumeClaimName: mysql-pv-claim
```
### Step-06-03: Deploy Volume Snapshot Kubernetes Manifests
```t
# Deploy Volume Snapshot Kubernetes Manifests
kubectl apply -f 02-Volume-Snapshot/

# List VolumeSnapshotClass
kubectl get volumesnapshotclass

# Describe VolumeSnapshotClass
kubectl describe volumesnapshotclass my-snapshotclass

# List VolumeSnapshot
kubectl get volumesnapshot

# Describe VolumeSnapshot
kubectl describe volumesnapshot my-snapshot1

# Verify the Snapshots
Go to Compute Engine -> Storage -> Snapshots
Observation:
1. You should find the new snapshot created
2. Review the "Creation Time"
3. Review the "Disk Size: 4GB"
```

## Step-07: Delete users admin102, admin103
```t
# List Services
kubectl get svc

# Access Application
http://<ExternalIP-from-get-service-output>
Username: admin101
Password: password101

# Delete Users
admin102
admin103
```

## Step-08: 03-Volume-Restore: Create Volume Restore
- **Project Folder:** 03-Volume-Restore
### Step-08-01: 01-restore-pvc.yaml
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-restore
spec:
  dataSource:
    name: my-snapshot1
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  storageClassName: standard-rwo
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
```
### Step-08-02: 02-mysql-deployment.yaml
- Update Claim Name from `claimName: mysql-pv-claim` to `claimName: pvc-restore`
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.6
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: dbpassword11
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
            - name: usermanagement-dbcreation-script
              mountPath: /docker-entrypoint-initdb.d # https://hub.docker.com/_/mysql Refer "Initializing a fresh instance"
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            #claimName: mysql-pv-claim
            claimName: pvc-restore
        - name: usermanagement-dbcreation-script
          configMap:
            name: usermanagement-dbcreation-script
```
### Step-08-03: Deploy Volume Restore Kubernetes Manifests
```t
# Deploy Volume Restore Kubernetes Manifests
kubectl apply -f 03-Volume-Restore/

# List PVC
kubectl get pvc

# List PV
kubectl get pv

# List Pods
kubectl get pods

# Restart Deployments (Optional - If ERRORS)
kubectl rollout restart deployment mysql
kubectl rollout restart deployment usermgmt-webapp

# Review Persistent Disk
1. Go to Compute Engine -> Storage -> Disks
2. You should find a new "Balanced persistent disk" created as part of the new PVC "pvc-restore"
3. To get the exact Disk name for the "pvc-restore" PVC run the command "kubectl get pvc"

# Access Application
http://<ExternalIP-from-get-service-output>
Username: admin101
Password: password101
Observation:
1. You should find admin102, admin103 present
2. That proves we have restored the MySQL Data using VolumeSnapshots and PVC
```

## Step-09: Clean-Up
```t
# Delete All (Disks, Snapshots)
kubectl delete -f 01-kube-manifests -f 02-Volume-Snapshot -f 03-Volume-Restore

# List PVC
kubectl get pvc

# List PV
kubectl get pv

# List VolumeSnapshotClass
kubectl get volumesnapshotclass

# List VolumeSnapshot
kubectl get volumesnapshot

# Verify Persistent Disks
1. Go to Compute Engine -> Storage -> Disks -> REFRESH
2. The two disks created as part of this demo are deleted

# Verify Disk Snapshots
1. Go to Compute Engine -> Storage -> Snapshots -> REFRESH
2. There should not be any snapshot which we created as part of this demo.
```
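The console checks in Step-06 and Step-09 can also be done from the CLI. A minimal sketch, assuming gcloud is configured for the same project as the GKE cluster:
```t
# (Sketch) List persistent disks and snapshots from the CLI
gcloud compute disks list
gcloud compute snapshots list
```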
gcp gke docs
---
title: GKE Persistent Disks Volume Snapshots and Restore
description: Use Google Disks Volume Snapshots and Restore Concepts applied for Kubernetes Workloads
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123
```
3. **Feature: Compute Engine persistent disk CSI Driver**
- Verify the Feature **Compute Engine persistent disk CSI Driver** is enabled in the GKE Cluster.
- This is required for mounting the Google Compute Engine Persistent Disks to Kubernetes Workloads in the GKE Cluster.

## Step-01: Introduction
1. Deploy UMS WebApp with 01-kube-manifests
2. Create new Users admin102, admin103
3. Create Volume Snapshot Kubernetes Objects and Deploy them
4. Delete Users admin102, admin103
5. Deploy PVC Restore (03-Volume-Restore)
6. Verify if, after restore, the 2 users we deleted got restored in our UMS App
7. Clean-Up (kubectl delete -R -f <Folder>)

## Step-02: Kubernetes YAML Manifests
- **Project Folder:** 01-kube-manifests
- No changes to Kubernetes YAML Manifests, same as Section `21-GKE-PD-existing-SC-standard-rwo`
  - 01-persistent-volume-claim.yaml
  - 02-UserManagement-ConfigMap.yaml
  - 03-mysql-deployment.yaml
  - 04-mysql-clusterip-service.yaml
  - 05-UserMgmtWebApp-Deployment.yaml
  - 06-UserMgmtWebApp-LoadBalancer-Service.yaml

## Step-03: Deploy kube-manifests
```t
# Deploy Kubernetes Manifests
kubectl apply -f 01-kube-manifests

# List Storage Class
kubectl get sc

# List PVC
kubectl get pvc

# List PV
kubectl get pv

# List ConfigMaps
kubectl get configmap

# List Deployments
kubectl get deploy

# List Pods
kubectl get pods

# List Services
kubectl get svc

# Verify Pod Logs
kubectl get pods
kubectl logs -f <USERMGMT-POD-NAME>
kubectl logs -f usermgmt-webapp-6ff7d7d849-7lrg5
```

## Step-04: Verify Persistent Disks
- Go to Compute Engine -> Storage -> Disks
- Search for the 4GB Persistent Disk
- **Observation:** Review the below items
  - Zones: us-central1-c
  - Type: Balanced persistent disk
  - In use by: gke-standard-cluster-1-default-pool-db7b638f-j5lk

## Step-05: Access Application
```t
# List Services
kubectl get svc

# Access Application
http://<ExternalIP-from-get-service-output>
Username: admin101
Password: password101

# Create New User admin102
Username: admin102
Password: password102
First Name: fname102
Last Name: lname102
Email Address: admin102@stacksimplify.com
Social Security Address: ssn102

# Create New User admin103
Username: admin103
Password: password103
First Name: fname103
Last Name: lname103
Email Address: admin103@stacksimplify.com
Social Security Address: ssn103
```

## Step-06: 02-Volume-Snapshot: Create Volume Snapshots
- **Project Folder:** 02-Volume-Snapshot

### Step-06-01: 01-VolumeSnapshotClass.yaml
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: my-snapshotclass
driver: pd.csi.storage.gke.io
deletionPolicy: Delete
parameters:
  storage-locations: us-east2
```
- **Optional Note:** To use a custom storage location, add a `storage-locations` parameter to the snapshot class. To use this parameter, your clusters must use version 1.21 or later.

### Step-06-02: 02-VolumeSnapshot.yaml
```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-snapshot1
spec:
  volumeSnapshotClassName: my-snapshotclass
  source:
    persistentVolumeClaimName: mysql-pv-claim
```

### Step-06-03: Deploy Volume Snapshot Kubernetes Manifests
```t
# Deploy Volume Snapshot Kubernetes Manifests
kubectl apply -f 02-Volume-Snapshot

# List VolumeSnapshotClass
kubectl get volumesnapshotclass

# Describe VolumeSnapshotClass
kubectl describe volumesnapshotclass my-snapshotclass

# List VolumeSnapshot
kubectl get volumesnapshot

# Describe VolumeSnapshot
kubectl describe volumesnapshot my-snapshot1

# Verify the Snapshots
Go to Compute Engine -> Storage -> Snapshots
Observation:
1. You should find the new snapshot created
2. Review the Creation Time
3. Review the Disk Size (4GB)
```

## Step-07: Delete users admin102, admin103
```t
# List Services
kubectl get svc

# Access Application
http://<ExternalIP-from-get-service-output>
Username: admin101
Password: password101

# Delete Users
admin102, admin103
```

## Step-08: 03-Volume-Restore: Create Volume Restore
- **Project Folder:** 03-Volume-Restore

### Step-08-01: 01-restore-pvc.yaml
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-restore
spec:
  dataSource:
    name: my-snapshot1
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  storageClassName: standard-rwo
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
```

### Step-08-02: 02-mysql-deployment.yaml
- Update Claim Name from `claimName: mysql-pv-claim` to `claimName: pvc-restore`
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.6
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: dbpassword11
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
            - name: usermanagement-dbcreation-script
              mountPath: /docker-entrypoint-initdb.d # https://hub.docker.com/_/mysql (Refer "Initializing a fresh instance")
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            #claimName: mysql-pv-claim
            claimName: pvc-restore
        - name: usermanagement-dbcreation-script
          configMap:
            name: usermanagement-dbcreation-script
```

### Step-08-03: Deploy Volume Restore Kubernetes Manifests
```t
# Deploy Volume Restore Kubernetes Manifests
kubectl apply -f 03-Volume-Restore

# List PVC
kubectl get pvc

# List PV
kubectl get pv

# List Pods
kubectl get pods

# Restart Deployments (Optional - If ERRORS)
kubectl rollout restart deployment mysql
kubectl rollout restart deployment usermgmt-webapp

# Review Persistent Disk
1. Go to Compute Engine -> Storage -> Disks
2. You should find a new Balanced persistent disk created as part of the new PVC "pvc-restore"
3. To get the exact Disk name for the "pvc-restore" PVC, run command "kubectl get pvc"

# Access Application
http://<ExternalIP-from-get-service-output>
Username: admin101
Password: password101
Observation:
1. You should find admin102, admin103 present
2. That proves we have restored the MySQL Data using VolumeSnapshots and PVC
```

## Step-09: Clean-Up
```t
# Delete All (Disks, Snapshots)
kubectl delete -f 01-kube-manifests -f 02-Volume-Snapshot -f 03-Volume-Restore

# List PVC
kubectl get pvc

# List PV
kubectl get pv

# List VolumeSnapshotClass
kubectl get volumesnapshotclass

# List VolumeSnapshot
kubectl get volumesnapshot

# Verify Persistent Disks
1. Go to Compute Engine -> Storage -> Disks (REFRESH)
2. The two disks created as part of this demo should be deleted

# Verify Disk Snapshots
1. Go to Compute Engine -> Storage -> Snapshots (REFRESH)
2. There should not be any snapshot which we created as part of this demo
```
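- If you prefer to double-check the snapshot status and the clean-up from the CLI instead of the console, a minimal sketch (assuming the `my-snapshot1` VolumeSnapshot created above and a reasonably recent `kubectl`/`gcloud`):
```t
# Check that the VolumeSnapshot is ready to use (should print "true")
kubectl get volumesnapshot my-snapshot1 -o jsonpath='{.status.readyToUse}'

# List Compute Engine disks and snapshots from the CLI
# (after Step-09 Clean-Up, the demo disks and snapshots should be gone)
gcloud compute disks list
gcloud compute snapshots list
```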
---
title: GCP Google Kubernetes Engine GKE CI
description: Implement GCP Google Kubernetes Engine GKE Continuous Integration
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123

# List Kubernetes Nodes
kubectl get nodes
```

## Step-01: Introduction
- Implement Continuous Integration for GKE Workloads using
  - Google Cloud Source
  - Google Cloud Build
  - Google Artifact Repository

## Step-02: Enable APIs in Google Cloud
```t
# Enable APIs in Google Cloud
gcloud services enable container.googleapis.com \
    cloudbuild.googleapis.com \
    sourcerepo.googleapis.com \
    artifactregistry.googleapis.com

# Google Cloud Services
GKE: container.googleapis.com
Cloud Build: cloudbuild.googleapis.com
Cloud Source: sourcerepo.googleapis.com
Artifact Registry: artifactregistry.googleapis.com
```

## Step-03: Create Artifact Repository
```t
# List Artifact Repositories
gcloud artifacts repositories list

# Create Artifact Repository
gcloud artifacts repositories create myapps-repository \
    --repository-format=docker \
    --location=us-central1

# List Artifact Repositories
gcloud artifacts repositories list

# Describe Artifact Repository
gcloud artifacts repositories describe myapps-repository --location=us-central1
```

## Step-04: Install Git client on local desktop (if not present)
```t
# Download and Install Git Client
https://git-scm.com/downloads
```

## Step-05: Create SSH Keys for Git Repo Access
- [Generating SSH Key Pair](https://cloud.google.com/source-repositories/docs/authentication#generate_a_key_pair)
```t
# Change Directory
cd 01-SSH-Keys

# Create SSH Keys
ssh-keygen -t [KEY_TYPE] -C "[USER_EMAIL]"
KEY_TYPE: rsa, ecdsa, ed25519
USER_EMAIL: dkalyanreddy@gmail.com

# Replace Values KEY_TYPE, USER_EMAIL
ssh-keygen -t ed25519 -C "dkalyanreddy@gmail.com"
Provide the File Name as "id_gcp_cloud_source"

## Sample Output
Kalyans-Mac-mini:01-SSH-Keys kalyanreddy$ ssh-keygen -t ed25519 -C "dkalyanreddy@gmail.com"
Generating public/private ed25519 key pair.
Enter file in which to save the key (/Users/kalyanreddy/.ssh/id_ed25519): id_gcp_cloud_source
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in id_gcp_cloud_source
Your public key has been saved in id_gcp_cloud_source.pub
The key fingerprint is:
SHA256:YialyCj3XaSa4b8ewk4bcK1hXxO7DDM5uiCP1J2TOZ0 dkalyanreddy@gmail.com
The key's randomart image is:
+--[ED25519 256]--+
|                 |
|                 |
|   . o           |
|  o . + + o      |
| o = B % S       |
|...B.&=X.o       |
|....%B+Eo        |
|.+ + *o.         |
|. . +.+.         |
+----[SHA256]-----+
Kalyans-Mac-mini:01-SSH-Keys kalyanreddy$ ls -lrta
total 16
drwxr-xr-x  6 kalyanreddy  staff  192 Jun 29 09:45 ..
-rw-------  1 kalyanreddy  staff  419 Jun 29 09:46 id_gcp_cloud_source
drwxr-xr-x  4 kalyanreddy  staff  128 Jun 29 09:46 .
-rw-r--r--  1 kalyanreddy  staff  104 Jun 29 09:46 id_gcp_cloud_source.pub
Kalyans-Mac-mini:01-SSH-Keys kalyanreddy$
```

## Step-06: Review SSH Keys (Public and Private Keys)
```t
# Change Directory
cd 01-SSH-Keys

# Review Private Key: id_gcp_cloud_source
cat id_gcp_cloud_source

# Review Public Key: id_gcp_cloud_source.pub
cat id_gcp_cloud_source.pub
```

## Step-07: Update SSH Public Key in Google Cloud Source
- Go to -> Source Repositories -> 3 Dots -> Manage SSH Keys -> Register SSH Key
- [Google Cloud Source URL](https://source.cloud.google.com/)
```t
# Key Name
Name: gke-course
Key: Output from command "cat id_gcp_cloud_source.pub" in previous step. Put content from Public Key
```
- Click on **Register**

## Step-08: Update SSH Private Key in Git Config
- Update SSH Private Key in your local desktop Git Config
```t
# Copy SSH Private Key to your ".ssh" folder in your Home Directory from your course
cd 01-SSH-Keys
cp id_gcp_cloud_source $HOME/.ssh

# Change Directory (Your local desktop home directory)
cd $HOME/.ssh

# Verify File in "$HOME/.ssh"
ls -lrta id_gcp_cloud_source

# Verify existing git "config" file
cat config

# Backup any existing "config" file
cp config config_bkup_before_cloud_source

# Update "config" file to point to "id_gcp_cloud_source" private key
vi config

## Sample Output after changes
Kalyans-Mac-mini:.ssh kalyanreddy$ cat config
Host *
  AddKeysToAgent yes
  UseKeychain yes
  IdentityFile ~/.ssh/id_gcp_cloud_source
Kalyans-Mac-mini:.ssh kalyanreddy$

# Backup config with cloudsource
cp config config_with_cloud_source_key
```

## Step-09: Update Git Global Config in your local desktop
```t
# List Global Git Config
git config --list

# Update Global Git Config
git config --global user.email "YOUR_EMAIL_ADDRESS"
git config --global user.name "YOUR_NAME"

# Replace YOUR_EMAIL_ADDRESS, YOUR_NAME
git config --global user.name "Kalyan Reddy Daida"
git config --global user.email "dkalyanreddy@gmail.com"

# List Global Git Config
git config --list
```

## Step-10: Create Git repositories in Cloud Source
```t
# List Cloud Source Repository
gcloud source repos list

# Create Git repositories in Cloud Source
gcloud source repos create myapp1-app-repo

# List Cloud Source Repository
gcloud source repos list

# Verify using Cloud Console
Search for -> Source Repositories
https://source.cloud.google.com/repos
```

## Step-11: Clone Cloud Source Git Repository, Commit a Change, Push to Remote Repo and Verify
```t
# Change Directory
cd course-repos

# Verify using Cloud Console
Search for -> Source Repositories
https://source.cloud.google.com/repos
Go to Repo -> myapp1-app-repo -> SSH Authentication

# Copy the git clone command and run
git clone ssh://dkalyanreddy@gmail.com@source.developers.google.com:2022/p/kdaida123/r/myapp1-app-repo

# Change Directory
cd myapp1-app-repo

# Create a simple readme file
touch README.md
echo "# GKE CI Demo" > README.md
ls -lrta

# Add Files and do local commit
git add .
git commit -am "First Commit"

# Push file to Cloud Source Git Repo (Remote Repo)
git push

# Verify in Git Remote Repo
Search for -> Source Repositories
https://source.cloud.google.com/repos
Go to Repo -> myapp1-app-repo
```

## Step-12: Review Files in 02-Docker-Image folder
1. Dockerfile
2. index.html

## Step-13: Copy files from 02-Docker-Image folder to Git Repo
```t
# Change Directory
cd 57-GKE-Continuous-Integration/02-Docker-Image

# Copy Files to Git repo "myapp1-app-repo"
1. Dockerfile
2. index.html

# Local Git Commit and Push to Remote Repo
git add .
git commit -am "Second Commit"
git push

# Verify in Git Remote Repo
Search for -> Source Repositories
https://source.cloud.google.com/repos
Go to Repo -> myapp1-app-repo
```

## Step-14: Create a container image with Cloud Build and store it in Artifact Registry using gcloud builds command
```t
# Change Directory (Git App Repo: myapp1-app-repo)
cd myapp1-app-repo

# Get latest git commit id (current branch)
git rev-parse HEAD

# Get latest git commit id first 7 chars (current branch)
git rev-parse --short=7 HEAD

# Ensure you are in local git repo folder where "Dockerfile, index.html" present
cd myapp1-app-repo

# Create a Cloud Build build based on the latest commit
gcloud builds submit --tag="us-central1-docker.pkg.dev/${PROJECT_ID}/${APP_ARTIFACT_REPO}/myapp1:${COMMIT_ID}" .

# Replace Values ${PROJECT_ID}, ${APP_ARTIFACT_REPO}, ${COMMIT_ID}
gcloud builds submit --tag="us-central1-docker.pkg.dev/kdaida123/myapps-repository/myapp1:6f7d338" .
```

## Step-15: Review Cloud Build YAML file
```yaml
steps:
# This step builds the container image.
- name: 'gcr.io/cloud-builders/docker'
  id: Build
  args:
  - 'build'
  - '-t'
  - 'us-central1-docker.pkg.dev/$PROJECT_ID/myapps-repository/myapp1:$SHORT_SHA'
  - '.'

# This step pushes the image to Artifact Registry
# The PROJECT_ID and SHORT_SHA variables are automatically
# replaced by Cloud Build.
- name: 'gcr.io/cloud-builders/docker'
  id: Push
  args:
  - 'push'
  - 'us-central1-docker.pkg.dev/$PROJECT_ID/myapps-repository/myapp1:$SHORT_SHA'
```

## Step-16: Copy cloudbuild.yaml to Git Repo
```t
# Change Directory
cd 57-GKE-Continuous-Integration/03-cloudbuild-yaml

# Copy Files to Git repo
1. cloudbuild.yaml

# Local Git Commit and Push to Remote Repo
git add .
git commit -am "Third Commit"
git push

# Verify in Git Remote Repo
Search for -> Source Repositories
https://source.cloud.google.com/repos
Go to Repo -> myapp1-app-repo
```

## Step-17: Create Continuous Integration Pipeline in Cloud Build
- Go to Cloud Build -> Dashboard -> Region: us-central1 -> Click on **SET UP BUILD TRIGGERS** [OR]
- Go to Cloud Build -> TRIGGERS -> Click on **CREATE TRIGGER**
- **Name:** myapp1-ci
- **Region:** us-central1
- **Description:** myapp1 Continuous Integration Pipeline
- **Tags:** environment=dev
- **Event:** Push to a branch
- **Source:** myapp1-app-repo
- **Branch:** main (Auto-populated)
- **Configuration:** Cloud Build configuration file (yaml or json)
- **Location:** Repository
- **Cloud Build Configuration file location:** /cloudbuild.yaml
- **Approval:** leave unchecked
- **Service account:** leave to default
- Click on **CREATE**

## Step-18: Make a simple change in "index.html" and push the changes to Git Repo
```t
# Change Directory
cd myapp1-app-repo

# Update file index.html (change V1 to V2)
<p>Application Version: V2</p>

# Local Git Commit and Push to Remote Repo
git status
git add .
git commit -am "V2 Commit"
git push

# Verify in Git Remote Repo
Search for -> Source Repositories
https://source.cloud.google.com/repos
Go to Repo -> myapp1-app-repo
```

## Step-19: Verify Cloud Build CI Pipeline
```t
# Verify Cloud Build
1. Go to Cloud Build -> Dashboard or go directly to Cloud Build -> History
2. Click on Build History -> View All
3. Verify "BUILD LOG"
4. Verify "EXECUTION DETAILS"
5. Verify "VIEW RAW"

# Verify Artifact Repository
1. Go to Artifact Registry -> myapps-repository -> myapp1
2. You should find the docker image pushed to Artifact Registry
```

## Step-20: Review Kubernetes Manifests
- **Project Folder:** 04-kube-manifests
  - 01-kubernetes-deployment.yaml
  - 02-kubernetes-loadBalancer-service.yaml

## Step-21: Update Container Image to V1 Docker Image we built
```yaml
# 01-kubernetes-deployment.yaml: Update "image"
    spec:
      containers: # List
        - name: myapp1-container
          image: us-central1-docker.pkg.dev/kdaida123/myapps-repository/myapp1:d1c3b88
          ports:
            - containerPort: 80
```

## Step-22: Deploy Kubernetes Manifests and Verify
```t
# Change Directory
You should be in the Course Content folder
google-kubernetes-engine/<RESPECTIVE-SECTION>

# Deploy Kubernetes Manifests
kubectl apply -f 04-kube-manifests

# List Deployments
kubectl get deploy

# List Pods
kubectl get pods

# Describe Pod (Review Events to understand from where Docker Image downloaded)
kubectl describe pod <POD-NAME>

# List Services
kubectl get svc

# Access Application
http://<EXTERNAL-IP-GET-SERVICE-OUTPUT>
Observation:
1. You should see "Application Version: V1"
```

## Step-23: Update Container Image to V2 Docker Image we built
```yaml
# 01-kubernetes-deployment.yaml: Update "image"
    spec:
      containers: # List
        - name: myapp1-container
          image: us-central1-docker.pkg.dev/kdaida123/myapps-repository/myapp1:3af592c
          ports:
            - containerPort: 80
```

## Step-24: Update Kubernetes Deployment and Verify
```t
# Deploy Kubernetes Manifests (Updated Image Tag)
kubectl apply -f 04-kube-manifests

# Restart Kubernetes Deployment (Optional - if it is not updated)
kubectl rollout restart deployment myapp1-deployment

# List Deployments
kubectl get deploy

# List Pods
kubectl get pods

# Describe Pod (Review Events to understand from where Docker Image downloaded)
kubectl describe pod <POD-NAME>

# List Services
kubectl get svc

# Access Application
http://<EXTERNAL-IP-GET-SERVICE-OUTPUT>
Observation:
1. You should see "Application Version: V2"
```

## Step-25: Clean-Up
```t
# Delete Kubernetes Resources
kubectl delete -f 04-kube-manifests
```

## Step-26: How to add Approvals before starting the Build Process?
### Step-26-01: Enable Approval in Cloud Build
- Go to Cloud Build -> Triggers -> myapp1-ci
- Check the box in **Approval: Require approval before build executes**

### Step-26-02: Add Users to Cloud Build Approver IAM Role
- Go to IAM & Admin -> GRANT ACCESS
- **Add Principal:** dkalyanreddy@gmail.com
- **Assign Roles:** Cloud Build Approver
- Click on **SAVE**

## Step-27: Update the Git Repo to test Build Approval Process
```t
# Change Directory
cd myapp1-app-repo

# Update file index.html (change V2 to V3)
<p>Application Version: V3</p>

# Local Git Commit and Push to Remote Repo
git status
git add .
git commit -am "V3 Commit"
git push

# Verify in Git Remote Repo
Search for -> Source Repositories
https://source.cloud.google.com/repos
Go to Repo -> myapp1-app-repo
```

## Step-28: Verify and Approve the Build
- Go to Cloud Build -> Triggers -> myapp1-ci -> Select and Approve
- Verify if the build is successful.

## References
- [Cloud Build for Docker Images](https://cloud.google.com/kubernetes-engine/docs/tutorials/gitops-cloud-build)
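- If you prefer to verify the CI run from the CLI instead of the console (Step-19), a minimal sketch; the project and repository names are the ones used above, and flag support may vary slightly between gcloud versions:
```t
# List recent Cloud Build runs (status, source, duration)
gcloud builds list --limit=5

# List the images the pipeline pushed to the Artifact Registry repository
gcloud artifacts docker images list us-central1-docker.pkg.dev/kdaida123/myapps-repository --include-tags
```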
---
title: Kubernetes ReplicaSets
description: Learn about Kubernetes ReplicaSets
---

## Step-01: Introduction to ReplicaSets
- What are ReplicaSets?
- What is the advantage of using ReplicaSets?

## Step-02: Create ReplicaSet
### Step-02-01: Create ReplicaSet
- Create ReplicaSet
```t
# Kubernetes ReplicaSet
kubectl create -f replicaset-demo.yml
```
- **replicaset-demo.yml**
```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-helloworld-rs
  labels:
    app: my-helloworld
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-helloworld
  template:
    metadata:
      labels:
        app: my-helloworld
    spec:
      containers:
        - name: my-helloworld-app
          image: stacksimplify/kube-helloworld:1.0.0
```

### Step-02-02: List ReplicaSets
- Get list of ReplicaSets
```t
# List ReplicaSets
kubectl get replicaset
kubectl get rs
```

### Step-02-03: Describe ReplicaSet
- Describe the newly created ReplicaSet
```t
# Describe ReplicaSet
kubectl describe rs/<replicaset-name>
kubectl describe rs/my-helloworld-rs
[or]
kubectl describe rs my-helloworld-rs
```

### Step-02-04: List of Pods
- Get list of Pods
```t
# Get list of Pods
kubectl get pods
kubectl describe pod <pod-name>

# Get list of Pods with Pod IP and Node in which it is running
kubectl get pods -o wide
```

### Step-02-05: Verify the Owner of the Pod
- Verify the owner reference of the pod.
- Verify the **"name"** tag under **"ownerReferences"**. We will find the ReplicaSet name to which this pod belongs.
```t
# List Pod with Output as YAML
kubectl get pods <pod-name> -o yaml
kubectl get pods my-helloworld-rs-c8rrj -o yaml
```

## Step-03: Expose ReplicaSet as a Service
- Expose ReplicaSet with a service (Load Balancer Service) to access the application externally (from internet)
```t
# Expose ReplicaSet as a Service
kubectl expose rs <ReplicaSet-Name> --type=LoadBalancer --port=80 --target-port=8080 --name=<Service-Name-To-Be-Created>
kubectl expose rs my-helloworld-rs --type=LoadBalancer --port=80 --target-port=8080 --name=my-helloworld-rs-service

# List Services
kubectl get service
kubectl get svc
```
- **Access the Application using External or Public IP**
```t
# Access Application
http://<External-IP-from-get-service-output>/hello
curl http://<External-IP-from-get-service-output>/hello

# Observation
1. Each time we access the application, the request will be sent to a different pod and that pod's id will be displayed for us.
```

## Step-04: Test ReplicaSet Reliability or High Availability
- Test how the high availability or reliability concept is achieved automatically in Kubernetes
- Whenever a POD is accidentally terminated due to some application issue, the ReplicaSet should auto-create that Pod to maintain the desired number of Replicas configured, to achieve High Availability.
```t
# To get Pod Name
kubectl get pods

# Delete the Pod
kubectl delete pod <Pod-Name>

# Verify the new pod got created automatically
kubectl get pods   (Verify Age and name of new pod)
```

## Step-05: Test ReplicaSet Scalability feature
- Test how scalability is going to be seamless & quick
- Update the **replicas** field in **replicaset-demo.yml** from 3 to 6.
```yaml # Before change spec: replicas: 3 # After change spec: replicas: 6 ``` - Update the ReplicaSet ```t # Apply latest changes to ReplicaSet kubectl replace -f replicaset-demo.yml # Verify if new pods got created kubectl get pods -o wide ``` ## Step-06: Delete ReplicaSet & Service ### Step-06-01: Delete ReplicaSet ```t # Delete ReplicaSet kubectl delete rs <ReplicaSet-Name> # Sample Commands kubectl delete rs/my-helloworld-rs [or] kubectl delete rs my-helloworld-rs # Verify if ReplicaSet got deleted kubectl get rs ``` ### Step-06-02: Delete Service created for ReplicaSet ```t # Delete Service kubectl delete svc <service-name> # Sample Commands kubectl delete svc my-helloworld-rs-service [or] kubectl delete svc/my-helloworld-rs-service # Verify if Service got deleted kubectl get svc ```
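- As a quick CLI alternative to editing the YAML in Step-05, and to confirm the owner reference from Step-02-05 without reading the full Pod YAML, a minimal sketch (the pod name is the sample one used above; replace it with a pod from your cluster):
```t
# Scale the ReplicaSet imperatively (alternative to editing replicaset-demo.yml)
kubectl scale rs my-helloworld-rs --replicas=6
kubectl get rs my-helloworld-rs

# Print only the owning ReplicaSet of a Pod
kubectl get pod my-helloworld-rs-c8rrj -o jsonpath='{.metadata.ownerReferences[0].name}'
```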
---
title: GCP Google Kubernetes Engine GKE Ingress SSL Redirect
description: Implement GCP Google Kubernetes Engine GKE Ingress SSL Redirect
---

## Step-00: Pre-requisites
1. Verify if GKE Cluster is created
2. Verify if kubeconfig for kubectl is configured in your local terminal
```t
# Configure kubeconfig for kubectl
gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT>

# Replace Values CLUSTER-NAME, REGION, PROJECT
gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123
```
3. Registered Domain using Google Cloud Domains
4. The DNS name for which the SSL Certificate should be created should already be added as DNS in Google Cloud DNS (Example: demo1.kalyanreddydaida.com)

## Step-01: Introduction
- Google Managed Certificates for GKE Ingress
- Ingress SSL
- Ingress SSL Redirect (HTTP to HTTPS)

## Step-02: 06-frontendconfig.yaml
```yaml
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontend-config
spec:
  redirectToHttps:
    enabled: true
    #responseCodeName: RESPONSE_CODE
```

## Step-03: 04-Ingress-SSL.yaml
- Add the Annotation `networking.gke.io/v1beta1.FrontendConfig: "my-frontend-config"`
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-ssl
  annotations:
    # External Load Balancer
    kubernetes.io/ingress.class: "gce"
    # Static IP for Ingress Service
    kubernetes.io/ingress.global-static-ip-name: "gke-ingress-extip1"
    # Google Managed SSL Certificates
    networking.gke.io/managed-certificates: managed-cert-for-ingress
    # SSL Redirect HTTP to HTTPS
    networking.gke.io/v1beta1.FrontendConfig: "my-frontend-config"
spec:
  defaultBackend:
    service:
      name: app3-nginx-nodeport-service
      port:
        number: 80
  rules:
    - http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1-nginx-nodeport-service
                port:
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2-nginx-nodeport-service
                port:
                  number: 80
```

## Step-04: Deploy kube-manifests and Verify
- We didn't clean up the Kubernetes resources from the previous `Ingress SSL` demo.
- We are going to use them here; compared to the previous demo, in this demo we are just adding `06-frontendconfig.yaml`.
```t
# Deploy Kubernetes manifests
kubectl apply -f kube-manifests
Observation:
1. Only "my-frontend-config" will be created, rest all unchanged

### Sample Output
Kalyans-Mac-mini:38-GKE-Ingress-Google-Managed-SSL-Redirect kalyanreddy$ kubectl apply -f kube-manifests/
deployment.apps/app1-nginx-deployment unchanged
service/app1-nginx-nodeport-service unchanged
deployment.apps/app2-nginx-deployment unchanged
service/app2-nginx-nodeport-service unchanged
deployment.apps/app3-nginx-deployment unchanged
service/app3-nginx-nodeport-service unchanged
ingress.networking.k8s.io/ingress-ssl configured
managedcertificate.networking.gke.io/managed-cert-for-ingress unchanged
frontendconfig.networking.gke.io/my-frontend-config created
Kalyans-Mac-mini:38-GKE-Ingress-Google-Managed-SSL-Redirect kalyanreddy$

# List FrontendConfig
kubectl get frontendconfig

# Describe FrontendConfig
kubectl describe frontendconfig my-frontend-config

# List Ingress Load Balancers
kubectl get ingress

# Describe Ingress and view Rules
kubectl describe ingress ingress-ssl
```

## Step-05: Access Application
```t
# Important Note
Wait for 2 to 3 minutes for the Load Balancer to be fully created and ready for use, else we will get HTTP 502 errors

# Access Application
http://<DNS-DOMAIN-NAME>/app1/index.html
http://<DNS-DOMAIN-NAME>/app2/index.html
http://<DNS-DOMAIN-NAME>/

# Note: Replace Domain Name registered in Cloud DNS
# HTTP URLs: Should redirect to HTTPS URLs
http://demo1.kalyanreddydaida.com/app1/index.html
http://demo1.kalyanreddydaida.com/app2/index.html
http://demo1.kalyanreddydaida.com/

# HTTPS URLs
https://demo1.kalyanreddydaida.com/app1/index.html
https://demo1.kalyanreddydaida.com/app2/index.html
https://demo1.kalyanreddydaida.com/
```

## Step-06: Clean Up
```t
# Delete Kubernetes Resources
kubectl delete -f kube-manifests

# Verify Load Balancer Deleted
Go to Network Services -> Load Balancing -> No Load balancers should be present
```
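- To confirm the HTTP to HTTPS redirect from the command line (Step-05) rather than a browser, a minimal sketch using curl; replace the domain with the one registered in your Cloud DNS:
```t
# Request the HTTP URL and show only the response headers
curl -I http://demo1.kalyanreddydaida.com/app1/index.html
# Expected: a 301/302 response with a "Location: https://..." header

# Follow the redirect end-to-end and show the final HTTPS response headers
curl -IL http://demo1.kalyanreddydaida.com/app1/index.html
```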
--- title: GCP Google Kubernetes Engine GKE Ingress ClientIP Affinity description: Implement GCP Google Kubernetes Engine GKE Ingress ClientIP Affinity --- ## Step-00: Pre-requisites 1. Verify if GKE Cluster is created 2. Verify if kubeconfig for kubectl is configured in your local terminal ```t # Configure kubeconfig for kubectl gcloud container clusters get-credentials <CLUSTER-NAME> --region <REGION> --project <PROJECT> # Replace Values CLUSTER-NAME, REGION, PROJECT gcloud container clusters get-credentials standard-cluster-private-1 --region us-central1 --project kdaida123 # List Kubernetes Nodes kubectl get nodes ``` 3. ExternalDNS Controller should be installed and ready to use ```t # List Namespaces (external-dns-ns namespace should be present) kubectl get ns # List External DNS Pods kubectl -n external-dns-ns get pods ``` ## Step-01: Introduction - Implement following Features for Ingress Service - BackendConfig - CLIENT_IP Affinity for Ingress Service - We are going to create two projects - **Project-01:** CLIENT_IP Affinity enabled - **Project-02:** CLIENT_IP Affinity disabled ## Step-02: Create External IP Address using gcloud ```t # Create External IP Address 1 (IF NOT CREATED - ALREADY CREATED IN PREVIOUS SECTIONS) gcloud compute addresses create ADDRESS_NAME --global gcloud compute addresses create gke-ingress-extip1 --global # Create External IP Address 2 gcloud compute addresses create ADDRESS_NAME --global gcloud compute addresses create gke-ingress-extip2 --global # Describe External IP Address to get gcloud compute addresses describe ADDRESS_NAME --global gcloud compute addresses describe gke-ingress-extip2 --global # Verify Go to VPC Network -> IP Addresses -> External IP Address ``` ## Step-03: Project-01: Review YAML Manifests - **Project Folder:** 01-kube-manifests-with-clientip-affinity - 01-kubernetes-deployment.yaml - 02-kubernetes-NodePort-service.yaml - 03-ingress.yaml - 04-backendconfig.yaml ```yaml apiVersion: cloud.google.com/v1 kind: BackendConfig metadata: name: my-backendconfig spec: timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout drainingTimeoutSec: 62 logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging enable: true sampleRate: 1.0 sessionAffinity: affinityType: "CLIENT_IP" ``` ## Step-04: Project-02: Review YAML Manifests - **Project Folder:** 02-kube-manifests-without-clientip-affinity - 01-kubernetes-deployment.yaml - 02-kubernetes-NodePort-service.yaml - 03-ingress.yaml - 04-backendconfig.yaml ```yaml apiVersion: cloud.google.com/v1 kind: BackendConfig metadata: name: my-backendconfig2 spec: timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout drainingTimeoutSec: 62 logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging enable: true sampleRate: 1.0 ``` ## Step-05: Deploy Kubernetes Manifests ```t # Project-01: Deploy Kubernetes Manifests kubectl apply -f 01-kube-manifests-with-clientip-affinity # Project-02: Deploy Kubernetes Manifests kubectl apply -f 02-kube-manifests-without-clientip-affinity # Verify 
Deployments kubectl get deploy # Verify Pods kubectl get pods # Verify Node Port Services kubectl get svc # Verify Ingress Services kubectl get ingress # Verify Backend Config kubectl get backendconfig # Project-01: Verify Load Balancer Settings Go to Network Services -> Load Balancing -> Load Balancer -> Backends -> Verify Client IP Affinity Setting Observation: Client IP Affinity setting should be in enabled state # Project-02: Verify Load Balancer Settings Go to Network Services -> Load Balancing -> Load Balancer -> Backends -> Verify Client IP Affinity Setting Client IP Affinity setting should be in disabled state ``` ## Step-06: Access Application ```t # Project-01: Access Application using DNS or ExtIP http://ingress-with-clientip-affinity.kalyanreddydaida.com http://<EXT-IP-1> curl ingress-with-clientip-affinity.kalyanreddydaida.com Observation: 1. Request will keep going always to only one POD due to CLIENT_IP Affinity we configured # Project-02: Access Application using DNS or ExtIP http://ingress-without-clientip-affinity.kalyanreddydaida.com http://<EXT-IP-2> curl ingress-without-clientip-affinity.kalyanreddydaida.com Observation: 1. Requests will be load balanced to 4 pods created as part of "cdn-demo2" deployment. ``` ## Step-07: How to remove a setting from FrontendConfig or BackendConfig - To revoke an Ingress feature, you must explicitly disable the feature configuration in the FrontendConfig or BackendConfig CRD - **Important Note:** To clear or disable a previously enabled configuration, set the field's value to an empty string ("") or to a Boolean value of false, depending on the field type. ```yaml apiVersion: cloud.google.com/v1 kind: BackendConfig metadata: name: my-backendconfig spec: timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout drainingTimeoutSec: 62 logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging enable: true sampleRate: 1.0 sessionAffinity: #affinityType: "CLIENT_IP" # Disable at Step-07 affinityType: "" # Enable at Step-07 ``` ## Step-08: Apply Changes and Verify ```t # Apply Changes kubectl apply -f 01-kube-manifests-with-clientip-affinity # Verify Load Balancer Go to Network Services -> Load Balancing -> Load Balancer -> Backends -> Verify Client IP Affinity Setting Observation: Should be disabled ``` ## Step-09: Deleting a FrontendConfig or BackendConfig - [Deleting a FrontendConfig or BackendConfig](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#deleting_a_frontendconfig_or_backendconfig) ## Step-10: Clean-Up ```t # Project-01: Delete Kubernetes Resources kubectl delete -f 01-kube-manifests-with-clientip-affinity # Project-02: Delete Kubernetes Resources kubectl delete -f 02-kube-manifests-without-clientip-affinity ``` ## Step-11: Rollback 04-backendconfig.yaml - Put back `affinityType: "CLIENT_IP"` it will be ready for Students Demo. 
```yaml apiVersion: cloud.google.com/v1 kind: BackendConfig metadata: name: my-backendconfig spec: timeoutSec: 42 # Backend service timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#timeout connectionDraining: # Connection draining timeout: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#draining_timeout drainingTimeoutSec: 62 logging: # HTTP access logging: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#http_logging enable: true sampleRate: 1.0 sessionAffinity: affinityType: "CLIENT_IP" # Disable at Step-07 #affinityType: "" # Enable at Step-07 ``` ## References - [Ingress Features](https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features)
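- To observe the Step-06 behavior difference between the two projects without a browser, a small curl loop that sends repeated requests; the hostnames are the sample DNS names used above, and the `grep` pattern is an assumption about what the demo app prints (adjust it to the actual response body):
```t
# Project-01 (CLIENT_IP affinity): every request should land on the same pod
for i in {1..10}; do curl -s http://ingress-with-clientip-affinity.kalyanreddydaida.com | grep -i hostname; done

# Project-02 (no affinity): requests should spread across the pods
for i in {1..10}; do curl -s http://ingress-without-clientip-affinity.kalyanreddydaida.com | grep -i hostname; done
```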
--- title: Kubernetes Pods with YAML description: Learn to write and test Kubernetes Pods with YAML --- ## Step-01: Kubernetes YAML Top level Objects - Discuss about the k8s YAML top level objects - **kube-base-definition.yml** ```yml apiVersion: kind: metadata: spec: ``` - [Kubernetes Reference](https://kubernetes.io/docs/reference/) - [Kubernetes API Reference](https://kubernetes.io/docs/reference/kubernetes-api/) - [Pod API Objects Reference](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#pod-v1-core) ## Step-02: Create Simple Pod Definition using YAML - We are going to create a very basic pod definition - **01-pod-definition.yml** ```yaml apiVersion: v1 # String kind: Pod # String metadata: # Dictionary name: myapp-pod labels: # Dictionary app: myapp spec: containers: # List - name: myapp image: stacksimplify/kubenginx:1.0.0 ports: - containerPort: 80 ``` - **Create Pod** ```t # Change Directory cd kube-manifests # Create Pod kubectl create -f 01-pod-definition.yml [or] kubectl apply -f 01-pod-definition.yml # List Pods kubectl get pods ``` ## Step-03: Create a LoadBalancer Service - **02-pod-LoadBalancer-service.yml** ```yaml apiVersion: v1 kind: Service metadata: name: myapp-pod-loadbalancer-service # Name of the Service spec: type: LoadBalancer selector: # Loadbalance traffic across Pods matching this label selector app: myapp # Accept traffic sent to port 80 ports: - name: http port: 80 # Service Port targetPort: 80 # Container Port ``` - **Create LoadBalancer Service for Pod** ```t # Create Service kubectl apply -f 02-pod-LoadBalancer-service.yml # List Service kubectl get svc # Access Application http://<Load-Balancer-Service-IP> curl http://<Load-Balancer-Service-IP> ``` ## Step-04: Clean-Up Kubernetes Pod and Service ```t # Change Directory cd kube-manifests # Delete Pod kubectl delete -f 01-pod-definition.yml # Delete Service kubectl delete -f 02-pod-LoadBalancer-service.yml ``` ## API Object References - [Kubernetes API Spec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/) - [Pod Spec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#pod-v1-core) - [Service Spec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#service-v1-core) - [Kubernetes API Reference](https://kubernetes.io/docs/reference/kubernetes-api/)
After you've built a successful prototype of a machine learning model, there's still plenty of things to do. To some extent, your journey as an ML engineer only begins here. You'd need to take care of plenty of things such as operationalization of your model: monitoring, CI/CD, reliability and reproducibility, and many others. What's important is that you should expect other engineers to be working on your code and occasionally changing it. Your code should be readable and maintainable. One way to achieve this is splitting it into smaller pieces (separate functions and classes) that are covered by unit tests. This helps you to find errors earlier (before submitting a training job or - even worse - deploying a model to production only to find out serving is broken because of a typo).

Get to know the recommended [practices](https://www.tensorflow.org/community/contribute/tests) for testing TensorFlow code. It's also typically a good idea to look at test case examples in the TensorFlow source code.

Use the [tf.test.TestCase](https://www.tensorflow.org/api_docs/python/tf/test/TestCase) class to implement your unit tests. Get to know this class and how it makes your life easier - e.g., it takes care of using the same random seed (to make your tests more stable), has a lot of useful assertions, and takes care of creating temp dirs and managing TensorFlow sessions.

Start simple and keep extending your test coverage as your model gets more complex and needs more debugging. It's typically a good idea to add a unit test each time you fix a specific error, to make sure this error won't occur again.

## Testing custom layers

When you implement custom training routines, the recommended practice is to create [custom layers](https://www.tensorflow.org/guide/keras/custom_layers_and_models) by subclassing [tf.keras.layers.Layer](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Layer). This gives you the possibility to isolate (and test) logical pieces of your code, which makes it more readable and easier to maintain.

Let's play with `LinearBlock` - a simple custom layer. First, we'd like to test whether the shape of the output tensor is the one we expect.

```python
class LinearBlockTest(tf.test.TestCase):

  def test_shape_default(self):
    x = np.ones((4, 32))
    layer = example.LinearBlock()
    output = layer(x)
    self.assertAllEqual(output.shape, (4, 32))
```

You can find more examples of testing the output shape by exploring the `LinearBlockTest` class.

The next thing we can check is the actual output. It's not always needed, but it might be a good idea when you have a layer with custom logic that needs to be double-checked. Please note that despite using initializers for weights, our test is not flaky (tf.test.TestCase takes care of it). We can also patch various pieces of the layer we're concerned about (other layers used, loss functions, stdout, etc.) to check the desired output.
Let's have a look at an example where we've patched the initializer:

```python
@patch.object(initializers, 'get', lambda _: tf.compat.v1.keras.initializers.Ones)
def test_output_ones(self):
  dim = 4
  batch_size = 3
  output_dim = 2
  x = np.ones((batch_size, dim))
  layer = example.LinearBlock(output_dim)
  output = layer(x)
  expected_output = np.ones((batch_size, output_dim)) * (dim + 1)
  self.assertAllClose(output, expected_output, atol=1e-4)
```

## Testing custom keras models

The easiest way to test your model is to prepare a small fake dataset and run a few training steps on it to check whether the model can be successfully trained and whether validation and prediction also work. Please keep in mind that "successfully" here means all steps can be run without generating errors; in the basic case we don't check whether the training itself makes sense - i.e., whether the loss decreases to any meaningful value. But more about that later.

Let's have a look at a simple example of how to test a model from this [tutorial](https://www.tensorflow.org/tutorials/keras/regression).

```python
class ExampleModelTest(tf.test.TestCase):

  def _get_data(self):
    dataset_path = tf.keras.utils.get_file(
        'auto-mpg.data',
        'http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data')
    column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower', 'Weight',
                    'Acceleration', 'Model Year', 'Origin']
    dataset = pd.read_csv(dataset_path, names=column_names, na_values='?',
                          comment='\t', sep=' ', skipinitialspace=True)
    dataset = dataset.dropna()
    dataset['Origin'] = dataset['Origin'].map({1: 'USA', 2: 'Europe', 3: 'Japan'})
    dataset = pd.get_dummies(dataset, prefix='', prefix_sep='')
    dataset = dataset[dataset.columns].astype('float64')
    labels = dataset.pop('MPG')
    return dataset, labels

  def test_basic(self):
    train_dataset, train_labels = self._get_data()
    dim = len(train_dataset.keys())
    example_model = example.get_model(dim)
    test_ind = train_dataset.sample(10).index
    test_dataset, test_labels = train_dataset.iloc[test_ind], train_labels.iloc[test_ind]
    history = example_model.fit(
        train_dataset, train_labels,
        steps_per_epoch=2, epochs=2, batch_size=10,
        validation_split=0.1, validation_steps=1)
    self.assertAlmostEqual(
        history.history['mse'][-1], history.history['loss'][-1], places=2)
    _ = example_model.evaluate(test_dataset, test_labels)
    _ = example_model.predict(test_dataset)
```

You can find additional examples by looking at the `ExampleModelTest` class.

## Testing custom estimators

If we're using the tf.estimator API, we can use the same approach. There are a few things to keep in mind, though. First, you might want to convert your pandas DataFrame (read from CSV or another source) into a [tf.data.Dataset](https://www.tensorflow.org/api_docs/python/tf/data/Dataset) to make a dataset for testing purposes:

```python
# Separate the label column from the features before building the dataset.
labels = df.pop(LABEL)
faked_dataset = tf.data.Dataset.from_tensor_slices((dict(df), labels.values))
return faked_dataset.repeat().batch(batch_size)
```

A TensorFlow training routine consists of two main pieces - first, the graph is created and compiled, and then the same computational graph is run over the input data step by step. You can test whether the graph can be compiled in train/eval/... mode by just invoking the `model_fn`:

```python
...
features = make_faked_batch(...)
model_fn(features, {}, tf.estimator.ModeKeys.TRAIN)
```

Or you can actually create an estimator and run a few training steps on the faked dataset you've prepared:

```python
...
e = tf.estimator.Estimator(model_fn, config=tf.estimator.RunConfig(model_dir=self.get_temp_dir()))
e.train(input_fn=lambda: make_faked_batch(...), steps=3)
```

## Testing model logic

All of the above only guarantees that our model is formally correct (i.e., tensor input and output shapes match one another, and there are no typos or other formal errors in the code). Having these unit tests is typically a huge step forward since it speeds up the model development process. We still might want to extend the test coverage (a minimal sketch of one such extension follows the list below), but it's also worth looking into other possibilities, for example:

* use [TensorFlow Data Validation](https://www.tensorflow.org/tfx/tutorials/data_validation/tfdv_basic) to inspect your data for anomalies and skewness
* use [TensorFlow Model Analysis](https://www.tensorflow.org/tfx/tutorials/model_analysis/tfma_basic) to check the performance of your trained model
* have a look at how [PerfZero](https://github.com/tensorflow/benchmarks/tree/master/perfzero) helps debug and track TensorFlow model performance with the help of [tf.test.Benchmark](https://www.tensorflow.org/api_docs/python/tf/test/Benchmark)
* implement integration tests if needed
* consider adding unit tests for your tfx/kubeflow pipelines
* implement proper monitoring and alerting for your ML models
* check additional materials, e.g. [What is your ML test score](https://research.google/pubs/pub45742/) and the [Testing & Debugging ML systems](https://developers.google.com/machine-learning/testing-debugging) course
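One cheap extension beyond formal correctness is a training sanity check: train for a few epochs on a tiny synthetic dataset and assert that the loss actually goes down. The snippet below is a minimal sketch of such a test; the model and data here are hypothetical stand-ins and are not part of the example code referenced above.

```python
import numpy as np
import tensorflow as tf


class TrainingSanityTest(tf.test.TestCase):

  def test_loss_decreases_on_tiny_dataset(self):
    # Tiny synthetic dataset: the label is 1 when the feature sum is positive.
    np.random.seed(0)
    x = np.random.normal(size=(64, 8)).astype('float32')
    y = (x.sum(axis=1) > 0).astype('float32')

    # Hypothetical stand-in for the model under test.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation='relu'),
        tf.keras.layers.Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy')

    history = model.fit(x, y, epochs=5, batch_size=8, verbose=0)

    # After a few epochs the loss should be lower than at the start.
    self.assertLess(history.history['loss'][-1], history.history['loss'][0])


if __name__ == '__main__':
  tf.test.main()
```

Such a check is still not a statistical guarantee of model quality (that is what tools like TensorFlow Model Analysis are for), but it catches silent learning failures, e.g. a frozen layer or a mis-wired loss, much earlier.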
# gRPC Example

This example creates a gRPC server that connects to Redis to find the name of a user for a given user id.

## Application Project Structure

```
.
└── grpc_example_redis
    └── src
        └── main
            ├── java
                └── com.example.grpc
                    ├── client
                        └── ConnectClient # Example Client
                    ├── server
                        └── ConnectServer # Initializes gRPC Server
                    └── service
                        ├── ConnectServiceImpl # Implementation of rpc services in the proto
                        └── RedisUtil # Tool to initialize redis
            └── proto
                └── connect_service.proto # Proto definition of the server
    ├── pom.xml
    └── README.md
```

## Technology Stack
1. gRPC
2. Redis

## Setup Instructions

### Redis Emulator Setup

#### Quick Start
Reference guide can be found [here](https://redis.io/topics/quickstart)

#### Installation
```
$ wget http://download.redis.io/redis-stable.tar.gz
$ tar xvzf redis-stable.tar.gz
$ cd redis-stable
$ make
$ make test
$ make install
```

#### Start the Server
```
$ redis-server
[28550] 01 Aug 19:29:28 # Warning: no config file specified, using the default config. In order to specify a config file use 'redis-server /path/to/redis.conf'
[28550] 01 Aug 19:29:28 * Server started, Redis version 2.2.12
[28550] 01 Aug 19:29:28 * The server is now ready to accept connections on port 6379
... more logs ...
```

#### Test the Server
```
$ redis-cli
redis 127.0.0.1:6379> ping
PONG
```

#### Set Data
```
redis 127.0.0.1:6379> set 1234 MyName
OK
redis 127.0.0.1:6379> get 1234
"MyName"
```

#### Set environment variables
```
$ export REDIS_HOST=127.0.0.1
$ export REDIS_PORT=6379
$ export REDIS_MAX_TOTAL_CONNECTIONS=128
```

## Usage

### Initialize the server
```
$ mvn -DskipTests package exec:java -Dexec.mainClass=com.example.grpc.server.ConnectServer
```

### Run the Client
```
$ mvn -DskipTests package exec:java -Dexec.mainClass=com.example.grpc.client.ConnectClient
```
# Dataflow PubSub XML to Google cloud storage sample pipeline

## License
Copyright 2023 Google LLC

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

# 1. Getting started

## Create a new Google Cloud project

**It is recommended to go through this walkthrough using a new, temporary Google Cloud project, unrelated to any of your existing Google Cloud projects.**

See https://cloud.google.com/resource-manager/docs/creating-managing-projects for more details. For a quick reference, please follow these steps:

1. Open the [Cloud Platform Console][cloud-console].
2. In the drop-down menu at the top, select **Create a project**.
3. Give your project a name = <CHANGE_ME>
4. Save your project's name to an environment variable for ease of use:
```
export PROJECT=<CHANGE_ME>
```

[cloud-console]: https://console.cloud.google.com/

# 2. Configure a local environment

## Set up the test environment
```
python -m venv dataflow_pub_sub_xml_to_gcs
source ./dataflow_pub_sub_xml_to_gcs/bin/activate  # Linux, Mac
\path\to\env\Scripts\activate                      # Windows
pip install -q --upgrade pip setuptools wheel
pip install 'apache-beam[gcp]'
```

If you're running this on an Apple Silicon Mac and face issues when running, please run the following commands to build the _grpcio_ library from source:
```
pip uninstall grpcio
export GRPC_PYTHON_LDFLAGS=" -framework CoreFoundation"
pip install grpcio --no-binary :all:
```

# 3. Configure the cloud environment

## Setting Google Application Default Credentials

Set your [Google Application Default Credentials][application-default-credentials] by [initializing the Google Cloud SDK][cloud-sdk-init] with the command:
```
gcloud init
```

Generate a credentials file by running the [application-default login](https://cloud.google.com/sdk/gcloud/reference/auth/application-default/login) command:
```
gcloud auth application-default login
```

Make sure to enable the necessary APIs:
```
gcloud services enable dataflow.googleapis.com compute.googleapis.com logging.googleapis.com storage-component.googleapis.com storage-api.googleapis.com pubsub.googleapis.com cloudresourcemanager.googleapis.com cloudscheduler.googleapis.com
```

[cloud-sdk-init]: https://cloud.google.com/sdk/docs/initializing
[application-default-credentials]: https://developers.google.com/identity/protocols/application-default-credentials

## Configure a PubSub topic

### Pubsub Setup

The following [doc](https://cloud.google.com/pubsub/docs/quickstart-console) can be used to set up the topic and optional subscription needed to run this example.

#### Topics

To run this example one topic needs to be created:

1. A topic to publish the XML formatted data
```
export TOPIC_ID=<CHANGE_ME>
gcloud pubsub topics create $TOPIC_ID
```

#### Subscription

**Optionally** You can set up a custom subscription. However, this is not mandatory since the Dataflow PubSub source automatically creates one if a topic is provided.

## Create a GCS bucket

The output will be written to a GCS bucket:
```
export BUCKET_NAME=<CHANGE_ME>
gsutil mb gs://$BUCKET_NAME
```

# 4. Run the test

## Start sending messages to PubSub

Execute the message sending script as follows:
```
python publish2PubSub.py \
--project_id $PROJECT \
--pub_sub_topic_id $TOPIC_ID \
--xml_string XML_STRING \
--message_send_interval MESSAGE_SEND_INTERVAL
```

For example:
```
python publish2PubSub.py \
--project_id $PROJECT \
--pub_sub_topic_id $TOPIC_ID \
--xml_string "<note><to>PubSub</to><from>Test</from><heading>Test</heading><body>Sample body</body></note>" \
--message_send_interval 1
```

## Start the Pipeline

Open up a new terminal and execute the following command:
```
python beamPubSubXml2Gcs.py \
--project_id $PROJECT \
--input_topic_id $TOPIC_ID \
--runner RUNNER \
--window_size WINDOW_SIZE \
--output_path "gs://$BUCKET_NAME/" \
--num_shards NUM_SHARDS
```

For example:
```
python beamPubSubXml2Gcs.py \
--project_id $PROJECT \
--input_topic_id $TOPIC_ID \
--runner DataflowRunner \
--window_size 1.0 \
--gcs_path gs://$BUCKET_NAME/ \
--num_shards 2
```

## Monitor the Dataflow Job

Navigate to https://console.cloud.google.com/dataflow/jobs to locate the job you just created. Clicking on the job will let you navigate to the job monitoring screen.

## Debug the Pipeline

**Optionally** This sample contains the necessary bindings to debug step by step and/or breakpoint this code in VS Code. To do so, please install the VS Code Google Cloud [extension](https://cloud.google.com/code/docs/vscode/install)

## View the output in GCS

List the generated files in the GCS bucket and inspect their contents:
```
gsutil ls gs://${BUCKET_NAME}/output_location/
gsutil cat gs://${BUCKET_NAME}/output_location/*
```

# 5. Clean up

## Remove cloud resources

1. Delete the PubSub topic
```
gcloud pubsub topics delete $TOPIC_ID
```
2. Delete the GCS files
```
gsutil -m rm -rf "gs://${BUCKET_NAME}/output_location/*"
```
3. Remove the GCS bucket
```
gsutil rb gs://${BUCKET_NAME}
```
4. **Optionally** Revoke the authentication credentials that you created, and delete the local credential file.
```
gcloud auth application-default revoke
```
5. **Optionally** Revoke credentials from the gcloud CLI.
```
gcloud auth revoke
```

## Terminate the PubSub streaming

On the terminal where you ran the _publish2PubSub_ script, press _Ctrl+C_ and _Y_ to confirm.
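For orientation, the publishing side that section 4 drives with `publish2PubSub.py` boils down to a small publish loop. The sketch below is a minimal, hypothetical reconstruction (the real script's internals are not shown in this README); it assumes the `google-cloud-pubsub` client library, and the function and parameter names are made up for illustration.

```python
import time

from google.cloud import pubsub_v1


def publish_xml_forever(project_id: str, topic_id: str, xml_string: str, interval_s: float) -> None:
    """Publish the same XML payload to a Pub/Sub topic at a fixed interval."""
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(project_id, topic_id)
    while True:
        # Pub/Sub payloads are bytes, so the XML string must be encoded.
        future = publisher.publish(topic_path, xml_string.encode("utf-8"))
        print(f"Published message id {future.result()}")
        time.sleep(interval_s)


# Example (hypothetical values):
# publish_xml_forever("my-project", "my-topic", "<note><body>Sample body</body></note>", 1.0)
```

The Beam pipeline on the consuming side then windows these messages according to `--window_size` and writes the converted output, split into the requested number of shards, to the GCS path supplied on the command line.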
# CloudML Deep Collaborative Filtering

A simple machine learning system capable of recommending songs given a user as a query, using collaborative filtering and TensorFlow. Unlike classic matrix factorization approaches, using a neural network allows user and item features to be included during training. This example covers how distributed data preprocessing, training, and serving can be done on [Google Cloud Platform](https://cloud.google.com/) (GCP).

Further reading:
- [Neural Collaborative Filtering](https://arxiv.org/abs/1708.05031): A paper on using neural networks instead of matrix factorization to perform collaborative filtering.
- [Deep Neural Networks for YouTube Recommendations](https://ai.google/research/pubs/pub45530): YouTube's approach to recommending millions of videos at low latencies by first generating candidates from multiple models and ranking the candidate pool.

For a fully managed service, check out [Recommendations AI](https://cloud.google.com/recommendations/).

## Setup

Create a new project on GCP and set up GCP credentials:
```shell
gcloud auth login
gcloud auth application-default login
```

Enable the following APIs:
- [Dataflow](http://console.cloud.google.com/apis/api/dataflow.googleapis.com)
- [AI Platform](http://console.cloud.google.com/apis/api/ml.googleapis.com)

Using the `preprocessing/config.example.ini` template, create `preprocessing/config.ini` with the GCP project id fields filled in. Additionally, you will need to create a GCS bucket. This code assumes a bucket exists by the name of `[project-id]-bucket`.

Set up your Python environment:
```shell
python3 -m venv venv
source ./venv/bin/activate
pip install -r requirements.txt
```

## Preprocessing

The data preprocessing pipeline uses the [ListenBrainz](https://console.cloud.google.com/marketplace/details/metabrainz/listenbrainz) dataset hosted on [Cloud Marketplace](https://console.cloud.google.com/marketplace). Data is processed and written to [Google Cloud Storage](https://cloud.google.com/storage/) (GCS) as [TFRecords](https://www.tensorflow.org/tutorials/load_data/tf_records). These files are read using [Cloud DataFlow](https://cloud.google.com/dataflow/).

The steps involved are as follows:

1. Read the data in using the [BigQuery](https://cloud.google.com/bigquery/) query found [here](trainer/query.py). This query cleans the features and creates a label for each unique user-item pair that exists. This label is 1 if a user has listened to a song more than twice and 0 otherwise. Samples are also given weights based on how many interactions there were between the user and item.
2. Using [TensorFlow Transform](https://www.tensorflow.org/tfx/transform/get_started), map each username and product id to an integer value and write the vocabularies to text files. Leave users and items under a set frequency threshold out of the vocabularies.
3. Filter away user-item pairs where either element is outside of its corresponding vocabulary.
4. Split the data into train, validation, and test sets.
5. Write each dataset as TFRecords to GCS.

### Execution

| Command | Description |
|---------|-------------|
| `bin/run.preprocess.local.sh` | Process a sample of the data locally and write outputs to a local directory. |
| `bin/run.preprocess.cloud.sh` | Process the data on GCP using DataFlow and write outputs to a GCS bucket. |
| `bin/run.test.sh` | Run unit tests for the preprocessing pipeline. |

## Training

A [Custom Estimator](https://www.tensorflow.org/guide/custom_estimators) is trained using TensorFlow and [Cloud AI Platform](https://cloud.google.com/ai-platform/) (CAIP). The training steps are as follows:

1. Read TFRecords from GCS and create a `tf.data.Dataset` for each of them that yields data in batches.
2. Use the TensorFlow Transform output from preprocessing to transform usernames and product ids into int ids.
3. Use `user_id`s and `item_id`s to train embeddings.
4. Add item features and create two input layers: one with user embedding vectors, and another with the concatenation of item embedding vectors and item features.
5. Create a user neural net and item neural net from the input layers, ensuring that the final layers are the same size.
6. Compute the cosine similarity between the final layers of the user and item nets. Take the absolute value to get a value between 0 and 1 (a short sketch of this scoring appears at the end of this document).
7. Calculate error using log loss and train the model.
8. Evaluate the model performance by sampling 1000 random items and calculating the average recall@k when each positive sample's item is ranked against these random items for the sample's user.
9. Export a `SavedModel` for use in serving.

### Execution

Training job scripts expect the following argument:
- `MODEL_INPUTS_DIR`: The directory containing the TFRecords from preprocessing.

| Command | Description |
|---------|-------------|
| `bin/run.train.local.sh` | Train the model locally and save model checkpoints to a local model dir. |
| `bin/run.train.cloud.sh` | Train the model on CAIP and save model checkpoints to a GCS bucket. |
| `bin/run.train.tune.sh` | Train the model on CAIP as above, but using hyperparameter tuning. |

Note: `SCALE_TIER` is set to `STANDARD_1` to demonstrate distributed training. However, it can be set to `BASIC` to reduce costs. See [scale tiers](https://cloud.google.com/ml-engine/docs/tensorflow/machine-types).

### Tensorboard

Model training can be monitored on Tensorboard using the following command:
```shell
tensorboard --logdir <path to model dir>/<trial number>
```

Tensorboard's projector, in particular, is very useful for debugging or analyzing embeddings. In the projector tab in Tensorboard, try setting the label to `name`.

## Serving

Models can be hosted on CAIP, which can be used to make online and batch predictions via JSON requests.

1. Upload the `SavedModel` from training to CAIP.
2. Using a file containing a list of usernames, create inputs to pass to the model hosted on CAIP for predictions.
3. Make the predictions.

### Execution

The cloud serving job and prediction job scripts expect the same argument:
- `MODEL_OUTPUTS_DIR`: The model directory containing each model trial.
- `TRIAL` (optional): The trial number to use.

The local serving job expects no arguments, and the local prediction job expects the model version number.

| Command | Description |
|---------|-------------|
| `bin/run.serve.local.sh` | Upload a new version of the recommender model to CAIP using a locally trained model. |
| `bin/run.serve.cloud.sh` | Upload a new version of the recommender model to CAIP using a model trained on CAIP. |
| `bin/run.predict.local.sh` | Using `serving/test.json`, create a prediction job on CAIP after using the local serving script. |
| `bin/run.predict.cloud.sh` | Using `serving/test.json`, create a prediction job on CAIP after using the cloud serving script. |
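For reference, the similarity scoring described in training step 6 can be written in a few lines of TensorFlow. This is a minimal sketch only, not the project's actual code: it assumes `user_net` and `item_net` are the (equally sized) final-layer outputs of the two towers, and the function name is hypothetical.

```python
import tensorflow as tf


def affinity_score(user_net: tf.Tensor, item_net: tf.Tensor) -> tf.Tensor:
    """Cosine similarity between user and item towers, mapped into [0, 1] via abs()."""
    user_vec = tf.math.l2_normalize(user_net, axis=1)
    item_vec = tf.math.l2_normalize(item_net, axis=1)
    cosine_similarity = tf.reduce_sum(user_vec * item_vec, axis=1)
    return tf.abs(cosine_similarity)


# Example with random activations of matching final-layer size:
scores = affinity_score(tf.random.normal([4, 16]), tf.random.normal([4, 16]))
```

Taking the absolute value keeps the score in [0, 1], so it can be fed directly into the log loss described in step 7.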
# Data Format Description Language ([DFDL](https://en.wikipedia.org/wiki/Data_Format_Description_Language)) Processor Example

This module is an example of how to process a binary using a DFDL definition. The DFDL definitions are stored in a Bigtable database. The application sends a request with the binary to process to a pubsub topic. The processor service subscribes to the topic, processes every message, applies the definition and publishes the json result to a topic in pubsub.

## Project Structure

```
.
└── dfdl_example
    ├── examples # Contains a binary and dfdl definition to be used to run this example
    └── src
        └── main
            └── java
                └── com.example.dfdl
                    ├── BigtableServer # Configures bigtable database
                    ├── BigtableService # Reads dfdl definitions from a bigtable database
                    ├── DfdlDef # Embedded entities
                    ├── DfdlService # Processes the binary using a dfdl definition and outputs a json
                    ├── MessageController # Publishes message to a topic with a binary to be processed.
                    ├── ProcessorService # Initializes components, configurations and services.
                    └── PubSubServer # Publishes and subscribes to topics using channel adapters.
            └── resources
                └── application.properties
    └── pom.xml
    └── README.md
```

### Tools

Before you start, it is recommended that you install the following tools:

1. [Google Cloud SDK](https://cloud.google.com/sdk/docs/install)
2. [Cloud Bigtable Tool](https://cloud.google.com/bigtable/docs/cbt-overview)

## Technology Stack

1. Cloud Bigtable
2. Cloud Pubsub

## Frameworks

1. Spring Boot

## Libraries

1. [Apache Daffodil](https://daffodil.apache.org/)

## Setup Instructions

### Project Setup

#### Creating a Project in the Google Cloud Platform Console

If you haven't already created a project, create one now. Projects enable you to manage all Google Cloud Platform resources for your app, including deployment, access control, billing, and services.

1. Open the [Cloud Platform Console][cloud-console].
1. In the drop-down menu at the top, select **Create a project**.
1. Give your project a name = my-dfdl-project
1. Make a note of the project ID, which might be different from the project name. The project ID is used in commands and in configurations.

[cloud-console]: https://console.cloud.google.com/

#### Enabling billing for your project.

If you haven't already enabled billing for your project, [enable billing][enable-billing] now. Enabling billing is required to use Cloud Bigtable and to create VM instances.

[enable-billing]: https://console.cloud.google.com/project/_/settings

#### Install the Google Cloud SDK.

If you haven't already installed the Google Cloud SDK, [install the Google Cloud SDK][cloud-sdk] now. The SDK contains tools and libraries that enable you to create and manage resources on Google Cloud Platform.

[cloud-sdk]: https://cloud.google.com/sdk/

#### Setting Google Application Default Credentials

Set your [Google Application Default Credentials][application-default-credentials] by [initializing the Google Cloud SDK][cloud-sdk-init] with the command:

```
gcloud init
```

Generate a credentials file by running the [application-default login](https://cloud.google.com/sdk/gcloud/reference/auth/application-default/login) command:

```
gcloud auth application-default login
```

[cloud-sdk-init]: https://cloud.google.com/sdk/docs/initializing
[application-default-credentials]: https://developers.google.com/identity/protocols/application-default-credentials

### Bigtable Setup

Instructions for creating a Bigtable instance can be found [here](https://cloud.google.com/bigtable/docs/creating-instance)

#### How to add data to bigtable

The following doc, [Writing to Bigtable](https://cloud.google.com/bigtable/docs/writing-data), can be used to add data to bigtable to run the example.

This example connects to a Cloud Bigtable table with the following specification. The configuration can be changed by changing the application.properties file.

```
Table dfdl-schemas => Column Family dfdl => Column Family Qualifier => binary_example => {
  'name': "dfdl-name"
  'definiton': "<?xml version="1.0" encoding="UTF-8"?>
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
               xmlns:dfdl="http://www.ogf.org/dfdl/dfdl-1.0/"
               targetNamespace="http://example.com/dfdl/helloworld/">
      <xs:include schemaLocation="org/apache/daffodil/xsd/DFDLGeneralFormat.dfdl.xsd" />
      <xs:annotation>
        <xs:appinfo source="http://www.ogf.org/dfdl/">
          <dfdl:format ref="GeneralFormat" />
        </xs:appinfo>
      </xs:annotation>
      <xs:element name="binary_example">
        <xs:complexType>
          <xs:sequence>
            <xs:element name="w" type="xs:int">
              <xs:annotation>
                <xs:appinfo source="http://www.ogf.org/dfdl/">
                  <dfdl:element representation="binary" binaryNumberRep="binary" byteOrder="bigEndian" lengthKind="implicit" />
                </xs:appinfo>
              </xs:annotation>
            </xs:element>
            <xs:element name="x" type="xs:int">
              <xs:annotation>
                <xs:appinfo source="http://www.ogf.org/dfdl/">
                  <dfdl:element representation="binary" binaryNumberRep="binary" byteOrder="bigEndian" lengthKind="implicit" />
                </xs:appinfo>
              </xs:annotation>
            </xs:element>
            <xs:element name="y" type="xs:double">
              <xs:annotation>
                <xs:appinfo source="http://www.ogf.org/dfdl/">
                  <dfdl:element representation="binary" binaryFloatRep="ieee" byteOrder="bigEndian" lengthKind="implicit" />
                </xs:appinfo>
              </xs:annotation>
            </xs:element>
            <xs:element name="z" type="xs:float">
              <xs:annotation>
                <xs:appinfo source="http://www.ogf.org/dfdl/">
                  <dfdl:element representation="binary" byteOrder="bigEndian" lengthKind="implicit" binaryFloatRep="ieee" />
                </xs:appinfo>
              </xs:annotation>
            </xs:element>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:schema>";
}
```

This dfdl definition example can be found in the binary_example.dfdl.xsd file.

### Pubsub Setup

The following [doc](https://cloud.google.com/pubsub/docs/quickstart-console) can be used to set up the topics and subscriptions needed to run this example.

#### Topics

To run this example two topics need to be created:

1. A topic to publish the final json output: "data-output-json-topic"
2. A topic to publish the binary to be processed: "data-input-binary-topic"

#### Subscription

The following subscriptions need to be created:

1. A subscription to pull the binary data: data-input-binary-sub

## Usage

### Initialize the application

Reference: [Building an Application with Spring Boot](https://spring.io/guides/gs/spring-boot/)

```
./mvnw spring-boot:run
```

### Send a request

```
curl --data "message=0000000500779e8c169a54dd0a1b4a3fce2946f6" localhost:8081/publish
```
GCP
# A 'state-scalable' project factory pattern with Terragrunt

## Overview

Resolves the problem of state volume explosion with project factory. Terragrunt helps with that by:

1. Providing a dynamic way to configure [remote_state](https://terragrunt.gruntwork.io/docs/features/keep-your-remote-state-configuration-dry/#keep-your-remote-state-configuration-dry) for categories of resources in directories.
1. Providing [DRY](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself) configuration of source code by generating code in target directories using dynamic [source](https://terragrunt.gruntwork.io/docs/features/keep-your-terraform-code-dry/#motivation) definitions.
1. Drastically reducing the time needed to perform `terraform plan` or `terraform apply` by supporting [parallel](https://terragrunt.gruntwork.io/docs/features/execute-terraform-commands-on-multiple-modules-at-once/) execution of resource plans.

This pattern scales the 'factory' oriented approach of IaC implementation, facilitating both scalability of the Terraform state file size and developer productivity by minimizing the time to run *plans*. By providing mechanisms to create resource group definitions using both local and common data configurations through `defaults`, and implementing `DRY` code in a central `source`, it encourages a mature `Infrastructure as Data` implementation practice.

## Explanation

![Diagram](docs/images/image2.png)

Implementing a factory-oriented pattern for deploying resource groups is a common practice in IaC (Infrastructure as Code). This is typically done by having a configurable blueprint of data to describe the infrastructure, to avoid repetition of code. [Project-factory](https://registry.terraform.io/modules/terraform-google-modules/project-factory/google/latest) is a common manifestation of this requirement since, in GCP, projects need to be created ubiquitously. This pattern can, however, result in intractable state file sizes, which cause slow pipeline steps every time `terraform plan` or `apply` needs to run, because Terraform reads all resources from the Cloud environment. The general guideline from [Google Cloud's documentation](https://cloud.google.com/docs/terraform/best-practices-for-terraform#minimize-resources) is that each state file should not have more than 100 resources -- a number which itself can be obfuscated by the use of modules.

This example provides a solution to *state explosion* using Terragrunt. In addition, it describes a pattern such that the *Terraform source code* for implementing resources can be defined in the sub-directory as a data configuration, instead of repeating the code. Generally, this pattern splits IaC into `data` and `src` directories at the top level, with their configuration connected by `terragrunt.hcl` files at different levels of the file hierarchy.

In this example, the public `project-factory` module is used to create projects for `team1` and `team2` categories, while maintaining separate state files using Terragrunt's configuration. As described in the diagram above, the state files for each category would be stored at the following GCS bucket URL paths:

```
Team1 - gs://<bucket>/data/team1/default.state
Team2 - gs://<bucket>/data/team2/default.state
```

This is enabled by the root `terragrunt.hcl` file located under the repository root, defining a dynamic remote backend configuration that is resolved at the subdirectory level.
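The `remote_state` block shown next references values from `local.vars`. As a rough sketch that is not taken from the repository, those values might be loaded in the root `terragrunt.hcl` from the `root.yaml` file described under Requirements, for example:

```terraform
# Root -> terragrunt.hcl (illustrative sketch only)
locals {
  # root.yaml is assumed to define bucket_prefix, root_project and region,
  # matching the keys referenced by the remote_state block below.
  vars = yamldecode(file("${get_parent_terragrunt_dir()}/root.yaml"))
}
```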
```terraform
# Root -> terragrunt.hcl
remote_state {
  backend = "gcs"
  config = {
    bucket   = local.vars.bucket_prefix
    prefix   = path_relative_to_include()
    project  = local.vars.root_project
    location = local.vars.region
  }
}
```

Under the subdirectory for team1, the `include` block is defined and the `path` variable is set relative to the root configuration.

```
# Root -> data -> team1 -> terragrunt.hcl
include "root" {
  path = find_in_parent_folders()
}
```

### Dynamic source configuration

An additional configuration defined in the `terragrunt.hcl` file for team1 is the `terraform` block. It specifies that the resources described by this directory's data configuration are implemented by the code referenced under `terraform` -> `source`. Terragrunt manages a temporary instance of the source code inside a `.terragrunt-cache` directory, absolving the developer from maintaining several instances of the code base in different data subdirectories.

```terraform
# Root -> data -> team1 -> terragrunt.hcl
terraform {
  # Pull the terraform configuration from the local file system. Terragrunt will make a copy of the source folder in the
  # Terragrunt working directory (typically `.terragrunt-cache`).
  source = "../..//src"

  # Files to include from the Terraform src directory
  include_in_copy = [
    "main.tf", "variables.tf", "outputs.tf", "provider.tf"
  ]
}
```

In this example, the `terraform` configuration points to a local source code module in `src/`, which simply invokes the public `project-factory` module by flattening the specifications provided in the `default` and `data` directories. In practice, instead of local modules these can be modules hosted in private or public registries implementing IaC blueprints for common use-cases -- e.g. projects and supporting resources for Data Platform Teams and Application Teams.

## Requirements

A few resources need to be created before running Terragrunt. Either use the Terraform scripts under the `setup` folder or follow the manual steps given below. In either case, make sure the individual running the steps has the Folder Creator, Project Creator and Storage Admin roles.

### Setup by terraform scripts

1. `cd setup`
2. Create a terraform.tfvars from the sample with the correct org_id, billing_account, default region and bucket name where state will be stored.
3. Run the following commands. This creates the resources that are needed to run the sample Terragrunt project factory:
   - "terragrunt_test" folder under the org
   - "terragrunt-seedproject" project under the "terragrunt_test" folder
   - "terragrunt-iac-core-bkt" GCS bucket for storing state
   - "Team1" and "Team2" folders
   - root.yaml and defaults.yaml files generated inside the teams' directories from template files

```
terraform init
terraform plan
terraform apply
```

### Manual setup

- Create two folders where Terragrunt will create projects. Add the corresponding folder IDs in data/team1/defaults.yaml and data/team2/defaults.yaml.
- Create a project to store Terraform state and a GCS bucket in that project.
- Add the project_id and GCS bucket name in root.yaml.

## How to run

*Steps 2 to 5 can be skipped if you ran the setup scripts*

1. [Install Terragrunt](https://terragrunt.gruntwork.io/docs/getting-started/install/)
1. Create [folders](https://cloud.google.com/resource-manager/docs/creating-managing-folders#creating-folders) in your organization, similar to `team1` and `team2` as shown in the sample.
1. Create project files in the data \<category\> projects directories, similar to the `*.yaml.sample` files provided, for project-specific configurations.
1. Create a defaults.yaml file for each category, similar to the `defaults.yaml.sample` file provided, for common configurations.
1. Create a root.yaml file, similar to `root.yaml.sample`, for the remote backend configuration.

```
terragrunt run-all init
terragrunt run-all plan
terragrunt run-all apply
```

**Note: `terragrunt plan` or `apply` can be run directly in subdirectories (i.e. data/team1, etc.) with a `terragrunt.hcl` file, to create resources for each team. This is useful for separating pipelines. `terragrunt run-all` is useful for running all deployments at once and in parallel.**

## Variations

- A version of this pattern that integrates easily with [Fabric FAST](https://github.com/GoogleCloudPlatform/cloud-foundation-fabric/tree/master/blueprints/factories/project-factory) or the [Cloud Foundation Toolkit](https://github.com/GoogleCloudPlatform/cloud-foundation-toolkit)

## Resources

- [Terragrunt](https://terragrunt.gruntwork.io/docs/getting-started)
- [*Infrastructure as Data*](https://medium.com/dzerolabs/shifting-from-infrastructure-as-code-to-infrastructure-as-data-bdb1ae1840e3) Medium link
- [Splitting a monolithic Terraform state using Terragrunt](https://medium.com/cts-technologies/murdering-monoliths-using-terragrunt-to-split-monolithic-terraform-state-up-into-multiple-stacks-17ead2d8e0e9)

## TODO

1. Update the example with different service accounts for the team directories
1. Create branches for variations, e.g. integrating with the Fabric FAST project factory pattern

## Caveats

- Terragrunt has [restrictions](https://docs.gruntwork.io/guides/working-with-code/tfc-integration) when it comes to integrating with HashiCorp's Terraform Cloud or Terraform Cloud Enterprise platform. TL;DR: you can still use TCE/TC for storing states, monitoring and auditing, but cannot use the UI for Terraform runs. Initiating runs using the CLI is still possible.
# Dataproc Persistent History Server

This repo houses the example code for a blog post on using a persistent history server to view job history about your Spark / MapReduce jobs and aggregated YARN logs from short-lived clusters on GCS.

![Architecture Diagram](img/persistent-history-arch.png)

## Directory structure

- `cluster_templates/`
  - `history_server.yaml`
  - `ephemeral_cluster.yaml`
- `init_actions`
  - `disable_history_servers.sh`
- `workflow_templates`
  - `spark_mr_workflow_template.yaml`
- `terraform`
  - `variables.tf`
  - `network.tf`
  - `history-server.tf`
  - `history-bucket.tf`
  - `long-running-cluster.tf`
  - `firewall.tf`
  - `service-account.tf`

## Usage

The recommended way to run this example is to use Terraform, as it creates a VPC network to run the example with the appropriate firewall rules.

### Enabling services

```
gcloud services enable \
  compute.googleapis.com \
  dataproc.googleapis.com
```

### Pre-requisites

- [Install Google Cloud SDK](https://cloud.google.com/sdk/)
- Enable the following APIs if not already enabled:
  - `gcloud services enable compute.googleapis.com dataproc.googleapis.com`
- \[Optional\] [Install Terraform](https://learn.hashicorp.com/terraform/getting-started/install.html)

### Disclaimer

This is for example purposes only. You should take a much closer look at the firewall rules that make sense for your organization's security requirements.

### Should I run with Terraform or the `gcloud` SDK?

This repo provides artifacts to spin up the infrastructure for persisting job history and YARN logs with Terraform or with `gcloud`. The recommended way to use this is to modify the Terraform code to fit your needs for long-running resources. However, the cluster templates are included as an example of standardizing cluster creation for ephemeral clusters.

You might ask, "Why is there a cluster template for the history server?" The history server is simply a cleaner interface for reading your logs from GCS. For Spark, it is stateless and you may wish to only spin up a history server when you'll actually be using it. For MapReduce, the history server will only be aware of the files that were on GCS when it was created and those files which it moves from the intermediate done directory to the done directory. For this reason, MapReduce workflows should use a persistent history server.

### Managing Log Retention

Oftentimes, it makes sense to leave the history server running because several teams may use it, and you can configure it to manage cleanup of your logs by setting the following additional properties:

- `yarn:yarn.log-aggregation.retain-seconds`
- `spark:spark.history.fs.cleaner.enabled`
- `spark:spark.history.fs.cleaner.maxAge`
- `mapred:mapreduce.jobhistory.cleaner.enable`
- `mapred:mapreduce.jobhistory.max-age-ms`

### Terraform

To spin up the whole example, simply edit the `terraform.tfvars` file to set the variables to the desired values and run the following commands (a sketch of what this file might contain follows the list below). Note, this assumes that you have an existing project and sufficient permissions to spin up the resources for this example.

```
cd terraform
terraform init
terraform apply
```

This will create:

1. A VPC network and subnetwork for your Dataproc clusters.
1. Various firewall rules for this network.
1. A single-node Dataproc history server cluster.
1. A long-running Dataproc cluster.
1. A GCS bucket for YARN log aggregation and Spark / MapReduce job history, as well as initialization actions for your clusters.
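As a rough sketch of what `terraform.tfvars` might look like (the variable names below are illustrative only; the authoritative names are defined in `terraform/variables.tf`):

```
# terraform.tfvars -- illustrative values, adjust names and values to your environment
project        = "your-gcp-project-id"
region         = "us-central1"
zone           = "us-central1-f"
history_bucket = "your-history-bucket"
```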
### Alternatively, with Google Cloud SDK

These instructions detail how to run this entire example with `gcloud`.

1. Replace `PROJECT` with your GCP project ID in each file.
1. Replace `HISTORY_BUCKET` with your GCS bucket for logs in each file.
1. Replace `HISTORY_SERVER` with your Dataproc history server name.
1. Replace `REGION` with your desired GCP Compute region.
1. Replace `ZONE` with your desired GCP Compute zone.
1. Replace `SUBNET` with your subnetwork ID.

```
cd workflow_templates
sed -i 's/PROJECT/your-gcp-project-id/g' *
sed -i 's/HISTORY_BUCKET/your-history-bucket/g' *
sed -i 's/HISTORY_SERVER/your-history-server/g' *
sed -i 's/REGION/us-central1/g' *
sed -i 's/ZONE/us-central1-f/g' *
sed -i 's/SUBNET/your-subnet-id/g' *
cd ../cluster_templates
sed -i 's/PROJECT/your-gcp-project-id/g' *
sed -i 's/HISTORY_BUCKET/your-history-bucket/g' *
sed -i 's/HISTORY_SERVER/your-history-server/g' *
sed -i 's/REGION/us-central1/g' *
sed -i 's/ZONE/us-central1-f/g' *
sed -i 's/SUBNET/your-subnet-id/g' *
```

Stage an empty file to create the spark-events path on GCS.

```
touch .keep
gsutil cp .keep gs://your-history-bucket/spark-events/.keep
rm .keep
```

Stage our initialization action for disabling history servers on your ephemeral clusters. (The destination path below is an example; it must match the initialization action URI referenced in your cluster templates.)

```
gsutil cp init_actions/disable_history_servers.sh gs://your-history-bucket/init_actions/disable_history_servers.sh
```

Create the history server.

```sh
gcloud beta dataproc clusters import \
  history-server \
  --source=cluster_templates/history-server.yaml \
  --region=us-central1
```

Create a cluster which you can manually submit jobs to and tear down.

```sh
gcloud beta dataproc clusters import \
  ephemeral-cluster \
  --source=cluster_templates/ephemeral-cluster.yaml \
  --region=us-central1
```

### Running the Workflow Template

Import the workflow template to run an example Spark and Hadoop job to verify your setup is working.

```sh
gcloud dataproc workflow-templates import spark-mr-example \
  --source=workflow_templates/spark_mr_workflow_template.yaml
```

Trigger the workflow template to spin up a cluster, run the example jobs and tear it down.

```sh
gcloud dataproc workflow-templates instantiate spark-mr-example
```

### Viewing the History UI

Follow [these instructions](https://cloud.google.com/dataproc/docs/concepts/accessing/cluster-web-interfaces) to look at the UI by SSH tunneling to the history server.

Ports to visit:

- MapReduce Job History: 19888
- Spark Job History: 18080

### Closing Note

If you're adapting this example for your own use, consider the following:

1. Setting an appropriate retention for your logs (see the sketch below).
1. Setting more appropriate firewall rules for your security requirements.
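For example, log and job-history retention could be set at cluster creation time through Dataproc cluster properties, using the keys listed under "Managing Log Retention". The sketch below is illustrative only; the cluster name, region and retention values are placeholders:

```sh
# Sketch: tune retention to roughly 7 days on a new cluster (values are examples)
gcloud dataproc clusters create example-ephemeral-cluster \
  --region=us-central1 \
  --properties="yarn:yarn.log-aggregation.retain-seconds=604800,spark:spark.history.fs.cleaner.enabled=true,spark:spark.history.fs.cleaner.maxAge=7d,mapred:mapreduce.jobhistory.max-age-ms=604800000"
```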
# BigQuery Benchmark Repos

Customers new to BigQuery often have questions on how to best utilize the platform with regard to performance. For example, a common question which has routinely resurfaced in this area is the performance of file loads into BigQuery, specifically the optimal file parameters (file type, # columns, column types, file size, etc) for efficient load times. As a second example, when informing customers that queries run on external data sources are less efficient than those run on BigQuery managed tables, customers have followed up, asking exactly how much less efficient queries on external sources are.

While Google provides some high-level guidelines on BigQuery performance in these scenarios, we don't provide consistent metrics on how the above factors can impact performance. This repository seeks to create benchmarks to address these questions with quantitative data and a higher level of confidence, allowing more definitive answers when interacting with customers.

While this repository is intended to continue growing, it currently includes the following benchmarks:

### File Loader Benchmark

The File Loader Benchmark measures the effect of file properties on performance when loading files into BigQuery tables. Files are created using a combination of properties such as file type, compression type, number of columns, column types (such as 100% STRING vs 50% STRING / 50% NUMERIC), number of files, and the size of files. Once the files are created, they are loaded into BigQuery tables.

#### Benchmark Parameters

Specific file parameters are used in this project for performance testing. While the list of parameters is growing, the current list of parameters and values is as follows:

**File Type**:
* Avro
* CSV
* JSON
* Parquet

**Compression**:
* gzip (for CSV and JSON)
* snappy (for AVRO)

**Number of columns**:
* 10
* 100
* 1000

**Column Types**:
* String-only
* 50% String / 50% NUMERIC
* 10% String / 90% NUMERIC

**Number of files**:
* 1
* 100
* 1000
* 10000

**Target Data Size (Size of the BigQuery staging table used to generate each file)**:
* 10MB
* 100MB
* 1GB
* 2GB

These parameters are used to create combinations of file types stored in a bucket on GCS. An example of a file prefix generated from the above list of file parameters is `fileType=csv/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=100/tableSize=10MB/*`. This prefix holds 100 uncompressed CSV files, each generated from a 10 MB BigQuery staging table with 10 string columns. The tool loads the 100 CSV files with this prefix to a BigQuery table and records the performance to create a benchmark.

In the future, a parameter for slot type will be added with values for communal and reserved. In addition, ORC will be added as a value for file type, and struct/array types will be added to the values for column types.

### Federated Query Benchmark

The Federated Query Benchmark quantifies the difference in performance between queries on federated (external) and managed BigQuery tables. A variety of queries ranging in complexity will be created. These queries will be run on managed BigQuery tables and federated Google Cloud Storage files (including AVRO, CSV, JSON, and PARQUET) of identical schemas and sizes. The files created for the File Loader Benchmark will be reused here, both to run external queries against and to create BQ managed tables.

#### Benchmark Parameters

Parameters for this benchmark will include the type of table, type of query, and the table properties.
**Table Type**:
* `BQ_MANAGED`: Tables located within and managed by BigQuery.
* `EXTERNAL`: Data located in GCS files, which are used to create a temporary external table for querying.

**Query Type**:
* `SIMPLE_SELECT_*`: Select all columns and all rows.
* `SELECT_ONE_STRING`: Select the first string field in the schema. All schemas used in the benchmark contain at least one string field.
* `SELECT_50_PERCENT`: Select the first 50% of the table's fields.

Future iterations of this benchmark will include more complex queries, such as those that utilize joins, subqueries, window functions, etc.

Since the files created for the File Loader Benchmark will be reused for this benchmark, both the BQ managed tables and GCS files will share the File Loader Benchmark parameters, with the only difference being that the snappy compression type is not supported for federated queries and therefore will not be included for comparison.

**File Type**:
* Avro
* CSV
* JSON
* Parquet

**Compression**:
* gzip (for CSV and JSON)

**Number of columns**:
* 10
* 100
* 1000

**Column Types**:
* String-only
* 50% String / 50% NUMERIC
* 10% String / 90% NUMERIC

**Number of files**:
* 1
* 100
* 1000
* 10000

**Target Data Size (Size of the BigQuery staging table used to generate each file)**:
* 10MB
* 100MB
* 1GB
* 2GB

## Benchmark Results

#### BigQuery

The results of the benchmarks will be saved in a separate BigQuery table for ad hoc analysis. The results table will use the following schema: [results_table_schema.json](json_schemas/results_table_schema.json)

#### Data Studio

Once the results table is populated with data, Data Studio can be used to visualize results. See [this article](https://support.google.com/datastudio/answer/6283323?hl=en) to get started with Data Studio.

## Usage

This project contains the tools to create the resources needed to run the benchmarks. The main method for the project is located in [`bq_benchmark.py`](bq_benchmark.py).

### Prepping the Benchmark Resources from Scratch

The following steps create the resources needed for the benchmarks. Some steps are only needed for certain benchmarks, so feel free to skip them if you are only focused on a certain set of benchmarks.

#### 1. Create the Results Table (Needed for all benchmarks)

If running the whole project from scratch, the first step is to create a table in BigQuery to store the results of the benchmark loads. A JSON file has been provided in the json_schemas directory ([results_table_schema.json](json_schemas/results_table_schema.json)) with the above schema. The schema can be used to create the results table by running the following command:

```
python bq_benchmark.py \
--create_results_table \
--results_table_schema_path=<optional path to json schema for results table> \
--results_table_name=<results table name> \
--results_dataset_id=<dataset ID>
```

Parameters:

`--create_results_table`: Flag to indicate that a results table should be created. It has a value of `store_true`, so this flag will be set to False unless it is provided in the command.

`--results_table_schema_path`: Optional argument. It defaults to `json_schemas/results_table_schema.json`. If using a JSON schema in a different location, provide the path to that schema.

`--results_table_name`: String representing the name of the results table. Note that just the table name is needed, not the full project_id.dataset_id.table_name indicator.

`--results_dataset_id`: ID of the dataset to hold the results table.
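For instance, a concrete invocation using the default schema path might look like this (the dataset and table names are illustrative):

```
python bq_benchmark.py \
--create_results_table \
--results_table_name=benchmark_results \
--results_dataset_id=bq_benchmark_results
```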
#### 2. Select File Parameters (Needed for File Loader and Federated Query Benchmarks)

File parameters are used to help create the files needed for both the File Loader Benchmark and the Federated Query Benchmark. They can be configured in the `FILE_PARAMETERS` dictionary in [`generic_benchmark_tools/file_parameters.py`](generic_benchmark_tools/file_parameters.py).

Currently, no file parameters can be added to the dictionary, as this will cause errors. However, parameters can be removed from the dictionary if you are looking for a smaller set of file combinations. Note that the `numFiles` parameter has to include at least the number 1 to ensure that the subsequent numbers of files are properly created. This is because the program uses this first file to make copies to create the subsequent files, which is a much faster alternative than recreating identical files. For example, if you don't want 1000 or 10000 as `numFiles` values, you can take them out, but you must leave 1 (e.g. [1, 100]). That way the first file can be copied to create the 100 files.

#### 3. Create Schemas for the Benchmark Staging Tables (Needed for File Loader and Federated Query Benchmarks)

In order to create the files with the above parameters, the [Dataflow Data Generator tool](https://github.com/GoogleCloudPlatform/professional-services/tree/master/examples/dataflow-data-generator) from the Professional Services Examples library needs to be leveraged to create staging tables containing combinations of `columnTypes` and `numColumns` from the list of file parameters in [`generic_benchmark_tools/file_parameters.py`](generic_benchmark_tools/file_parameters.py). The staging tables will later be resized to match the sizes in the `targetDataSize` file parameter, and then they will be extracted to files in GCS. However, before any of this can be done, JSON schemas for the staging tables must be created. To do this, run the following command:

```
python bq_benchmark.py \
--create_benchmark_schemas \
--benchmark_table_schemas_directory=<optional directory where schemas should be stored>
```

Parameters:

`--create_benchmark_schemas`: Flag to indicate that benchmark schemas should be created. It has a value of `store_true`, so this flag will be set to False unless it is provided in the command.

`--benchmark_table_schemas_directory`: Optional argument for the directory where the schemas for the staging tables are to be stored. It defaults to `json_schemas/benchmark_table_schemas`. If you would prefer that the schemas are written to a different directory, provide that directory.

#### 4. Create Staging Tables (Needed for File Loader and Federated Query Benchmarks)

Once the schemas are created for the staging tables, the staging tables themselves can be created. This is a two-step process.

First, a set of staging tables is created using the data_generator_pipeline module in the [Dataflow Data Generator tool](https://github.com/GoogleCloudPlatform/professional-services/tree/master/examples/dataflow-data-generator) using the schemas created in step 3. One staging table is created for each combination of the columnTypes and numColumns file parameters. A small number of rows (500) is created in each staging table to get the process started. Once the tables are created, they are saved in a staging dataset. The names of the staging tables are generated using their respective columnTypes and numColumns parameters. For example, a staging table created using the 100_STRING `columnTypes` param and 10 `numColumns` would be named `100_STRING_10`.
Second, each staging table is used to create resized staging tables to match the sizes in the `targetDataSizes` parameter. This is accomplished using the [bq_table_resizer module](https://github.com/GoogleCloudPlatform/professional-services/blob/master/examples/dataflow-data-generator/bigquery-scripts/bq_table_resizer.py) of the [Dataflow Data Generator tool](https://github.com/GoogleCloudPlatform/professional-services/tree/master/examples/dataflow-data-generator). The resized staging tables are saved in a second staging dataset. The names of the resized staging tables are generated using the name of the staging table they were resized from, plus the `targetDataSizes` param. For example, the `100_STRING_10` staging table from above will be used to create the following four tables in the resized staging dataset: `100_STRING_10_10MB`, `100_STRING_10_100MB`, `100_STRING_10_1GB`, `100_STRING_10_2GB`.

To run the process of creating staging and resized staging tables, run the following command:

```
python bq_benchmark.py \
--create_staging_tables \
--bq_project_id=<ID of project holding BigQuery resources> \
--staging_dataset_id=<ID of dataset holding staging tables> \
--resized_staging_dataset_id=<ID of dataset holding resized staging tables> \
--benchmark_table_schemas_directory=<optional directory where staging table schemas are stored> \
--dataflow_staging_location=<path on GCS to serve as staging location for Dataflow> \
--dataflow_temp_location=<path on GCS to serve as temp location for Dataflow>
```

Parameters:

`--create_staging_tables`: Flag to indicate that staging and resized staging tables should be created. It has a value of `store_true`, so this flag will be set to False unless it is provided in the command.

`--bq_project_id`: The ID of the project that will hold the BigQuery resources for the benchmark, including all datasets, results tables, staging tables, and benchmark tables.

`--staging_dataset_id`: The ID of the dataset that will hold the first set of staging tables. For the tool to work correctly, the `staging_dataset_id` must only contain staging tables, and it must be different from the `--resized_staging_dataset_id`. Do not store tables for any other purposes in this dataset.

`--resized_staging_dataset_id`: The ID of the dataset that will hold the resized staging tables. For the tool to work correctly, the `resized_staging_dataset_id` must only contain resized staging tables, and it must be different from the `--staging_dataset_id`. Do not store tables for any other purposes in this dataset.

`--benchmark_table_schemas_directory`: Optional argument for the directory where the schemas for the staging tables are stored. It defaults to `json_schemas/benchmark_table_schemas`. If your schemas are elsewhere, provide that directory.

`--dataflow_staging_location`: Staging location for Dataflow on GCS. Include the 'gs://' prefix, the name of the bucket you want to use, and any prefix. For example, `gs://<bucket_name>/staging`. Note: be sure to use a different bucket than the one provided in the `--bucket_name` parameter used below with the `--create_files` flag and the benchmark runs.

`--dataflow_temp_location`: Temp location for Dataflow on GCS. Include the 'gs://' prefix, the name of the bucket you want to use, and any prefix. For example, `gs://<bucket_name>/temp`. Note: be sure to use a different bucket than the one provided in the `--bucket_name` parameter used below with the `--create_files` flag and the benchmark runs.
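For example, a concrete invocation might look like this (the project, dataset and bucket names below are illustrative):

```
python bq_benchmark.py \
--create_staging_tables \
--bq_project_id=my-benchmark-project \
--staging_dataset_id=benchmark_staging \
--resized_staging_dataset_id=benchmark_staging_resized \
--dataflow_staging_location=gs://my-dataflow-bucket/staging \
--dataflow_temp_location=gs://my-dataflow-bucket/temp
```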
#### 5. Create Files (Needed for File Loader and Federated Query Benchmarks)

Once the resized staging tables are created, the next step is to use the resized staging tables to create the files on GCS. The resized staging tables already contain combinations of the `columnTypes`, `numColumns`, and `targetDataSize` parameters. Now each of the resized staging tables must be extracted to combinations of files generated from the `fileType` and `compression` parameters. In each combination, the extraction is only done for the first file (`numFiles`=1). For example, the resized staging table `100_STRING_10_10MB` must be used to create the following files on GCS:

* fileType=avro/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=1/tableSize=10MB/file1.avro
* fileType=avro/compression=snappy/numColumns=10/columnTypes=100_STRING/numFiles=1/tableSize=10MB/file1.snappy
* fileType=csv/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=1/tableSize=10MB/file1.csv
* fileType=csv/compression=gzip/numColumns=10/columnTypes=100_STRING/numFiles=1/tableSize=10MB/file1.gzip
* fileType=json/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=1/tableSize=10MB/file1.json
* fileType=json/compression=gzip/numColumns=10/columnTypes=100_STRING/numFiles=1/tableSize=10MB/file1.gzip
* fileType=parquet/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=1/tableSize=10MB/file1.parquet

The method of extracting the resized staging table depends on the combination of parameters. BigQuery extract jobs are used if the `fileType` is csv or json, or if the `fileType` is avro and the resized staging table size is <= 1 GB. If the `fileType` is avro and the `targetDataSize` is > 1 GB, Dataflow is used to generate the file, since attempting to extract a staging table of this size to avro causes errors. If the `fileType` is parquet, Dataflow is used as well, since BigQuery extract jobs don't support the parquet file type.

Once the first file for each combination is generated (`numFiles`=1), it is copied to create the same combination of files, but where `numFiles` > 1. More specifically, it is copied 100 times for `numFiles`=100, 1000 times for `numFiles`=1000, and 10000 times for `numFiles`=10000. Copying is much faster than extracting each table tens of thousands of times.
As an example, the files listed above are copied to create the following 77,700 files:

* fileType=avro/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=100/tableSize=10MB/* (contains file1.avro - file100.avro)
* fileType=avro/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=1000/tableSize=10MB/* (contains file1.avro - file1000.avro)
* fileType=avro/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=10000/tableSize=10MB/* (contains file1.avro - file10000.avro)
* fileType=avro/compression=snappy/numColumns=10/columnTypes=100_STRING/numFiles=100/tableSize=10MB/* (contains file1.snappy - file100.snappy)
* fileType=avro/compression=snappy/numColumns=10/columnTypes=100_STRING/numFiles=1000/tableSize=10MB/* (contains file1.snappy - file1000.snappy)
* fileType=avro/compression=snappy/numColumns=10/columnTypes=100_STRING/numFiles=10000/tableSize=10MB/* (contains file1.snappy - file10000.snappy)
* fileType=csv/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=100/tableSize=10MB/* (contains file1.csv - file100.csv)
* fileType=csv/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=1000/tableSize=10MB/* (contains file1.csv - file1000.csv)
* fileType=csv/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=10000/tableSize=10MB/* (contains file1.csv - file10000.csv)
* fileType=csv/compression=gzip/numColumns=10/columnTypes=100_STRING/numFiles=100/tableSize=10MB/* (contains file1.gzip - file100.gzip)
* fileType=csv/compression=gzip/numColumns=10/columnTypes=100_STRING/numFiles=1000/tableSize=10MB/* (contains file1.gzip - file1000.gzip)
* fileType=csv/compression=gzip/numColumns=10/columnTypes=100_STRING/numFiles=10000/tableSize=10MB/* (contains file1.gzip - file10000.gzip)
* fileType=json/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=100/tableSize=10MB/* (contains file1.json - file100.json)
* fileType=json/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=1000/tableSize=10MB/* (contains file1.json - file1000.json)
* fileType=json/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=10000/tableSize=10MB/* (contains file1.json - file10000.json)
* fileType=json/compression=gzip/numColumns=10/columnTypes=100_STRING/numFiles=100/tableSize=10MB/* (contains file1.gzip - file100.gzip)
* fileType=json/compression=gzip/numColumns=10/columnTypes=100_STRING/numFiles=1000/tableSize=10MB/* (contains file1.gzip - file1000.gzip)
* fileType=json/compression=gzip/numColumns=10/columnTypes=100_STRING/numFiles=10000/tableSize=10MB/* (contains file1.gzip - file10000.gzip)
* fileType=parquet/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=100/tableSize=10MB/* (contains file1.parquet - file100.parquet)
* fileType=parquet/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=1000/tableSize=10MB/* (contains file1.parquet - file1000.parquet)
* fileType=parquet/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=10000/tableSize=10MB/* (contains file1.parquet - file10000.parquet)

To complete the process of creating files, run the following command:

```
python bq_benchmark.py \
--create_files \
--gcs_project_id=<ID of project holding GCS resources> \
--resized_staging_dataset_id=<ID of dataset holding resized staging tables> \
--bucket_name=<name of bucket to hold files> \
--dataflow_staging_location=<path on GCS to serve as staging location for Dataflow> \
--dataflow_temp_location=<path on GCS to serve as temp location for Dataflow> \
--restart_file=<optional file name to restart with if program is stopped>
```

Parameters:

`--create_files`: Flag to indicate that files should be created and stored on GCS. It has a value of `store_true`, so this flag will be set to False unless it is provided in the command.

`--gcs_project_id`: The ID of the project that will hold the GCS resources for the benchmark, including all files and the bucket that holds them.

`--resized_staging_dataset_id`: The ID of the dataset that holds the resized staging tables generated using the `--create_staging_tables` command.

`--bucket_name`: Name of the bucket that will hold the created files. Note that the only purpose of this bucket should be to hold the created files, and that files used for any other reason should be stored in a different bucket.

`--dataflow_staging_location`: Staging location for Dataflow on GCS. Include the 'gs://' prefix, the name of the bucket you want to use, and any prefix. For example, `gs://<bucket_name>/staging`. Note: be sure to use a different bucket than the one provided in the `--bucket_name` parameter.

`--dataflow_temp_location`: Temp location for Dataflow on GCS. Include the 'gs://' prefix, the name of the bucket you want to use, and any prefix. For example, `gs://<bucket_name>/temp`. Note: be sure to use a different bucket than the one provided in the `--bucket_name` parameter.

`--restart_file`: Optional file name to start the file creation process with. Creating each file combination can take hours, and often a backend error or a timeout will occur, preventing all the files from being created. If this happens, copy the last file that was successfully created from the logs and use it here. It should start with `fileType=` and end with the file extension. For example, `fileType=csv/compression=none/numColumns=10/columnTypes=100_STRING/numFiles=1000/tableSize=10MB/file324.csv`

### Running the benchmarks

#### File Loader Benchmark

Once the files are created, the File Loader Benchmark can be run. As a prerequisite for this step, a log sink in BigQuery that captures logs about BigQuery must be set up in the same project that holds the benchmark tables. If a BigQuery log sink is not already set up, follow [these steps](https://github.com/GoogleCloudPlatform/professional-services/tree/master/examples/bigquery-audit-log#1-getting-the-bigquery-log-data); a minimal `gcloud` sketch is also shown just before the command below.

Note that this benchmark will delete tables after recording information on load time. Before the tables are deleted, the tables and their respective files can be used to run the Federated Query Benchmark. If running the two benchmarks independently, each file will be used to create a BigQuery table two different times. Running the two benchmarks at the same time can save time if results for both benchmarks are desired. In this case, the `--include_federated_query_benchmark` flag can be added to the below command. Be aware that running the queries will add significant time to the benchmark run, so leave the flag out of the command if the primary goal is to obtain results for the File Loader Benchmark.
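A minimal sketch of creating such a log sink with `gcloud` (the sink name is illustrative and the filter shown is an assumption; follow the linked steps for the full setup, including granting the sink's writer identity access to the dataset):

```
gcloud logging sinks create bq-audit-sink \
bigquery.googleapis.com/projects/<bq_project_id>/datasets/<bq_logs_dataset> \
--log-filter='resource.type="bigquery_resource"'
```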
To run the benchmark, use the following command:

```
python bq_benchmark.py \
--run_file_loader_benchmark \
--bq_project_id=<ID of the project holding the BigQuery resources> \
--gcs_project_id=<ID of project holding GCS resources> \
--staging_project_id=<ID of project holding staging tables> \
--staging_dataset_id=<ID of dataset holding staging tables> \
--benchmark_dataset_id=<ID of the dataset holding the benchmark tables> \
--bucket_name=<name of bucket to hold files> \
--results_table_name=<Name of results table> \
--results_dataset_id=<Name of dataset holding results table> \
--duplicate_benchmark_tables \
--bq_logs_dataset=<Name of dataset holding BQ logs table> \
--include_federated_query_benchmark
```

Parameters:

`--run_file_loader_benchmark`: Flag to initiate the process of running the File Loader Benchmark by creating tables from files and storing results for comparison. It has a value of `store_true`, so this flag will be set to False unless it is provided in the command.

`--gcs_project_id`: The ID of the project that will hold the GCS resources for the benchmark, including all files and the bucket that holds them.

`--bq_project_id`: The ID of the project that will hold the BigQuery resources for the benchmark, including all datasets, results tables, staging tables, and benchmark tables.

`--staging_project_id`: The ID of the project that holds the first set of staging tables. While this will be the same as the `--bq_project_id` if running the project from scratch, it will differ from `--bq_project_id` if you are using file combinations that have already been created and running benchmarks/saving results in your own project.

`--staging_dataset_id`: The ID of the dataset that will hold the first set of staging tables. For the tool to work correctly, the `staging_dataset_id` must only contain staging tables, and it must be different from the `--resized_staging_dataset_id`. Do not store tables for any other purposes in this dataset.

`--benchmark_dataset_id`: The ID of the dataset that will hold the benchmark tables.

`--bucket_name`: Name of the bucket that will hold the file combinations to be loaded into benchmark tables. Note that the only purpose of this bucket should be to hold the file combinations, and that files used for any other reason should be stored in a different bucket.

`--results_table_name`: Name of the results table to hold relevant information about the benchmark loads.

`--results_dataset_id`: Name of the dataset that holds the results table.

`--duplicate_benchmark_tables`: Flag to indicate that a benchmark table should be created for a given file combination, even if that file combination has a benchmark table already. Creating multiple benchmark tables for each file combination can increase the accuracy of the average runtimes calculated from the results. If this behavior is desired, include the flag. However, if you want to ensure that you first have at least one benchmark table for each file combination, then leave the flag off. In that case, the benchmark creation process will skip a file combination if it already has a benchmark table.

`--bq_logs_dataset`: Name of the dataset holding the BQ logs table. This dataset must be in the project used for `--bq_project_id`.

`--include_federated_query_benchmark`: Flag to indicate that the Federated Query Benchmark should be run on the created tables and the files the tables were loaded from before the tables are deleted.
If results for both benchmarks are desired, this will save time when compared to running each benchmark independently, since the same tables needed for the File Loader Benchmark are needed for the Federated Query Benchmark. It has a value of `store_true`, so this flag will be set to False, unless it is provided in the command. #### Federated Query Benchmark Once the files are created, the Federated Query Benchmark can be run. As a prerequisite for this step, a log sink in BigQuery that captures logs about BigQuery must be set up in the same project that holds the benchmark tables. If a BigQuery log sink is not already set up, follow [these steps](https://github.com/GoogleCloudPlatform/professional-services/tree/master/examples/bigquery-audit-log#1-getting-the-bigquery-log-data). As mentioned above, the Federated Query Benchmark can be run while running the File Loader Benchmark in addition to using the command below. Note, though , that running federated queries on snappy compressed files is not supported. When the File Loader Benchmark encounters a snappy compressed file, it still loads the file into a BigQuery table to capture load results, but it will skip the Federated Query portion. When the Federated Query Benchmark encounters a snappy compressed file, it will skip the load all together. Therefore, if obtaining Federated Query Benchmark results is the primary goal, use the command below. It should also be noted that since the Federated Query Benchmark loads files into tables, the load results for the File Loader Benchmark will also be captured. This will not add significant time to the benchmark run since the tables have to be loaded regardless. To run the benchmark, use the following command: ``` python bq_benchmark.py \ --run_federated_query_benchmark \ --bq_project_id=<ID of the project holding the BigQuery resources> \ --gcs_project_id=<ID of project holding GCS resources> \ --staging_project_id=<ID of project holding staging tables> \ --staging_dataset_id=<ID of dataset holding staging tables> \ --benchmark_dataset_id=<ID of the dataset holding the benchmark tables> \ --bucket_name=<name of bucket to hold files> \ --results_table_name=<Name of results table> \ --results_dataset_id=<Name dataset holding results table> \ --bq_logs_dataset=<Name of dataset hold BQ logs table> ``` Parameters: `--run_federated_query_benchmark`: Flag to initiate the process running the Federated Query Benchmark by creating tables from files, running queries on both the table and the files, and storing performance results. It has a value of `store_true`, so this flag will be set to False, unless it is provided in the command. `--gcs_project_id`: The ID of the project that will hold the GCS resources for the benchmark, including all files and the bucket that holds them. `--bq_project_id`: The ID of the project that will hold the BigQuery resources for the benchmark, including all datasets, results tables, staging tables, and benchmark tables. `--staging_project_id`: The ID of the project that holds the first set of staging tables. While this will be the same as the `--bq_project_id` if running the project from scratch, it will differ from `--bq_project_id` if you are using file combinations that have already been created and running benchmarks/saving results in your own project. `--staging_dataset_id`: The ID of the dataset that will hold the first set of staging tables. 
For the tool to work correctly, the `staging_dataset_id` must only contain staging tables, and it must be different than the `--resized_staging_dataset_id`. Do not store tables for any other purposes in this dataset. `--dataset_id`: The ID of the dataset that will hold the benchmark tables. `--bucket_name`: Name of the bucket that will hold the file combinations to be loaded into benchmark tables. Note that the only purpose of this bucket should be to hold the file combinations, and that files used for any other reason should be stored in a different bucket. `--results_table_name`: Name of the results table to hold relevant information about the benchmark loads. `--results_dataset_id`: Name of the dataset that holds the results table. `--bq_logs_dataset`: Name of dataset hold BQ logs table. This dataset must be in project used for `--bq_project_id`. ## Testing Tests can be run by running the following command in the bq_file_load_benchmark directory: ``` python -m pytest --project_id=<ID of project that will hold test resources> ``` Note that the tests will create and destroy resources in the project denoted by `--project_id`.
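
Once a benchmark run has populated the results table, it can be inspected for ad hoc analysis. The snippet below is a minimal sketch and not part of the benchmark tool: it assumes the `google-cloud-bigquery` client library is installed, and the project, dataset, and table names are placeholders for the values you passed to `bq_benchmark.py`.

```python
from google.cloud import bigquery

# Placeholders: substitute the project, dataset, and results table names
# that were passed to bq_benchmark.py.
PROJECT_ID = "your-bq-project-id"
RESULTS_TABLE = f"{PROJECT_ID}.your_results_dataset.your_results_table"

client = bigquery.Client(project=PROJECT_ID)

# Print the schema of the results table to see which fields are available
# for ad hoc analysis.
table = client.get_table(RESULTS_TABLE)
for field in table.schema:
    print(f"{field.name}: {field.field_type}")

# Count how many benchmark results have been recorded so far.
query = f"SELECT COUNT(*) AS result_count FROM `{RESULTS_TABLE}`"
for row in client.query(query).result():
    print(f"Recorded benchmark results: {row.result_count}")
```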
# gRPC Example

This example creates a gRPC server that connects to Spanner to find the name of a user for a given user ID.

## Application Project Structure

```
.
└── grpc_example
    β”œβ”€β”€ src
    β”‚   └── main
    β”‚       β”œβ”€β”€ java
    β”‚       β”‚   └── com.example.grpc
    β”‚       β”‚       β”œβ”€β”€ client
    β”‚       β”‚       β”‚   └── ConnectClient          # Example Client
    β”‚       β”‚       β”œβ”€β”€ server
    β”‚       β”‚       β”‚   └── ConnectServer          # Initializes gRPC Server
    β”‚       β”‚       └── service
    β”‚       β”‚           └── ConnectService         # Implementation of rpc services in the proto
    β”‚       └── proto
    β”‚           └── connect_service.proto          # Proto definition of the server
    β”œβ”€β”€ pom.xml
    └── README.md
```

## Technology Stack

1. gRPC
2. Spanner

## Setup Instructions

### Project Setup

#### Creating a Project in the Google Cloud Platform Console

If you haven't already created a project, create one now. Projects enable you to manage all Google Cloud Platform resources for your app, including deployment, access control, billing, and services.

1. Open the [Cloud Platform Console][cloud-console].
1. In the drop-down menu at the top, select **Create a project**.
1. Give your project a name, for example `my-dfdl-project`.
1. Make a note of the project ID, which might be different from the project name. The project ID is used in commands and in configurations.

[cloud-console]: https://console.cloud.google.com/

#### Enabling billing for your project.

If you haven't already enabled billing for your project, [enable billing][enable-billing] now. Enabling billing is required to use Cloud Spanner and to create VM instances.

[enable-billing]: https://console.cloud.google.com/project/_/settings

#### Install the Google Cloud SDK.

If you haven't already installed the Google Cloud SDK, [install the Google Cloud SDK][cloud-sdk] now. The SDK contains tools and libraries that enable you to create and manage resources on Google Cloud Platform.

[cloud-sdk]: https://cloud.google.com/sdk/

#### Setting Google Application Default Credentials

Set your [Google Application Default Credentials][application-default-credentials] by [initializing the Google Cloud SDK][cloud-sdk-init] with the command:

```
gcloud init
```

Generate a credentials file by running the [application-default login](https://cloud.google.com/sdk/gcloud/reference/auth/application-default/login) command:

```
gcloud auth application-default login
```

[cloud-sdk-init]: https://cloud.google.com/sdk/docs/initializing
[application-default-credentials]: https://developers.google.com/identity/protocols/application-default-credentials

### Spanner Setup from the Console

1. Create an instance called grpc-example.
2. Create a database called grpc_example_db.
3. Create a table named Users using the following Data Definition Language (DDL) statement for the database:

```
CREATE TABLE Users (
  user_id INT64 NOT NULL,
  name STRING(MAX),
) PRIMARY KEY(user_id);
```

4. Insert the following record into the table Users:

```
INSERT INTO Users (user_id, name)
VALUES (1234,     -- type: INT64
        "MyName"  -- type: STRING(MAX)
);
```

## Usage

### Initialize the server

```
$ mvn -DskipTests package exec:java -Dexec.mainClass=com.example.grpc.server.ConnectServer
```

### Run the Client

```
$ mvn -DskipTests package exec:java -Dexec.mainClass=com.example.grpc.client.ConnectClient
```
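
If the client does not return the expected name, you can sanity-check the Spanner data directly. The example itself is Java, but the short Python sketch below works for a quick check; it is optional, not part of the example project, and assumes the `google-cloud-spanner` package is installed and the instance, database, and table created in the steps above.

```python
from google.cloud import spanner

# Instance and database created in the Spanner setup steps above.
client = spanner.Client()
instance = client.instance("grpc-example")
database = instance.database("grpc_example_db")

# Look up the name for user_id 1234, which was inserted above.
with database.snapshot() as snapshot:
    rows = snapshot.execute_sql(
        "SELECT name FROM Users WHERE user_id = @user_id",
        params={"user_id": 1234},
        param_types={"user_id": spanner.param_types.INT64},
    )
    for row in rows:
        print(f"User 1234 is named: {row[0]}")
```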
# Home Appliances’ Working Status Monitoring Using Gross Power Readings

The popularization of IoT devices and the evolution of machine learning technologies have brought tremendous opportunities for new businesses. We demonstrate how home appliances’ (e.g. kettle and washing machine) working status (on/off) can be inferred from gross power readings collected by a smart meter together with state-of-the-art machine learning techniques.

An end-to-end demo system is developed entirely on Google Cloud Platform, as shown in the following figure. It includes:

* Data collection and ingesting through Cloud IoT Core and Cloud Pub/Sub
* Machine learning model development using Tensorflow and training using Cloud Machine Learning Engine (CMLE)
* Machine Learning model serving using CMLE together with App Engine as frontend
* Data visualization and exploration using Colab

![system architecture](./img/arch.jpg)

## Steps to deploy the demo system

### Step 0. Prerequisite

Before you follow the instructions below to deploy our demo system, you need a Google Cloud project if you don't have one. You can find detailed instructions [here](https://cloud.google.com/dataproc/docs/guides/setup-project). After you have created a Google Cloud project, follow the instructions below:

```shell
# clone the repository storing all the necessary codes
git clone [REPO_URL]
cd professional-services/examples/e2e-home-appliance-status-monitoring

# remember your project's id in an environment variable
GOOGLE_PROJECT_ID=[your-google-project-id]

# create and download a service account
gcloud --project ${GOOGLE_PROJECT_ID} iam service-accounts create e2e-demo-sc
gcloud --project ${GOOGLE_PROJECT_ID} projects add-iam-policy-binding ${GOOGLE_PROJECT_ID} \
  --member "serviceAccount:e2e-demo-sc@${GOOGLE_PROJECT_ID}.iam.gserviceaccount.com" \
  --role "roles/owner"
gcloud --project ${GOOGLE_PROJECT_ID} iam service-accounts keys create e2e_demo_credential.json \
  --iam-account e2e-demo-sc@${GOOGLE_PROJECT_ID}.iam.gserviceaccount.com
GOOGLE_APPLICATION_CREDENTIALS=${PWD}"/e2e_demo_credential.json"

# create a new GCS bucket if you don't have one
BUCKET_NAME=[your-bucket-name]
gsutil mb -p ${GOOGLE_PROJECT_ID} gs://${BUCKET_NAME}/
```

You also need to enable the following APIs in the APIs & Services menu:

* Cloud ML Engine API
* Cloud IoT API
* Cloud PubSub API

### Step 1. Deploy a trained ML model in Cloud ML Engine.

You can download our trained model [from the `data` directory](./data/model.tar.bz2) or you can train your own model using `ml/start.sh`. Notice: you need to enable the Cloud ML Engine API first.

If you are using our trained model:

```shell
# download our trained model
tar jxvf data/model.tar.bz2

# upload the model to your bucket
gsutil cp -r model gs://${BUCKET_NAME}
```

If you want to train your own model:

```shell
pip install ml/
cd ml

# use one of the following commands and your model should be saved in your cloud storage bucket
# train locally with default parameters
bash start.sh -l
# train locally with specified parameters
bash start.sh -l learning-rate=0.00001 lstm-size=128
# train on Cloud ML Engine with default parameters
bash start.sh -p ${GOOGLE_PROJECT_ID} -b ${BUCKET_NAME}
# train on Cloud ML Engine with specified parameters
bash start.sh -p ${GOOGLE_PROJECT_ID} -b ${BUCKET_NAME} learning-rate=0.00001 lstm-size=128
# run hyper-parameter tuning on Cloud ML Engine
bash start.sh -p ${GOOGLE_PROJECT_ID} -b ${BUCKET_NAME} -t
```

Finally, let's deploy our model to ML Engine:

```shell
# Set up an appropriate region
# Available regions: https://cloud.google.com/ml-engine/docs/tensorflow/regions
REGION="your-application-region"

# create a model
gcloud ml-engine models create EnergyDisaggregationModel \
  --regions ${REGION} \
  --project ${GOOGLE_PROJECT_ID}

# create a model version
gcloud ml-engine versions create v01 \
  --model EnergyDisaggregationModel \
  --origin gs://${BUCKET_NAME}/model \
  --runtime-version 1.12 \
  --framework TensorFlow \
  --python-version 3.5 \
  --project ${GOOGLE_PROJECT_ID}
```

### Step 2. Deploy server.

Type in the following commands to start the server on App Engine.

```shell
cd e2e_demo/server
cp ${GOOGLE_APPLICATION_CREDENTIALS} .
echo " GOOGLE_APPLICATION_CREDENTIALS: '${GOOGLE_APPLICATION_CREDENTIALS##*/}'" >> app.yaml
echo " GOOGLE_CLOUD_PROJECT: '${GOOGLE_PROJECT_ID}'" >> app.yaml

# deploy to App Engine; choose any region that suits and answer yes at the end.
gcloud --project ${GOOGLE_PROJECT_ID} app deploy

# create a pubsub topic "data" and a subscription in the topic.
# this is the pubsub between IoT devices and the server.
gcloud --project ${GOOGLE_PROJECT_ID} pubsub topics create data
gcloud --project ${GOOGLE_PROJECT_ID} pubsub subscriptions create sub0 \
  --topic=data --push-endpoint=https://${GOOGLE_PROJECT_ID}.appspot.com/upload

# create a pubsub topic "pred" and a subscription in the topic.
# this is the pubsub between the server and a result utilizing client.
gcloud --project ${GOOGLE_PROJECT_ID} pubsub topics create pred
gcloud --project ${GOOGLE_PROJECT_ID} pubsub subscriptions create sub1 --topic=pred

# uncompress the data
bunzip2 data/*csv.bz2

# create BigQuery dataset and tables.
bq --project_id ${GOOGLE_PROJECT_ID} mk \
  --dataset ${GOOGLE_PROJECT_ID}:EnergyDisaggregation
bq --project_id ${GOOGLE_PROJECT_ID} load --autodetect \
  --source_format=CSV EnergyDisaggregation.ApplianceInfo \
  ./data/appliance_info.csv
bq --project_id ${GOOGLE_PROJECT_ID} load --autodetect \
  --source_format=CSV EnergyDisaggregation.ApplianceStatusGroundTruth \
  ./data/appliance_status_ground_truth.csv
bq --project_id ${GOOGLE_PROJECT_ID} mk \
  --table ${GOOGLE_PROJECT_ID}:EnergyDisaggregation.ActivePower \
  time:TIMESTAMP,device_id:STRING,power:INTEGER
bq --project_id ${GOOGLE_PROJECT_ID} mk \
  --table ${GOOGLE_PROJECT_ID}:EnergyDisaggregation.Predictions \
  time:TIMESTAMP,device_id:STRING,appliance_id:INTEGER,pred_status:INTEGER,pred_prob:FLOAT
```

### Step 3. Set up your Cloud IoT Client(s)

Follow the instructions below to set up your client(s). Note: you need to enable the Cloud IoT API first.

```shell
# You need to specify the IDs for the cloud iot registry and the devices you want.
# See more details for permitted characters and sizes for each resource:
# https://cloud.google.com/iot/docs/requirements#permitted_characters_and_size_requirements
REGISTRY_ID="your-registry-id"
DEVICE_IDS=("your-device-id1" "your-device-id2" ...)

# create an iot registry with the pubsub topic created above
gcloud --project ${GOOGLE_PROJECT_ID} iot registries create ${REGISTRY_ID} \
  --region ${REGION} --event-notification-config topic=data

# generate a key pair for the Cloud IoT device.
# The rs256.key generated by the following command will be used to create the JWT.
ssh-keygen -t rsa -b 4096 -f ./rs256.key  # Press "Enter" twice
openssl rsa -in ./rs256.key -pubout -outform PEM -out ./rs256.pub

# create multiple devices
for device in ${DEVICE_IDS[@]}; do
  # create an iot device with the generated public key and the registry.
  gcloud --project ${GOOGLE_PROJECT_ID} iot devices create ${device} \
    --region ${REGION} --public-key path=./rs256.pub,type=rsa-pem \
    --registry ${REGISTRY_ID} --log-level=debug
done

# download root ca certificates for use in the mqtt client communicating with the iot server.
# alternatively, download using curl: curl -o roots.pem https://pki.goog/roots.pem
wget https://pki.goog/roots.pem --no-check-certificate
```

### Complete! Try the demo system in colab or locally.

If you want to use colab, visit https://colab.research.google.com/ and you can import our notebooks either directly from Github or upload them from your cloned repository. Follow the instructions in the notebooks and you should be able to reproduce our results.

If you want to run the demo locally:

```
cd notebook
virtualenv env
source env/bin/activate
pip install jupyter
jupyter notebook
```

*notebook/EnergyDisaggregationDemo_View.ipynb* allows you to view raw power consumption data and our model's prediction results in almost real time. It pulls data from our server's pubsub topic for visualization. Fill in the necessary information in the *Configuration* block and run all the cells.

*notebook/EnergyDisaggregationDemo_Client.ipynb* simulates multiple smart meters by reading in power consumption data from a real world dataset and sends the readings to our server. All Cloud IoT Core related code resides in this notebook. Fill in the necessary information in the *Configuration* block and run all the cells. Once you see messages being sent, you should be able to see plots like the one shown below in *notebook/EnergyDisaggregationDemo_View.ipynb*.

![Demo system sample output](./img/demo03.gif)
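
If you would rather check the prediction stream from a plain script instead of the viewer notebook, you can pull messages from the `sub1` subscription created in Step 2. This is a minimal sketch, not part of the demo code: it assumes the `google-cloud-pubsub` package is installed, and since the payload format is whatever the demo server publishes to the `pred` topic, the message data is printed as-is.

```python
import os
from concurrent.futures import TimeoutError

from google.cloud import pubsub_v1

# Subscription created in Step 2 with:
#   gcloud pubsub subscriptions create sub1 --topic=pred
project_id = os.environ["GOOGLE_PROJECT_ID"]
subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(project_id, "sub1")


def callback(message):
    # The payload format is defined by the demo server; print it unmodified.
    print(f"Received prediction message: {message.data!r}")
    message.ack()


streaming_pull_future = subscriber.subscribe(subscription_path, callback=callback)
print(f"Listening for predictions on {subscription_path}...")

try:
    # Pull messages for 60 seconds, then stop.
    streaming_pull_future.result(timeout=60)
except TimeoutError:
    streaming_pull_future.cancel()
    streaming_pull_future.result()
```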
GCP
Home Appliances Working Status Monitoring Using Gross Power Readings The popularization of IoT devices and the evolvement of machine learning technologies have brought tremendous opportunities for new businesses We demonstrate how home appliances e g kettle and washing machine working status on off can be inferred from gross power readings collected by a smart meter together with state of art machine learning techniques An end to end demo system is developed entirely on Google Cloud Platform as shown in the following figure It includes Data collection and ingesting through Cloud IoT Core and Cloud Pub Sub Machine learning model development using Tensorflow and training using Cloud Machine Learning Engine CMLE Machine Learning model serving using CMLE together with App Engine as frontend Data visualization and exploration using Colab system architecture img arch jpg Steps to deploy the demo system Step 0 Prerequisite Before you follow the instructions below to deploy our demo system you need a Google cloud project if you don t have one You can find detailed instructions here https cloud google com dataproc docs guides setup project After you have created a google cloud project follow the instructions below shell clone the repository storing all the necessary codes git clone REPO URL cd professional services examples e2e home appliance status monitoring remember your project s id in an environment variable GOOGLE PROJECT ID your google project id create and download a service account gcloud project GOOGLE PROJECT ID iam service accounts create e2e demo sc gcloud project GOOGLE PROJECT ID projects add iam policy binding GOOGLE PROJECT ID member serviceAccount e2e demo sc GOOGLE PROJECT ID iam gserviceaccount com role roles owner gcloud project GOOGLE PROJECT ID iam service accounts keys create e2e demo credential json iam account e2e demo sc GOOGLE PROJECT ID iam gserviceaccount com GOOGLE APPLICATION CREDENTIALS PWD e2e demo credential json create a new GCS bucket if you don t have one BUCKET NAME your bucket name gsutil mb p GOOGLE PROJECT ID gs BUCKET NAME You also need to enable the following APIs in the APIs Services menu Cloud ML Engine API Cloud IoT API Cloud PubSub API Step 1 Deploy a trained ML model in Cloud ML Engine You can download our trained model from the data directory data model tar bz2 or you can train your own model using the ml start sh Notice you need to enable CLoud ML Engine API first If you are using our trained model shell download our trained model tar jxvf data model tar bz2 upload the model to your bucket gsutil cp r model gs BUCKET NAME If you want to train your own model shell pip install ml cd ml use one of the following commands and your model should be saved in your cloud storage bucket train locally with default parameter bash start sh l train locally with specified parameters bash start sh l learning rate 0 00001 lstm size 128 train on Cloud ML Engine with default parameter bash start sh p GOOGLE PROJECT ID b BUCKET NAME train on Cloud ML Engine with specified parameters bash start sh p GOOGLE PROJECT ID b BUCKET NAME learning rate 0 00001 lstm size 128 run hyper parameter tuning on Cloud ML Engine bash start sh p GOOGLE PROJECT ID b BUCKET NAME t Finally let s deploy our model to ML engine shell Set up an appropriate region Available regions https cloud google com ml engine docs tensorflow regions REGION your application region create a model gcloud ml engine models create EnergyDisaggregationModel regions REGION project GOOGLE PROJECT ID create a model 
version gcloud ml engine versions create v01 model EnergyDisaggregationModel origin gs BUCKET NAME model runtime version 1 12 framework TensorFlow python version 3 5 project GOOGLE PROJECT ID Step 2 Deploy server Type in the following commands to start server in app engine shell cd e2e demo server cp GOOGLE APPLICATION CREDENTIALS echo GOOGLE APPLICATION CREDENTIALS GOOGLE APPLICATION CREDENTIALS app yaml echo GOOGLE CLOUD PROJECT GOOGLE PROJECT ID app yaml deploy application engine choose any region that suits and answer yes at the end gcloud project GOOGLE PROJECT ID app deploy create a pubsub topic data and a subscription in the topic this is the pubsub between IoT devices and the server gcloud project GOOGLE PROJECT ID pubsub topics create data gcloud project GOOGLE PROJECT ID pubsub subscriptions create sub0 topic data push endpoint https GOOGLE PROJECT ID appspot com upload create a pubsub topic pred and a subscription in the topic this is the pubsub between the server and a result utilizing client gcloud project GOOGLE PROJECT ID pubsub topics create pred gcloud project GOOGLE PROJECT ID pubsub subscriptions create sub1 topic pred uncompress the data bunzip data csv bz2 create BigQuery dataset and tables bq project id GOOGLE PROJECT ID mk dataset GOOGLE PROJECT ID EnergyDisaggregation bq project id GOOGLE PROJECT ID load autodetect source format CSV EnergyDisaggregation ApplianceInfo data appliance info csv bq project id GOOGLE PROJECT ID load autodetect source format CSV EnergyDisaggregation ApplianceStatusGroundTruth data appliance status ground truth csv bq project id GOOGLE PROJECT ID mk table GOOGLE PROJECT ID EnergyDisaggregation ActivePower time TIMESTAMP device id STRING power INTEGER bq project id GOOGLE PROJECT ID mk table GOOGLE PROJECT ID EnergyDisaggregation Predictions time TIMESTAMP device id STRING appliance id INTEGER pred status INTEGER pred prob FLOAT Step 3 Setup your Cloud IoT Client s Follow the instructions below to set up your client s Note you need to enable the Cloud IoT API first shell You need to specify the IDs for cloud iot registry and the devices you want See more details for permitted characters and sizes for each resource https cloud google com iot docs requirements permitted characters and size requirements REGISTRY ID your registry id DEVICE IDS your device id1 your device id2 create an iot registry with created pubsub topic above gcloud project GOOGLE PROJECT ID iot registries create REGISTRY ID region REGION event notification config topic data generates key pair for setting in Cloud IoT device The rs256 key generated by the following command would be used to create JWT ssh keygen t rsa b 4096 f rs256 key Press Enter twice openssl rsa in rs256 key pubout outform PEM out rs256 pub create multiple devices for device in DEVICE IDS do create an iot device with generated public key and the registry gcloud project GOOGLE PROJECT ID iot devices create device region REGION public key path rs256 pub type rsa pem registry REGISTRY ID log level debug done download root ca certificates for use in mqtt client communicated with iot server download using curl curl o roots pem https pki goog roots pem wget https pki goog roots pem no check certificate Complete Try the demo system in colab or locally If you want to use colab visit https colab research google com and you can import our notebooks either directly from Github or upload from your cloned repository Follow the instructions in the notebooks and you should be able to reproduce our results If you want to 
run the demo locally cd notebook virtualenv env source env bin activate pip install jupyter jupyter notebook notebook EnergyDisaggregationDemo View ipynb allows you to view raw power consumption data and our model s prediction results in almost real time It pulls data from our server s pubsub topic for visualization Fill in the necessary information in the Configuration block and run all the cells notebook EnergyDisaggregationDemo Client ipynb simulates multiple smart meters by reading in power consumption data from a real world dataset and sends the readings to our server All Cloud IoT Core related code resides in this notebook Fill in the necessary information in the Configuration block and run all the cells once you see messages being sent you should be able to see plots like the one shown below in notebook EnergyDisaggregationDemo View ipynb Demo system sample output img demo03 gif
Contains several examples and solutions to common use cases observed in various scenarios - [Ingesting data from a file into BigQuery](#ingesting-data-from-a-file-into-bigquery) - [Transforming data in Dataflow](#transforming-data-in-dataflow) - [Joining file and BigQuery datasets in Dataflow](#joining-file-and-bigquery-datasets-in-dataflow) - [Ingest data from files into Bigquery reading the file structure from Datastore](#ingest-data-from-files-into-bigquery-reading-the-file-structure-from-datastore) - [Data lake to data mart](#data-lake-to-data-mart) The solutions below become more complex as we incorporate more Dataflow features. ## Ingesting data from a file into BigQuery ![Alt text](img/csv_file_to_bigquery.png?raw=true "CSV file to BigQuery") This example shows how to ingest a raw CSV file into BigQuery with minimal transformation. It is the simplest example and a great one to start with in order to become familiar with Dataflow. There are three main steps: 1. [Read in the file](pipelines/data_ingestion.py#L100-L106). 2. [Transform the CSV format into a dictionary format](pipelines/data_ingestion.py#L107-L113). 3. [Write the data to BigQuery](pipelines/data_ingestion.py#L114-L126). ### Read data in from the file. ![Alt text](img/csv_file.png?raw=true "CSV file") Using the built in TextIO connector allows beam to have several workers read the file in parallel. This allows larger file sizes and large number of input files to scale well within beam. Dataflow will read in each row of data from the file and distribute the data to the next stage of the data pipeline. ### Transform the CSV format into a dictionary format. ![Alt text](img/custom_python_code.png?raw=true "Custom Python code") This is the stage of the code where you would typically put your business logic. In this example we are simply transforming the data from a CSV format into a python dictionary. The dictionary maps column names to the values we want to store in BigQuery. ### Write the data to BigQuery. ![Alt text](img/output_to_bigquery.png?raw=true "Output to BigQuery") Writing the data to BigQuery does not require custom code. Passing the table name and a few other optional arguments into BigQueryIO sets up the final stage of the pipeline. This stage of the pipeline is typically referred to as our sink. The sink is the final destination of data. No more processing will occur in the pipeline after this stage. ### Full code examples Ready to dive deeper? Check out the complete code [here](pipelines/data_ingestion.py). ## Transforming data in Dataflow ![Alt text](img/csv_file_to_bigquery.png?raw=true "CSV file to BigQuery") This example builds upon simple ingestion, and demonstrates some basic data type transformations. In line with the previous example there are 3 steps. The transformation step is made more useful by translating the date format from the source data into a date format BigQuery accepts. 1. [Read in the file](pipelines/data_transformation.py#L136-L142). 2. [Transform the CSV format into a dictionary format and translate the date format](pipelines/data_transformation.py#L143-L149). 3. [Write the data to BigQuery](pipelines/data_transformation.py#L150-L161). ### Read data in from the file. ![Alt text](img/csv_file.png?raw=true "CSV file") Similar to the previous example, this example uses TextIO to read the file from Google Cloud Storage. ### Transform the CSV format into a dictionary format. 
![Alt text](img/custom_python_code.png?raw=true "Custom Python code") This example builds upon the simpler ingestion example by introducing data type transformations. ### Write the data to BigQuery. ![Alt text](img/output_to_bigquery.png?raw=true "Output to BigQuery") Just as in our previous example, this example uses BigQuery IO to write out to BigQuery. ### Full code examples Ready to dive deeper? Check out the complete code [here](pipelines/data_transformation.py). ## Joining file and BigQuery datasets in Dataflow ![Alt text](img/csv_join_bigquery_to_bigquery.png?raw=true "CSV file joined with BigQuery data to BigQuery") This example demonstrates how to work with two datasets. A primary dataset is read from a file, and another dataset containing reference data is read from BigQuery. The two datasets are then joined in Dataflow before writing the joined dataset to BigQuery. This pipeline contains 4 steps: 1. [Read in the primary dataset from a file](pipelines/data_enrichment.py#L165-L176). 2. [Read in the reference data from BigQuery](pipelines/data_enrichment.py#L155-L163). 3. [Custom Python code](pipelines/data_enrichment.py#L138-L143) is used to [join the two datasets](pipelines/data_enrichment.py#L177-L180). 4. [The joined dataset is written out to BigQuery](pipelines/data_enrichment.py#L181-L194). ### Read in the primary dataset from a file ![Alt text](img/csv_file.png?raw=true "CSV file") Similar to previous examples, we use TextIO to read the dataset from a CSV file. ### Read in the reference data from BigQuery ![Alt text](img/import_state_name_from_bigquery.png?raw=true "Import state name data from BigQuery") Using BigQueryIO, we can specify a query to read data from. Dataflow then is able to distribute the data from BigQuery to the next stages in the pipeline. In this example the additional dataset is represented as a side input. Side inputs in Dataflow are typically reference datasets that fit into memory. Other examples will explore alternative methods for joining datasets which work well for datasets that do not fit into memory. ### Custom Python code is used to join the two datasets ![Alt text](img/3_custom_python_code.png?raw=true "Custom python code") Using custom python code, we join the two datasets together. Because the two datasets are dictionaries, the python code is the same as it would be for unioning any two python dictionaries. ### The joined dataset is written out to BigQuery ![Alt text](img/4_output_to_bigquery.png?raw=true "Custom python code") Finally the joined dataset is written out to BigQuery. This uses the same BigQueryIO API which is used in previous examples. ### Full code examples Ready to dive deeper? Check out the complete code [here](pipelines/data_enrichment.py). 
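As a rough illustration of the side-input join described above (this is not the repo's `data_enrichment.py`; the column names, table and file paths below are made up), the pattern looks roughly like this:

```python
import apache_beam as beam


def parse_csv(line):
    # Illustrative parser: maps CSV columns to a dict for BigQuery.
    state, gender, year, name, number = line.split(',')
    return {'state': state, 'gender': gender, 'year': year,
            'name': name, 'number': number}


def add_state_name(row, state_names):
    # state_names is the side input, materialized as a dict on each worker.
    row['state_name'] = state_names.get(row['state'], 'Unknown')
    return row


with beam.Pipeline() as p:
    # Reference data from BigQuery, turned into (abbreviation, full name) pairs.
    states = (p
              | 'ReadStates' >> beam.io.ReadFromBigQuery(
                  query='SELECT state_abbreviation, state_name FROM `my_dataset.state_names`',  # made-up table
                  use_standard_sql=True)
              | 'ToKV' >> beam.Map(lambda r: (r['state_abbreviation'], r['state_name'])))

    # Primary dataset from a CSV file in GCS.
    enriched = (p
                | 'ReadFile' >> beam.io.ReadFromText('gs://my-bucket/usa_names.csv',  # made-up path
                                                     skip_header_lines=1)
                | 'Parse' >> beam.Map(parse_csv)
                | 'Join' >> beam.Map(add_state_name,
                                     state_names=beam.pvalue.AsDict(states)))

    # Same BigQueryIO sink as in the other examples.
    enriched | 'Write' >> beam.io.WriteToBigQuery(
        'my_dataset.usa_names_enriched',  # made-up table
        write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER)
```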
## Ingest data from files into BigQuery reading the file structure from Datastore

In this example we create a Python [Apache Beam](https://beam.apache.org/) pipeline running on [Google Cloud Dataflow](https://cloud.google.com/dataflow/) to import CSV files into BigQuery using the following architecture:

![Apache Beam pipeline to import CSV into BQ](img/data_ingestion_configurable.jpg)

The architecture uses:
* [Google Cloud Storage](https://cloud.google.com/storage/) to store CSV source files
* [Google Cloud Datastore](https://cloud.google.com/datastore/docs/concepts/overview) to store CSV file structure and field type
* [Google Cloud Dataflow](https://cloud.google.com/dataflow/) to read files from Google Cloud Storage, transform data based on the structure of the file and import the data into Google BigQuery
* [Google BigQuery](https://cloud.google.com/bigquery/) to store data in a Data Lake.

You can use this script as a starting point to import your files into Google BigQuery. You'll probably need to adapt the script logic to your file name structure or to your particular needs.

### 1. Prerequisites
- An up-and-running GCP project with an enabled billing account
- gcloud installed and initialized to your project
- Google Cloud Datastore enabled
- Google Cloud Dataflow API enabled
- Google Cloud Storage Bucket containing the files to import (CSV format) using the following naming convention: `TABLENAME_*.csv`
- Google Cloud Storage Bucket for temp and staging Google Dataflow files
- Google BigQuery dataset
- [Python](https://www.python.org/) >= 2.7 and the python-dev module
- gcc
- Google Cloud [Application Default Credentials](https://cloud.google.com/sdk/gcloud/reference/auth/application-default/login)

### 2. Create virtual environment
Create a new virtual environment (recommended) and install requirements:
```
virtualenv env
source ./env/bin/activate
pip install -r requirements.txt
```

### 3. Configure Table schema
Create a file that contains the structure of the CSVs to be imported. The filename needs to follow the convention: `TABLENAME.csv`. Example:
```
name,STRING
surname,STRING
age,INTEGER
```
You can check parameters accepted by the `datastore_schema_import.py` script with the following command:
```
python pipelines/datastore_schema_import.py --help
```
Run the `datastore_schema_import.py` script to create the entry in Google Cloud Datastore using the following command:
```
python pipelines/datastore_schema_import.py --schema-file=<path_to_TABLENAME.csv>
```

### 4. Upload files into Google Cloud Storage
Upload the files to be imported into Google BigQuery into a Google Cloud Storage bucket. You can use `gsutil` with a command like:
```
gsutil cp [LOCAL_OBJECT_LOCATION] gs://[DESTINATION_BUCKET_NAME]/
```
To optimize the upload of big files see the [documentation](https://cloud.google.com/solutions/transferring-big-data-sets-to-gcp). Files need to be in CSV format, with the column names as the first row. For example:
```
name,surname,age
test_1,test_1,30
test_2,test_2,40
"test_3, jr",surname,50
```

### 5. Run pipeline
You can check parameters accepted by the `data_ingestion_configurable.py` script with the following command:
```
python pipelines/data_ingestion_configurable.py --help
```
You can run the pipeline locally with the following command:
```
python pipelines/data_ingestion_configurable.py \
  --project=###PUT HERE PROJECT ID### \
  --input-bucket=###PUT HERE GCS BUCKET NAME: gs://bucket_name ### \
  --input-path=###PUT HERE INPUT FOLDER### \
  --input-files=###PUT HERE FILE NAMES### \
  --bq-dataset=###PUT HERE BQ DATASET NAME###
```
or you can run the pipeline on Google Dataflow using the following command:
```
python pipelines/data_ingestion_configurable.py \
  --runner=DataflowRunner \
  --max_num_workers=100 \
  --autoscaling_algorithm=THROUGHPUT_BASED \
  --region=###PUT HERE REGION### \
  --staging_location=###PUT HERE GCS STAGING LOCATION### \
  --temp_location=###PUT HERE GCS TMP LOCATION### \
  --project=###PUT HERE PROJECT ID### \
  --input-bucket=###PUT HERE GCS BUCKET NAME### \
  --input-path=###PUT HERE INPUT FOLDER### \
  --input-files=###PUT HERE FILE NAMES### \
  --bq-dataset=###PUT HERE BQ DATASET NAME###
```

### 6. Check results
You can check the data imported into Google BigQuery from the Google Cloud Console UI.

## Data lake to data mart
![Alt text](img/data_lake_to_data_mart.png?raw=true "Data lake to data mart")

This example demonstrates joining data from two different datasets in BigQuery and applying transformations to the joined dataset before uploading it to BigQuery. Joining two datasets from BigQuery is a common use case when a data lake has been implemented in BigQuery. Creating a data mart with denormalized datasets facilitates better performance when using visualization tools.

This pipeline contains 4 steps:

1. [Read in the primary dataset from BigQuery](pipelines/data_lake_to_mart.py#L278-L283).
2. [Read in the reference data from BigQuery](pipelines/data_lake_to_mart.py#L248-L276).
3. [Custom Python code](pipelines/data_lake_to_mart.py#L210-L224) is used to [join the two datasets](pipelines/data_lake_to_mart.py#L284-L287). Alternatively, [CoGroupByKey can be used to join the two datasets](pipelines/data_lake_to_mart_cogroupbykey.py#L300-L310).
4. [The joined dataset is written out to BigQuery](pipelines/data_lake_to_mart.py#L288-L301).

### Read in the primary dataset from BigQuery
![Alt text](img/1_query_orders.png?raw=true "Read from BigQuery")

Similar to previous examples, we use BigQueryIO to read the dataset from the results of a query. In this case our main dataset is a fake orders dataset, containing a history of orders and associated data like quantity.

### Read in the reference data from BigQuery
![Alt text](img/2_query_account_details.png?raw=true "Import state name data from BigQuery")

In this example we use a fake account details dataset. This represents a common use case for denormalizing a dataset. The account details information contains attributes linked to the accounts in the orders dataset, for example the address and city of the account.

### Custom Python code is used to join the two datasets
![Alt text](img/3_custom_python_code.png?raw=true "Custom python code")

Using custom python code, we join the two datasets together. We provide two examples of joining these datasets. The first example uses side inputs, which require that the dataset fit into memory. The second example demonstrates how to use CoGroupByKey to join the datasets. CoGroupByKey will facilitate joins between two datasets even if neither fits into memory.
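The general shape of the CoGroupByKey approach can be sketched as follows; the column and table names are made up, and this is not the actual code in `data_lake_to_mart_cogroupbykey.py`.

```python
import apache_beam as beam


def flatten_join(element):
    # element is (account_number, {'orders': [...], 'account_details': [...]})
    _, grouped = element
    for order in grouped['orders']:
        for details in grouped['account_details']:
            # Merge the two dictionaries into one denormalized row.
            yield {**order, **details}


with beam.Pipeline() as p:
    orders = (p
              | 'ReadOrders' >> beam.io.ReadFromBigQuery(
                  query='SELECT * FROM `my_dataset.orders`',            # made-up table
                  use_standard_sql=True)
              | 'KeyOrders' >> beam.Map(lambda row: (row['acct_number'], row)))

    account_details = (p
                       | 'ReadAccounts' >> beam.io.ReadFromBigQuery(
                           query='SELECT * FROM `my_dataset.account_details`',  # made-up table
                           use_standard_sql=True)
                       | 'KeyAccounts' >> beam.Map(lambda row: (row['acct_number'], row)))

    joined = ({'orders': orders, 'account_details': account_details}
              | 'CoGroupByKey' >> beam.CoGroupByKey()
              | 'FlattenJoin' >> beam.FlatMap(flatten_join))
```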
Explore the comments in the two code examples for a more in-depth explanation.

### The joined dataset is written out to BigQuery
![Alt text](img/4_output_to_bigquery.png?raw=true "Custom python code")

Finally, the joined dataset is written out to BigQuery. This uses the same BigQueryIO API which is used in previous examples.

### Full code examples
Ready to dive deeper? Check out the complete code. The example using side inputs is [here](pipelines/data_lake_to_mart.py) and the example using CoGroupByKey is [here](pipelines/data_lake_to_mart_cogroupbykey.py).
# Stackdriver alerts for missing timeseries This example demonstrates creating alerts for missing monitoring data with Stackdriver in a way that duplicate alerts are not generated. For example, suppose that you have 100 timeseries and you want to find out when any one of them is missing. If one or two timeseries are missing, you want exactly one alert. When there is a total outage you still want to get one alert, not 100 alerts. The test app generates time series with a custom metric called task_latency_distribution with tags based on a partition label. Each partition generates its own time series. The example assumes that you are familiar with Stackdriver Monitoring and Alerting. It builds on the discussion in [Alerting policies in depth](https://cloud.google.com/monitoring/alerts/concepts-indepth). ## Setup Clone this repo and change to this working directory. Enable the Stackdriver Monitoring API ```shell gcloud services enable monitoring.googleapis.com ``` In the GCP Console, go to [Monitoring](https://console.cloud.google.com/monitoring). If you have not already created a workspace for this project before, click New workspace, and then click Add. It takes a few minutes to create the workspace. Click Alerting | Policies overview. The list should be empty at this point unless you have created policies previously. ## Deploy the app The example code is based on the Go code in [Custom metrics with OpenCensus](https://cloud.google.com/monitoring/custom-metrics/open-census). [Download](https://golang.org/dl/) and install the latest version of Go. Build the test app ```shell go build ``` The instructions here are based on the article [Setting up authentication](https://cloud.google.com/monitoring/docs/reference/libraries#setting_up_authentication) for the Stackdriver client library. Note that if you run the test app on a Google Cloud Compute Engine instance, on Google Kubernetes Engine, on App Engine, or on Cloud Run, then you will not need to create a service account or download the credentials. First, set the project id in a shell variable ```shell export GOOGLE_CLOUD_PROJECT=[your project] ``` Create a service account: ```shell SA_NAME=stackdriver-metrics-writer gcloud iam service-accounts create $SA_NAME \ --display-name="Stackdriver Metrics Writer" ``` ```shell SA_ID="$SA_NAME@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \ --member "serviceAccount:$SA_ID" --role "roles/monitoring.metricWriter" ``` Generate a credentials file with an exported variable GOOGLE_APPLICATION_CREDENTIALS referring to it. ```shell mkdir -p ~/.auth chmod go-rwx ~/.auth export GOOGLE_APPLICATION_CREDENTIALS=~/.auth/stackdriver_demo_credentials.json gcloud iam service-accounts keys create $GOOGLE_APPLICATION_CREDENTIALS \ --iam-account $SA_ID ``` Run the program with three partitions, labeled "1", "2", and "3" ```shell ./alert-absence-demo --labels "1,2,3" ``` This will write three time series with the given labels. A few minutes after starting the app you should be able to see the time series data in the Stackdriver Metric explorer. 
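The demo app itself is written in Go, but the idea of one time series per partition label can be illustrated with a short Python sketch using the `google-cloud-monitoring` client. The metric name and values below are made up and are not the app's `task_latency_distribution` metric.

```python
# Illustrative only: writes a simple gauge-style custom metric with one
# time series per "partition" label, mimicking the demo app's behaviour.
import time

from google.cloud import monitoring_v3  # pip install google-cloud-monitoring

PROJECT_ID = "your-project"  # placeholder
client = monitoring_v3.MetricServiceClient()
project_name = f"projects/{PROJECT_ID}"

for partition in ["1", "2", "3"]:
    series = monitoring_v3.TimeSeries()
    series.metric.type = "custom.googleapis.com/demo/heartbeat"  # made-up metric
    series.metric.labels["partition"] = partition
    series.resource.type = "global"

    now = time.time()
    seconds = int(now)
    nanos = int((now - seconds) * 10**9)
    interval = monitoring_v3.TimeInterval(
        {"end_time": {"seconds": seconds, "nanos": nanos}}
    )
    point = monitoring_v3.Point({"interval": interval, "value": {"double_value": 1.0}})
    series.points = [point]

    # Each call appends one point to that partition's time series.
    client.create_time_series(name=project_name, time_series=[series])
```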
## Create the policy

Create a notification channel with the command

```shell
EMAIL="your email"
CHANNEL=$(gcloud alpha monitoring channels create \
  --channel-labels=email_address=$EMAIL \
  --display-name="Email to project owner" \
  --type=email \
  --format='value(name)')
```

Create an alert policy with the command

```shell
gcloud alpha monitoring policies create \
  --notification-channels=$CHANNEL \
  --documentation-from-file=policy_doc.md \
  --policy-from-file=alert_policy.json
```

At this point no alerts should be firing. You can check that in the Stackdriver Monitoring console alert policy detail.

## Testing

Kill the process with Ctrl-C and restart it with only two partitions:

```shell
./alert-absence-demo --labels "1,2"
```

An alert should be generated in about 5 minutes with a subject like β€˜One of the timeseries is absent.’ You should be able to see this in the Stackdriver console and you should also receive an email notification.

Restart the process with three partitions:

```shell
./alert-absence-demo --labels "1,2,3"
```

After a few minutes the alert should resolve itself. Stop the process and restart it with only one partition:

```shell
./alert-absence-demo --labels "1"
```

Check that only one alert is fired. Stop the process and do not restart it. You should see an alert that indicates all time series are absent.
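As a programmatic alternative to the `gcloud ... --policy-from-file=alert_policy.json` step above, a metric-absence policy can also be created with the Monitoring client library. The sketch below is an illustration only; the filter, duration and display names are assumptions and do not reproduce the repo's `alert_policy.json`.

```python
# Illustrative sketch of creating a metric-absence alert policy in Python.
# The filter, duration, and names below are assumptions, not the repo's policy.
from google.cloud import monitoring_v3

PROJECT_ID = "your-project"  # placeholder
client = monitoring_v3.AlertPolicyServiceClient()

policy = monitoring_v3.AlertPolicy(
    {
        "display_name": "Timeseries absence demo (sketch)",
        "combiner": "OR",
        "conditions": [
            {
                "display_name": "One of the timeseries is absent",
                "condition_absent": {
                    # Assumed filter: adjust to the metric the app actually writes.
                    "filter": (
                        'metric.type = "custom.googleapis.com/opencensus/task_latency_distribution" '
                        'AND resource.type = "gce_instance"'
                    ),
                    "duration": {"seconds": 300},
                },
            }
        ],
    }
)
created = client.create_alert_policy(name=f"projects/{PROJECT_ID}", alert_policy=policy)
print(created.name)
```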
## Cloud Composer: Ephemeral Dataproc Cluster for Spark Job
### Workflow Overview
***
![Alt text](../img/composer-http-post-arch.png "A diagram illustrating the workflow described below.")

An HTTP POST to the airflow endpoint from an on-prem system is used as a trigger to initiate the workflow.

At a high level the Cloud Composer workflow performs the following steps:
1. Extracts some metadata from the HTTP POST that triggered the workflow.
1. Spins up a Dataproc Cluster.
1. Submits a Spark job that performs the following:
    * Reads newline-delimited JSON data generated by an export from the [nyc-tlc:yellow.trips public BigQuery table](https://bigquery.cloud.google.com/table/nyc-tlc:yellow.trips?pli=1).
    * Enhances the data with an average_speed column.
    * Writes the enhanced data in CSV format to a temporary location in Google Cloud Storage.
1. Tears down the Dataproc cluster and loads these files to BigQuery.
1. Cleans up the temporary path of enhanced data in GCS.

##### 1. Extract metadata from POST:
When there is an HTTP POST to the airflow endpoint, it should contain a payload with the following structure.
```
payload = {
  'run_id': 'post-triggered-run-%s' % datetime.now().strftime('%Y%m%d%H%M%s'),
  'conf': "{'raw_path': raw_path, 'transformed_path': transformed_path}"
}
```
Here, `raw_path` is a timestamped path to the existing raw files in GCS in newline-delimited JSON format, and `transformed_path` is a path with a matching timestamp used to stage the enhanced files before loading to BigQuery.

##### 2 & 3. Spin up a Dataproc cluster and submit the Spark job
The workflow then provisions a Dataproc cluster and submits a Spark job to enhance the data.

##### 4. Move to processed bucket
Based on the status of the Spark job, the workflow will then move the processed files to a Cloud Storage bucket set up to store processed data. A separate folder is created along with a processed date field to hold the files in this bucket.

##### Full code examples
Ready to dive deeper? Check out the [complete code](ephemeral_dataproc_spark_dag.py).
***
#### Setup and Pre-requisites

It is recommended that virtualenv be used to keep everything tidy. The [requirements.txt](requirements.txt) describes the dependencies needed for the code used in this repo.

```bash
virtualenv composer-env
source composer-env/bin/activate
```
The POST will need to be authenticated with [Identity Aware Proxy](https://cloud.google.com/iap/docs/). We recommend doing this by copying the latest version of [make_iap_request.py](https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/iap/make_iap_request.py) from the Google Cloud python-docs-samples repo and using the provided [dag_trigger.py](dag_trigger.py).

```bash
pip install -r ~/professional-services/examples/cloud-composer-examples/requirements.txt
wget https://raw.githubusercontent.com/GoogleCloudPlatform/python-docs-samples/master/iap/requirements.txt -O ~/professional-services/examples/cloud-composer-examples/iap_requirements.txt
pip install -r iap_requirements.txt
wget https://raw.githubusercontent.com/GoogleCloudPlatform/python-docs-samples/master/iap/make_iap_request.py -O ~/professional-services/examples/cloud-composer-examples/composer_http_post_example/make_iap_request.py
```
(Or, if you are on a Mac, you can use curl.)
```bash
# From the cloud-composer-examples directory
pip install -r ~/professional-services/examples/cloud-composer-examples/requirements.txt
curl https://raw.githubusercontent.com/GoogleCloudPlatform/python-docs-samples/master/iap/requirements.txt >> ~/professional-services/examples/cloud-composer-examples/iap_requirements.txt
pip install -r iap_requirements.txt
curl https://raw.githubusercontent.com/GoogleCloudPlatform/python-docs-samples/master/iap/make_iap_request.py >> ~/professional-services/examples/cloud-composer-examples/composer_http_post_example/make_iap_request.py
```

Note that we skipped installing pyspark to keep this example lighter weight to stand up. If you need to test pyspark locally, you should additionally run:

```bash
pip install pyspark>=2.3.1
```

The following high-level steps describe the setup needed to run this example:

0. Set your project information.
```bash
export PROJECT=<REPLACE-THIS-WITH-YOUR-PROJECT-ID>
gcloud config set project $PROJECT
```
1. Create a Cloud Storage (GCS) bucket for receiving input files (*input-gcs-bucket*).
```bash
gsutil mb -c regional -l us-central1 gs://$PROJECT
```
2. Export the public BigQuery Table to a new dataset.
```bash
bq mk ComposerDemo
export EXPORT_TS=`date "+%Y-%m-%dT%H%M%S"` && bq extract \
  --destination_format=NEWLINE_DELIMITED_JSON \
  nyc-tlc:yellow.trips \
  gs://$PROJECT/cloud-composer-lab/raw-$EXPORT_TS/nyc-tlc-yellow-*.json
```
3. Create a Cloud Composer environment - Follow [these](https://cloud.google.com/composer/docs/quickstart) steps to create a Cloud Composer environment if needed (*cloud-composer-env*). We will set these variables in the composer environment.

| Key | Value |Example |
| :--------------------- |:---------------------------------------------- |:--------------------------- |
| gcp_project | *your-gcp-project-id* |cloud-comp-http-demo |
| gcs_bucket | *gcs-bucket-with-raw-files* |cloud-comp-http-demo |
| gce_zone | *compute-engine-zone* |us-central1-b |

```bash
gcloud beta composer environments create demo-ephemeral-dataproc \
  --location us-central1 \
  --zone us-central1-b \
  --machine-type n1-standard-2 \
  --disk-size 20

# Set Airflow Variables in the Composer Environment we just created.
gcloud composer environments run \
  demo-ephemeral-dataproc \
  --location=us-central1 variables -- \
  --set gcp_project $PROJECT

gcloud composer environments run demo-ephemeral-dataproc \
  --location=us-central1 variables -- \
  --set gce_zone us-central1-b

gcloud composer environments run demo-ephemeral-dataproc \
  --location=us-central1 variables -- \
  --set gcs_bucket $PROJECT

gcloud composer environments run demo-ephemeral-dataproc \
  --location=us-central1 variables -- \
  --set bq_output_table $PROJECT:ComposerDemo.nyc-tlc-yellow-trips
```
6. Browse to the Cloud Composer widget in Cloud Console and click on the DAG folder icon as shown below:

![Alt text](../img/dag-folder-example.png "Screen shot showing where to find the DAG folder in the console.")

7. Upload the PySpark code [spark_avg_speed.py](composer_http_post_example/spark_avg_speed.py) into a *spark-jobs* folder in GCS.
```bash
gsutil cp ~/professional-services/examples/cloud-composer-examples/composer_http_post_example/spark_avg_speed.py gs://$PROJECT/spark-jobs/
```
8. The DAG folder is essentially a Cloud Storage bucket.
Upload the [ephemeral_dataproc_spark_dag.py](composer_http_post_example/ephemeral_dataproc_spark_dag.py) file into the folder:
```bash
gsutil cp ~/professional-services/examples/cloud-composer-examples/composer_http_post_example/ephemeral_dataproc_spark_dag.py gs://<dag-folder>/dags
```
***
##### Triggering the workflow
Make sure that you have installed the `iap_requirements.txt` in the steps above.

To use the `dag_trigger.py` script, you will need to create a service account with the necessary permissions to make an IAP request and to use Cloud Composer. This can be achieved with the following commands:

```bash
gcloud iam service-accounts create dag-trigger

# Give service account permissions to create tokens for
# iap requests.
gcloud projects add-iam-policy-binding $PROJECT \
  --member \
  serviceAccount:dag-trigger@$PROJECT.iam.gserviceaccount.com \
  --role roles/iam.serviceAccountTokenCreator

gcloud projects add-iam-policy-binding $PROJECT \
  --member \
  serviceAccount:dag-trigger@$PROJECT.iam.gserviceaccount.com \
  --role roles/iam.serviceAccountActor

# Service account also needs to be authorized to use Composer.
gcloud projects add-iam-policy-binding $PROJECT \
  --member \
  serviceAccount:dag-trigger@$PROJECT.iam.gserviceaccount.com \
  --role roles/composer.user

# We need a service account key to trigger the dag.
gcloud iam service-accounts keys create ~/$PROJECT-dag-trigger-key.json \
  --iam-account=dag-trigger@$PROJECT.iam.gserviceaccount.com

# Finally, use this as your application credentials by setting the environment
# variable on the machine where you will run `dag_trigger.py`.
export GOOGLE_APPLICATION_CREDENTIALS=~/$PROJECT-dag-trigger-key.json
```

To trigger this workflow, use [dag_trigger.py](dag_trigger.py), which takes 3 arguments as shown below:

```bash
python dag_trigger.py \
  --url=<airflow endpoint url> \
  --iapClientId=<client id> \
  --raw_path=<path to raw files for enhancement in GCS>
```

The endpoint for triggering the DAG has the following structure: `https://<airflow web server url>/api/experimental/dags/<dag-id>/dag_runs`. In this case our dag-id is `average-speed`. The airflow webserver URL can be found once your Composer environment is set up by clicking on your environment in the console and checking here:

![Alt text](../img/airflow-ui.png "Screen Shot showing how to get the airflow URL")

In order to obtain your `--iapClientId`, visit the Airflow URL https://YOUR_UNIQUE_ID.appspot.com (which you noted in the last step) in an incognito window, *don't* authenticate or log in, and the first landing page for IAP Auth will have the client ID in the URL in the address bar:

https://accounts.google.com/signin/oauth/identifier?**client_id=00000000000-xxxx0x0xx0xx00xxxx0x00xxx0xxxxx.apps.googleusercontent.com**&as=a6VGEPwFpCL1qIwusi49IQ&destination=https%3A%2F%2Fh0b798498b93687a6-tp.appspot.com&approval_state=!ChRKSmd1TVc1VlQzMDB3MHI2UGI4SxIfWXhaRjJLcWdwcndRVUU3MWpGWk5XazFEbUp6N05SWQ%E2%88%99AB8iHBUAAAAAWvsaqTGCmRazWx9NqQtnYVOllz0r2x_i&xsrfsig=AHgIfE_o0kxXt6N3ch1JH4Fb19CB7wdbMg&flowName=GeneralOAuthFlow

***
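For orientation, the following is a simplified, hypothetical sketch of what a trigger script like `dag_trigger.py` does: build the payload described earlier and POST it through IAP using `make_iap_request`. The argument handling and the transformed-path convention here are assumptions, not the repo's actual code.

```python
# Hypothetical sketch of what a DAG-triggering script does; see dag_trigger.py
# in the repo for the real implementation.
import json
from datetime import datetime

from make_iap_request import make_iap_request  # copied from python-docs-samples


def trigger_dag(airflow_url, iap_client_id, raw_path):
    """POSTs a dag_run request to the Airflow experimental API behind IAP."""
    # Assumption: the transformed path mirrors the raw path's timestamp.
    transformed_path = raw_path.replace("raw-", "transformed-")
    payload = {
        "run_id": "post-triggered-run-%s" % datetime.now().strftime("%Y%m%d%H%M%s"),
        "conf": json.dumps({"raw_path": raw_path, "transformed_path": transformed_path}),
    }
    # make_iap_request signs the request with an OIDC token for the IAP client ID
    # and forwards any extra keyword arguments to requests.
    return make_iap_request(airflow_url, iap_client_id, method="POST", data=json.dumps(payload))


if __name__ == "__main__":
    print(trigger_dag(
        "https://<airflow web server url>/api/experimental/dags/average-speed/dag_runs",
        "<client id>",
        "gs://my-bucket/cloud-composer-lab/raw-2018-01-01T000000/",  # made-up path
    ))
```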
## Cloud Composer workflow using Cloud Dataflow
##### This repo contains an example Cloud Composer workflow that triggers Cloud Dataflow to transform, enrich and load a delimited text file into Cloud BigQuery.

The goal of this example is to provide a common pattern to automatically trigger, via Google Cloud Function, a Dataflow job when a file arrives in Google Cloud Storage, process the data and load it into BigQuery.

### Workflow Overview
***
![Alt text](../img/workflow-overview.png "Workflow Overview")

A Cloud Function with a Cloud Storage trigger is used to initiate the workflow when a file is uploaded for processing.

At a high level, the Cloud Composer workflow performs the following steps:
1. Extracts the location of the input file that triggered the workflow.
2. Executes a Cloud Dataflow job that performs the following:
    - Parses the delimited input file and adds some useful 'metadata'
        - 'filename': The name of the file that is processed by the Cloud Dataflow job
        - 'load_dt': The date in YYYY-MM-DD format when the file is processed
    - Loads the data into an existing Cloud BigQuery table (any existing data is truncated)
3. Moves the input file to a Cloud Storage bucket that is set up for storing processed files.

##### 1. Extract the input file location:
When a file is uploaded to the Cloud Storage bucket, a Cloud Function is triggered. This invocation wraps the event information (_bucket and object details_) that triggered this event and passes it to the Cloud Composer workflow that gets triggered. The workflow extracts this information and passes it to the Cloud Dataflow job.
```
job_args = {
    'input': 'gs:///',
    'output': models.Variable.get('bq_output_table'),
    'fields': models.Variable.get('input_field_names'),
    'load_dt': ds_tag
}
```
##### 2. Executes the Cloud Dataflow job
The workflow then executes a [Cloud Dataflow job](dataflow/process_delimited.py) to process the delimited file, add the filename and load_dt fields and load the data into a Cloud BigQuery table.

##### 3. Move to processed bucket
![Alt text](../img/sample-dag.png "DAG Overview")

Based on the status of the Cloud Dataflow job, the workflow will then move the processed files to a Cloud Storage bucket set up to store processed data. A separate folder is created along with a processed date field to hold the files in this bucket.

##### Full code examples
Ready to dive deeper? Check out the complete code [here](simple_load_dag.py)
***
#### Setup and Pre-requisites

It is recommended that virtualenv be used to keep everything tidy. The [requirements.txt](requirements.txt) describes the dependencies needed for the code used in this repo.

The following high-level steps describe the setup needed to run this example:

1. Create a Cloud Storage (GCS) bucket for receiving input files (*input-gcs-bucket*).
2. Create a GCS bucket for storing processed files (*output-gcs-bucket*).
3. Create a Cloud Composer environment - Follow [these](https://cloud.google.com/composer/docs/quickstart) steps to create a Cloud Composer environment if needed (*cloud-composer-env*).
4. Create a Cloud BigQuery table for the processed output. The following schema is used for this example:

    |Column Name | Column Type|
    |:-----------|:-----------|
    |state |STRING |
    |gender |STRING |
    |year |STRING |
    |name |STRING |
    |number |STRING |
    |created_date|STRING |
    |filename |STRING |
    |load_dt |DATE |

5.
Set the following [Airflow variables](https://airflow.apache.org/docs/stable/concepts.html#variables) needed for this example:

   | Key                    | Value                                            | Example                       |
   | :--------------------- |:------------------------------------------------ |:----------------------------- |
   | gcp_project            | *your-gcp-project-id*                            | cloud-comp-df-demo            |
   | gcp_temp_location      | *gcs-bucket-for-dataflow-temp-files*             | gs://my-comp-df-demo-temp/tmp |
   | gcs_completion_bucket  | *output-gcs-bucket*                              | my-comp-df-demp-output        |
   | input_field_names      | *comma-separated-field-names-for-delimited-file* | state,gender,year,name,number,created_date |
   | bq_output_table        | *bigquery-output-table*                          | my_dataset.usa_names          |
   | email                  | *some-email@mycompany.com*                       | some-email@mycompany.com      |

   The variables can be set as follows:

   `gcloud composer environments run` **_cloud-composer-env-name_** `variables -- --set` **_key val_**

6. Browse to the Cloud Composer widget in Cloud Console and click on the DAG folder icon as shown below:

   ![Alt text](../img/dag-folder-example.png "Workflow Overview")

7. The DAG folder is essentially a Cloud Storage bucket. Upload the [simple_load_dag.py](simple_load_dag.py) file into the folder:

   ![Alt text](../img/bucket-example.png "DAG Bucket")

8. Upload the Python Dataflow code [process_delimited.py](dataflow/process_delimited.py) into a *dataflow* folder created in the base DAG folder.
9. Finally, follow [these](https://cloud.google.com/composer/docs/how-to/using/triggering-with-gcf) instructions to create a Cloud Function.
   - Ensure that the **DAG_NAME** property is set to _**GcsToBigQueryTriggered**_, i.e. the DAG name defined in [simple_load_dag.py](simple_load_dag.py).

***

##### Triggering the workflow

The workflow is automatically triggered by the Cloud Function that gets invoked when a new file is uploaded into the *input-gcs-bucket*.

For this example workflow, the [usa_names.csv](resources/usa_names.csv) file can be uploaded into the *input-gcs-bucket*:

`gsutil cp resources/usa_names.csv gs://` **_input-gcs-bucket_**

***
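For reference, the `job_args` block shown in step 1 is built from the event payload that the Cloud Function forwards when it triggers the DAG. Below is a minimal sketch of how that payload is typically read inside an Airflow task; the function name and exact wiring are illustrative, not the repo's actual code.

```python
# Illustrative only: reading the GCS event payload forwarded by the Cloud Function.
from airflow import models

def build_job_args(**context):
    # The Cloud Function triggers the DAG with the event details in dag_run.conf.
    conf = context["dag_run"].conf or {}
    return {
        "input": "gs://{}/{}".format(conf.get("bucket"), conf.get("name")),
        "output": models.Variable.get("bq_output_table"),
        "fields": models.Variable.get("input_field_names"),
        "load_dt": context["ds"],  # execution date in YYYY-MM-DD format
    }
```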
# bigquery-analyze-realtime-reddit-data

## Table Of Contents
1. [Use Case](#use-case)
2. [About](#about)
3. [Architecture](#architecture)
4. [Guide](#guide)
5. [Sample](#sample)

----

## Use-case

Simple deployment of a ([reddit](https://www.reddit.com)) social media data collection architecture on Google Cloud Platform.

----

## About

This repository contains the resources necessary to deploy a basic data stream and data lake on Google Cloud Platform.

Terraform templates deploy the entire infrastructure, which includes a [Google Compute Engine](https://cloud.google.com/compute) VM with an initialization script that clones a [reddit streaming application repository](https://github.com/CYarros10/reddit-streaming-application). The GCE VM executes a python script from that repo. The python script accesses a user's [reddit developer client](https://www.reddit.com/prefs/apps/) and begins to collect reddit comments from a specified list of the top 50 subreddits. As the GCE VM collects reddit comments, it cleans, censors, and analyzes sentiment of each comment. Finally, it pushes the comment to a [Google Cloud Pub/Sub](https://cloud.google.com/pubsub) topic.

A [Cloud Dataflow](https://cloud.google.com/dataflow) job subscribes to the Pub/Sub topic, reads the comments, and writes them to a [Cloud BigQuery](https://cloud.google.com/bigquery) table. The user now has access to an ever-increasing dataset of reddit comments + sentiment analysis.

----

## Architecture

![Stack-Resources](images/architecture.png)

----

## Guide

### 1. Create your reddit bot account

1. [Register a reddit account](https://www.reddit.com/register/)
2. Follow prompts to create new reddit account:
    * Provide email address
    * Choose username and password
    * Click `Finish`
3. Once your account is created, go to [reddit developer console.](https://www.reddit.com/prefs/apps/)
4. Select **β€œare you a developer? Create an app...”**
5. Give it a name.
6. Select script. <--- **This is important!**
7. For about url and redirect uri, use http://127.0.0.1
8. You will now get a client_id (underneath web app) and secret
9. Keep track of your reddit account username, password, app client_id (in blue box), and app secret (in red box). These will be used in tutorial Step 11.

#### Further Learning / References: PRAW

* [PRAW Quick start](https://praw.readthedocs.io/en/latest/getting_started/quick_start.html)

### 2. Run setup.sh

If you need to allow externalIPs, run this command (or similar) in your project:

```bash
echo "{
  \"constraint\": \"constraints/compute.vmExternalIpAccess\",
  \"listPolicy\": {
    \"allValues\": \"ALLOW\"
  }
}" > external_ip_policy.json

gcloud resource-manager org-policies set-policy external_ip_policy.json --project="$projectId"
```

```bash
./scripts/setup.sh -i <project-id> -r <region> -c <reddit-client-id> -u <reddit-user>
```

### 4. Wait for data collection

The VM will take a minute or two to set up. Then comments will start to flow into BigQuery in near real time!

### 5. Query your data using BigQuery

**example:**

```sql
select subreddit, author, comment_text, sentiment_score
from reddit.comments_raw
order by sentiment_score desc
limit 25;
```

----

## Sample

### Example of a Collected+Analyzed reddit Comment:

```json
{
  "comment_id": "fx3wgci",
  "subreddit": "Fitness",
  "author": "silverbird666",
  "comment_text": "well, i dont exactly count my calories, but i run on a competitive base and do kickboxing, that stuff burns quite much calories. 
i just stick to my established diet, and supplement with protein bars and shakes whenever i fail to hit my daily intake of protein. works for me.", "distinguished": null, "submitter": false, "total_words": 50, "reading_ease_score": 71.44, "reading_ease": "standard", "reading_grade_level": "7th and 8th grade", "sentiment_score": -0.17, "censored": 0, "positive": 0, "neutral": 1, "negative": 0, "subjectivity_score": 0.35, "subjective": 0, "url": "https://reddit.com/r/Fitness/comments/hlk84h/victory_sunday/fx3wgci/", "comment_date": "2020-07-06 15:41:15", "comment_timestamp": "2020/07/06 15:41:15", "comment_hour": 15, "comment_year": 2020, "comment_month": 7, "comment_day": 6 } ```
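As described in the About and Architecture sections, each cleaned and analyzed comment is published to a Pub/Sub topic before Dataflow streams it into BigQuery. Below is a minimal publishing sketch, assuming the google-cloud-pubsub Python client; the project and topic names are placeholders, and the linked streaming application may be structured differently.

```python
# Sketch: publish one analyzed comment (like the sample above) to Pub/Sub as JSON.
import json
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "reddit-comments")  # placeholders

def publish_comment(comment: dict) -> None:
    data = json.dumps(comment).encode("utf-8")
    publisher.publish(topic_path, data=data)
```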
# Using Batching in Cloud Pub/Sub Java client API. This example provides guidance on how to use [Pub/Sub's Java client API](https://cloud.google.com/pubsub/docs/reference/libraries) to batch records that are published to a Pub/Sub topic. Using [BatchingSettings](http://googleapis.github.io/gax-java/1.4.1/apidocs/com/google/api/gax/batching/BatchingSettings.html) correctly allows us to better utilize the available resources (cpu, memory, network bandwidth) on the client machine and to improve throughput. In addition to BatchingSettings, this code sample also demonstrates the use of [Avro](http://avro.apache.org/docs/current/) to encode the messages instead of using JSON strings. In order to demonstrate this process end-to-end, we also provided a simple Dataflow pipeline that reads these Avro messages from a Pub/Sub topic and streams those records into a BigQuery table. ## Components [ObjectPublisher](src/main/java/com/google/cloud/pso/pubsub/common/ObjectPublisher.java) - A generic publisher class that can be used to publish any object as a payload to Cloud Pub/Sub. This publisher class provides various configurable parameters for controlling the [BatchingSettings](http://googleapis.github.io/gax-java/1.4.1/apidocs/com/google/api/gax/batching/BatchingSettings.html) for the publishing client. [EmployeePublisher](src/main/java/com/google/cloud/pso/pubsub/EmployeePublisherMain.java) - An implementation of the [ObjectPublisher](src/main/java/com/google/cloud/pso/pubsub/common/ObjectPublisher.java) to publish Avro encoded test records to Cloud Pub/Sub. This will generate random test records using the sample [Employee Avro Schema](src/main/avro/employee.avsc). [IngestionMain](src/main/java/com/google/cloud/pso/pipeline/IngestionMain.java) - A sample Dataflow pipeline to read the Avro encoded test records and stream them into BigQuery. ### Requirements * Java 8 * Maven 3 * Cloud Pub/Sub topic and subscription * The Pub/Sub topic will be used by the client to publish messages. * The Pub/Sub subscription on the same topic will be used by the Dataflow job to read the messages. * BigQuery table to stream records into - The table schema should match the Avro schema of the messages. ### Building the Project Build the entire project using the maven compile command. ```sh mvn clean compile ``` ### Running unit tests Run the unit tests using the maven test command. ```sh mvn clean test ``` ### Publishing sample records to Cloud Pub/Sub Publish sample [Employee](src/main/avro/employee.avsc) records using the maven exec command. ```sh mvn compile exec:java \ -Dexec.mainClass=com.google.cloud.pso.pubsub.EmployeePublisherMain \ -Dexec.cleanupDaemonThreads=false \ -Dexec.args=" \ --topic <output-pubsub-topic> --numberOfMessages <number-of-messages>" ``` There are several other optional parameters related to batch settings. These parameters can be viewed by passing the help flag. ```sh mvn compile exec:java \ -Dexec.mainClass=com.google.cloud.pso.pubsub.EmployeePublisherMain \ -Dexec.cleanupDaemonThreads=false \ -Dexec.args="--help" Usage: com.google.cloud.pso.pubsub.EmployeePublisherMain [options] * - Required parameters Options: --bytesThreshold, -b Batch threshold bytes. Default: 1024 --delayThreshold, -d Delay threshold in milliseconds. Default: PT0.5S --elementCount, -c The number of elements to be batched in each request. 
Default: 500 --help, -h --numberOfMessages, -n Number of sample messages to publish to Pub/Sub Default: 100000 * --topic, -t The Pub/Sub topic to write messages to ``` ### Streaming Cloud Pub/Sub messages into BigQuery using Dataflow The [IngestionMain](src/main/java/com/google/cloud/pso/pipeline/IngestionMain.java) pipeline provides a simple example of consuming the Avro records that were published into Cloud Pub/Sub and streaming those records into a BigQuery table. This pipeline example can be compiled and executed using the maven exec command. ```sh mvn compile exec:java \ -Dexec.mainClass=com.google.cloud.pso.pipeline.IngestionMain \ -Dexec.cleanupDaemonThreads=false \ -Dexec.args=" \ --project=<my-gcp-project> \ --stagingLocation=<my-gcs-staging-bucket> \ --tempLocation=<my-gcs-temp-bucket> \ --runner=DataflowRunner \ --autoscalingAlgorithm=THROUGHPUT_BASED \ --maxNumWorkers=<max-num-workers> \ --subscription=<my-input-pubsub-subscription> \ --tableId=<my-output-bigquery-table>" ``` ### Authentication These examples use the [Google client libraries to implicitly determine the credentials used][1]. It is strongly recommended that a Service Account with appropriate permissions be used for accessing the resources in Google Cloud Platform Project. [1]: https://cloud.google.com/docs/authentication/getting-started
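The batching knobs discussed above (element count, byte threshold, delay threshold) are not Java-specific. For quick experimentation, the equivalent settings in the Python client look like the sketch below; the values simply mirror the defaults printed by the `--help` output above and are not taken from this repo's code.

```python
# Sketch: Python-client equivalent of the Java BatchingSettings knobs.
from google.cloud import pubsub_v1

batch_settings = pubsub_v1.types.BatchSettings(
    max_messages=500,   # elementCount
    max_bytes=1024,     # bytesThreshold
    max_latency=0.5,    # delayThreshold, in seconds
)
publisher = pubsub_v1.PublisherClient(batch_settings=batch_settings)
```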
# Dataproc lifecycle management orchestrated by Composer

Writing DAGs by hand isn't practical when you have multiple DAGs that run similar Dataproc Jobs and want to share clusters efficiently, with just some parameters changing between them. In that case it makes sense to generate DAGs dynamically.

Using this project you can deploy multiple Composer/Airflow DAGs from a single configuration: you create configuration files from which DAGs are automatically generated during deployment _(you can also use it to generate and upload DAGs manually)_.

Each DAG will execute a single Dataproc cluster Job referenced in a DAG configuration file, and that Job will be executed in a Dataproc Cluster that can be reused for multiple Jobs/DAGs. Dataproc cluster lifecycle management is handled by the automatically generated Airflow DAGs, which reuse or create clusters as needed _(the proposed cluster configuration includes the use of a scaling policy)_.

This approach aims to use resources efficiently while minimizing provisioning and execution time.

This is the high-level diagram:

![](images/dataproc_lifecycle.png)

## Prerequisites

1. This blueprint will deploy all its resources into the project defined by the `project_id` variable. Please note that we assume this project already exists.
2. The user deploying the project _(executing terraform plan/apply)_ should have admin permissions in the selected project, or permission to create all the resources defined in the Terraform scripts.

## Project Folder Structure

```bash
main.tf
...
dags/ (Autogenerated on Terraform Plan/Apply from /dag_config/ files)
β”œβ”€β”€ ephemeral_cluster_job_1.py
β”œβ”€β”€ ephemeral_cluster_job_2.py
jobs/
β”œβ”€β”€ hello_world_spark.py
β”œβ”€β”€ ... (Add your dataproc jobs here)
include/
β”œβ”€β”€ dag_template.py
β”œβ”€β”€ generate_dag_files.py
└── dag_config
    β”œβ”€β”€ dag1_config.json
    └── dag2_config.json
    └── ... (Add your Composer/Airflow DAGs configuration here)
...
```

## Adding Jobs

#### Prepare Dataproc Jobs to be executed

1. Clone this repository
2. Locate your Dataproc jobs in the **/jobs/** folder in your local environment

#### Prepare Composer Dags to be deployed

3. Locate your DAG configuration files in the **/include/dag_config/** folder in your local environment. DAG configuration files have the following variables:

```json
{
    "DagId": "ephemeral_cluster_job_1",      --DAG name you will see in the Airflow environment
    "Schedule": "'@daily'",                  --DAG schedule
    "ClusterName":"ephemeral-cluster-test",  --Dataproc cluster to be used/created for this DAG/Job to be executed in
    "StartYear":"2022",                      --DAG start year
    "StartMonth":"9",                        --DAG start month
    "StartDay":"13",                         --DAG start day
    "Catchup":"False",                       --DAG backfill catchup
    "ClusterMachineType":"n1-standard-4",    --Dataproc machine type to be used by master and worker cluster nodes
    "ClusterIdleDeleteTtl":"300",            --Time in seconds to delete an unused Dataproc cluster
    "SparkJob":"hello_world_spark.py"        --Spark Job to be executed by the DAG; should be placed in the /jobs/ folder of this project (for other types of Dataproc jobs, modify dag_template.py)
}
```

4. (Optional) You can run `python3 include/generate_dag_files.py` in your local environment if you want to review the generated DAGs before deploying them (TF plan/apply).

## Deployment

1. Set Google Cloud Platform credentials on local environment: https://cloud.google.com/source-repositories/docs/authentication
2. You must supply at least the `project_id` variable in order to deploy the project. Default Terraform variables and example values are in the `varibles.tf` file.
3. 
Run Terraform Plan/Apply

```bash
$ cd terraform/
$ terraform init
$ terraform plan
$ terraform apply

## Optionally, variables can be used
$ terraform apply -var 'project_id=<PROJECT_ID>' \
  -var 'region=<REGION>'
```

Once you have applied the Terraform plan for the first time and the Composer environment is running, you can run _terraform plan/apply_ again after adding new DAG configuration files to generate and deploy those DAGs to the existing environment.

The first time it is deployed, resource creation will take several minutes (up to 40) because of Composer environment provisioning; on successful completion you should see a list of the created resources.

## Running DAGs

DAGs will run as per the **Schedule**, **StartDate**, and **Catchup** settings in the DAG configuration file, or can be triggered manually through the Airflow web console after deployment.

![](images/dag_execution_example.png)

## Shared VPC

The example supports the configuration of a Shared VPC as an input variable. To deploy the solution on a Shared VPC, you have to configure the `network_config` variable:

```
network_config = {
    host_project      = "PROJECT_ID"
    network_self_link = "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/VPC_NAME"
    subnet_self_link  = "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/$REGION/subnetworks/SUBNET_NAME"
    name              = "VPC_NAME"
}
```

If a Shared VPC is used, [firewall rules](https://cloud.google.com/composer/docs/composer-2/create-environments) would be needed in the host project.

### TODO

- Add support for CMEK (Composer and Dataproc)
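For readers curious about what the DAG generation step does, it boils down to rendering `include/dag_template.py` once per JSON file in `include/dag_config/`. The sketch below illustrates the idea only; the placeholder syntax and output paths are assumptions, not necessarily what `generate_dag_files.py` actually does.

```python
# Illustrative config-driven DAG generation; placeholder syntax and paths are assumed.
import json
import pathlib

template = pathlib.Path("include/dag_template.py").read_text()

for config_file in sorted(pathlib.Path("include/dag_config").glob("*.json")):
    params = json.loads(config_file.read_text())
    dag_source = template
    for key, value in params.items():
        # Replace placeholders such as "{{ DagId }}" with the configured value.
        dag_source = dag_source.replace("{{ %s }}" % key, str(value))
    pathlib.Path("dags", params["DagId"] + ".py").write_text(dag_source)
```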
# dataproc-job-optimization-guide

----

## Table Of Contents

1. [About](#About)
2. [Guide](#Guide)
3. [Results](#Results)
4. [Next Steps](#Next-steps)

----

## About

This guide is designed to optimize performance and cost of applications running on Dataproc clusters. Because Dataproc supports many big data technologies - each with their own intricacies - this guide is intended to be used for trial-and-error experimentation. It begins with a generic Dataproc cluster with defaults set. As you proceed through the guide, you'll increasingly customize Dataproc cluster configurations to fit your specific workload.

Plan to separate Dataproc Jobs into different clusters - they use resources differently and can impact each other's performance when run simultaneously. Even better, isolating single jobs to single clusters can set you up for ephemeral clusters, where jobs can run in parallel on their own dedicated resources.

Once your job is running successfully, you can safely iterate on the configuration to improve runtime and cost, falling back to the last successful run whenever experimental changes have a negative impact.

----

## Guide

### 1. Getting Started

Fill in your environment variables and run the following code in a terminal to set up your Google Cloud project.

```bash
export PROJECT_ID=""
export REGION=""
export CLUSTER_NAME=""
export BUCKET_NAME=""
TIMESTAMP=$(date "+%Y-%m-%dT%H%M%S")
export TIMESTAMP

./scripts/setup.sh -p $PROJECT_ID -r $REGION -c $CLUSTER_NAME -b $BUCKET_NAME -t $TIMESTAMP
```

This script will:

1. Set up the project and enable APIs
2. Remove any old infrastructure related to this project (in case of previous runs)
3. Create a Google Cloud Storage bucket and a BigQuery Dataset
4. Load public data into a personal BigQuery Dataset
5. Import autoscaling policies
6. Create the first Dataproc sizing cluster

**Monitoring Dataproc Jobs**

![Stack-Resources](images/monitoring-jobs.png)

### 2. Calculate Dataproc cluster size

A sizing cluster can help determine the right number of workers for your application. This cluster will have an autoscaling policy attached. Set the autoscaling policy min/max values to whatever is allowed in your project. Run your jobs on this cluster. Autoscaling will continue to add nodes until the YARN pending memory metric is zero. A perfectly sized cluster should never have YARN pending memory.

```bash
gsutil -m rm -r gs://$BUCKET_NAME/transformed-$TIMESTAMP

gcloud dataproc jobs submit pyspark --region=$REGION --cluster=$CLUSTER_NAME-sizing scripts/spark_average_speed.py -- gs://$BUCKET_NAME/raw-$TIMESTAMP/ gs://$BUCKET_NAME/transformed-$TIMESTAMP/
```

![Stack-Resources](images/monitoring-nodemanagers.png)

### 3. Optimize Dataproc cluster configuration

Using a non-autoscaling cluster during this experimentation phase can lead to the discovery of more accurate machine-types, persistent disks, application properties, etc. For now, build an isolated non-autoscaling cluster for your job that has the optimized number of primary workers.

Run your job on this appropriately-sized non-autoscaling cluster. If the CPU is maxing out, consider using a C2 machine type. If memory is maxing out, consider using N2D-highmem machine types. Also consider increasing the machine cores (while maintaining the overall core count observed during the sizing phase).
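For context, the job submitted in each benchmark run of this guide is `scripts/spark_average_speed.py`, invoked with a raw input path and a transformed output path. The sketch below is only an illustration of what such a read-aggregate-write PySpark job might look like; the column names and aggregation are assumptions, not the repository's actual code.

```python
# Illustrative PySpark job: read delimited data from GCS, compute an average
# speed per group, and write the result back to GCS. Columns are assumed.
import sys

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

input_path, output_path = sys.argv[1], sys.argv[2]

spark = SparkSession.builder.appName("spark_average_speed").getOrCreate()

df = spark.read.option("header", True).csv(input_path)
result = df.groupBy("station_id").agg(F.avg("speed_mph").alias("avg_speed_mph"))
result.write.mode("overwrite").csv(output_path)

spark.stop()
```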
**Monitoring CPU Utilization** ![Stack-Resources](images/monitoring-cpu.png) **Monitoring YARN Memory** ![Stack-Resources](images/monitoring-yarn-memory.png) **8 x n2-standard-2 = 1 min 53 seconds** ```bash gcloud dataproc clusters create $CLUSTER_NAME-testing-2x8-standard \ --master-machine-type=n2-standard-2 \ --worker-machine-type=n2-standard-2 \ --num-workers=8 \ --master-boot-disk-type=pd-standard \ --master-boot-disk-size=1000GB \ --worker-boot-disk-type=pd-standard \ --worker-boot-disk-size=1000GB \ --region=$REGION gsutil -m rm -r gs://$BUCKET_NAME/transformed-$TIMESTAMP gcloud dataproc jobs submit pyspark --region=$REGION --cluster=$CLUSTER_NAME-testing-2x8-standard scripts/spark_average_speed.py -- gs://$BUCKET_NAME/raw-$TIMESTAMP/ gs://$BUCKET_NAME/transformed-$TIMESTAMP/ ``` **4 x n2-standard-4 = 1 min 48 seconds** ```bash gcloud dataproc clusters delete $CLUSTER_NAME-testing-2x8-standard \ --region=$REGION gcloud dataproc clusters create $CLUSTER_NAME-testing-4x4-standard \ --master-machine-type=n2-standard-4 \ --worker-machine-type=n2-standard-4 \ --num-workers=4 \ --master-boot-disk-type=pd-standard \ --master-boot-disk-size=1000GB \ --worker-boot-disk-type=pd-standard \ --worker-boot-disk-size=1000GB \ --region=$REGION gsutil -m rm -r gs://$BUCKET_NAME/transformed-$TIMESTAMP gcloud dataproc jobs submit pyspark --region=$REGION --cluster=$CLUSTER_NAME-testing-4x4-standard scripts/spark_average_speed.py -- gs://$BUCKET_NAME/raw-$TIMESTAMP/ gs://$BUCKET_NAME/transformed-$TIMESTAMP/ ``` **2 x n2-standard-8 = 1 min 31 seconds** ```bash gcloud dataproc clusters delete $CLUSTER_NAME-testing-4x4-standard \ --region=$REGION gcloud dataproc clusters create $CLUSTER_NAME-testing-8x2-standard \ --master-machine-type=n2-standard-8 \ --worker-machine-type=n2-standard-8 \ --num-workers=2 \ --master-boot-disk-type=pd-standard \ --master-boot-disk-size=1000GB \ --worker-boot-disk-type=pd-standard \ --worker-boot-disk-size=1000GB \ --region=$REGION gsutil -m rm -r gs://$BUCKET_NAME/transformed-$TIMESTAMP gcloud dataproc jobs submit pyspark --region=$REGION --cluster=$CLUSTER_NAME-testing-8x2-standard scripts/spark_average_speed.py -- gs://$BUCKET_NAME/raw-$TIMESTAMP/ gs://$BUCKET_NAME/transformed-$TIMESTAMP/ ``` If you’re still observing performance issues, consider moving from pd-standard to pd-balanced or pd-ssd. - Standard persistent disks (pd-standard) are suited for large data processing workloads that primarily use sequential I/Os. - Balanced persistent disks (pd-balanced) are an alternative to SSD persistent disks that balance performance and cost. With the same maximum IOPS as SSD persistent disks and lower IOPS per GB, a balanced persistent disk offers performance levels suitable for most general-purpose applications at a price point between that of standard and SSD persistent disks. - SSD persistent disks (pd-ssd) are suited for enterprise applications and high-performance database needs that require lower latency and more IOPS than standard persistent disks provide. For similar costs, pd-standard 1000GB == pd-balanced 500GB == pd-ssd 250 GB. 
![Stack-Resources](images/monitoring-hdfs.png) **2 x n2-standard-8-balanced = 1 min 26 seconds** ```bash gcloud dataproc clusters delete $CLUSTER_NAME-testing-8x2-standard \ --region=$REGION gcloud dataproc clusters create $CLUSTER_NAME-testing-8x2-balanced \ --master-machine-type=n2-standard-8 \ --worker-machine-type=n2-standard-8 \ --num-workers=2 \ --master-boot-disk-type=pd-balanced \ --master-boot-disk-size=500GB \ --worker-boot-disk-type=pd-balanced \ --worker-boot-disk-size=500GB \ --region=$REGION gsutil -m rm -r gs://$BUCKET_NAME/transformed-$TIMESTAMP gcloud dataproc jobs submit pyspark --region=$REGION --cluster=$CLUSTER_NAME-testing-8x2-balanced scripts/spark_average_speed.py -- gs://$BUCKET_NAME/raw-$TIMESTAMP/ gs://$BUCKET_NAME/transformed-$TIMESTAMP/ ``` **2 x n2-standard-8-ssd = 1 min 21 seconds** ```bash gcloud dataproc clusters delete $CLUSTER_NAME-testing-8x2-balanced \ --region=$REGION gcloud dataproc clusters create $CLUSTER_NAME-testing-8x2-ssd \ --master-machine-type=n2-standard-8 \ --worker-machine-type=n2-standard-8 \ --num-workers=2 \ --master-boot-disk-type=pd-ssd \ --master-boot-disk-size=250GB \ --worker-boot-disk-type=pd-ssd \ --worker-boot-disk-size=250GB \ --region=$REGION gsutil -m rm -r gs://$BUCKET_NAME/transformed-$TIMESTAMP gcloud dataproc jobs submit pyspark --region=$REGION --cluster=$CLUSTER_NAME-testing-8x2-ssd scripts/spark_average_speed.py -- gs://$BUCKET_NAME/raw-$TIMESTAMP/ gs://$BUCKET_NAME/transformed-$TIMESTAMP/ ``` Monitor HDFS Capacity to determine disk size. If this ever drops to zero, you’ll need to increase the persistent disk size. If HDFS Capacity is too large for this job, consider lowering the disk size to save on storage costs. **2 x n2-standard-8-ssd-costop = 1 min 18 seconds** ```bash gcloud dataproc clusters delete $CLUSTER_NAME-testing-8x2-ssd \ --region=$REGION gcloud dataproc clusters create $CLUSTER_NAME-testing-8x2-ssd-costop \ --master-machine-type=n2-standard-8 \ --worker-machine-type=n2-standard-8 \ --num-workers=2 \ --master-boot-disk-type=pd-ssd \ --master-boot-disk-size=30GB \ --worker-boot-disk-type=pd-ssd \ --worker-boot-disk-size=30GB \ --region=$REGION gsutil -m rm -r gs://$BUCKET_NAME/transformed-$TIMESTAMP gcloud dataproc jobs submit pyspark --region=$REGION --cluster=$CLUSTER_NAME-testing-8x2-ssd-costop scripts/spark_average_speed.py -- gs://$BUCKET_NAME/raw-$TIMESTAMP/ gs://$BUCKET_NAME/transformed-$TIMESTAMP/ ``` ### 4. Optimize application-specific properties If you’re still observing performance issues, you can begin to adjust application properties. Ideally these properties are set on the job submission. This isolates properties to their respective jobs. Since this job runs on Spark, view the [tuning guide here.](https://cloud.google.com/dataproc/docs/support/spark-job-tuning) Since this guide uses a simple spark application and small amount of data, you may not see job performance improvement. This section is more applicable for larger use-cases. 
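The `spark.executor.*` values in the sample submission below are tied to the n2-standard-8 worker shape (8 vCPUs, 32 GB). One common sizing heuristic is sketched here; the fractions are rules of thumb and this is not necessarily how the guide's exact numbers were derived.

```python
# Rough executor-sizing heuristic for a single worker node. The fractions used
# here are common rules of thumb, not values taken from this guide.
def executor_sizing(node_vcpus, node_mem_mb, cores_per_executor=5,
                    yarn_mem_fraction=0.8, overhead_fraction=0.10):
    executors_per_node = max(1, (node_vcpus - 1) // cores_per_executor)
    usable_mem = int(node_mem_mb * yarn_mem_fraction) // executors_per_node
    overhead = int(usable_mem * overhead_fraction)
    return {
        "spark.executor.cores": cores_per_executor,
        "spark.executor.memory": f"{usable_mem - overhead}m",
        "spark.executor.memoryOverhead": f"{overhead}m",
    }

# n2-standard-8: 8 vCPUs, 32 GB of RAM.
print(executor_sizing(8, 32 * 1024))
```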
sample job submit:

**2 x n2-standard-8-ssd-costop-appop = 1 min 15 seconds**

```bash
gsutil -m rm -r gs://$BUCKET_NAME/transformed-$TIMESTAMP

gcloud dataproc jobs submit pyspark --region=$REGION --cluster=$CLUSTER_NAME-testing-8x2-ssd-costop scripts/spark_average_speed.py --properties='spark.executor.cores=5,spark.driver.cores=5,spark.executor.instances=1,spark.executor.memory=25459m,spark.driver.memory=25459m,spark.executor.memoryOverhead=2829m,spark.default.parallelism=10,spark.sql.shuffle.partitions=10,spark.shuffle.spill.compress=true,spark.checkpoint.compress=true,spark.io.compression.codec=snappy,spark.dynamicAllocation.enabled=true,spark.shuffle.service.enabled=true' -- gs://$BUCKET_NAME/raw-$TIMESTAMP/ gs://$BUCKET_NAME/transformed-$TIMESTAMP/
```

### 5. Handle edge-case workload spikes via an autoscaling policy

Now that you have an optimally sized, configured, tuned cluster, you can choose to introduce autoscaling. This should NOT be seen as a cost-optimization technique. But it can improve performance during the edge cases that require more worker nodes. Use ephemeral clusters (see step 6) to allow clusters to scale up, and delete them when the job or workflow is complete. Downscaling may not be necessary on ephemeral, job/workflow-scoped clusters.

- Ensure primary workers make up >50% of your cluster. Do not scale primary workers.
    - This does increase cost versus a smaller number of primary workers, but this is a tradeoff you can make; stability versus cost.
    - Note: Having too many secondary workers can create job instability. This is a tradeoff you can choose to make as you see fit, but best practice is to avoid having the majority of your workers be secondary.
- Prefer ephemeral clusters where possible.
    - Allow these to scale up, but not down, and delete them when jobs are complete.
    - Set scaleDownFactor to 0.0 for ephemeral clusters.

[Autoscaling clusters | Dataproc Documentation | Google Cloud](https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/autoscaling#how_autoscaling_works)

sample template:

```yaml
workerConfig:
  minInstances: 2
  maxInstances: 2
secondaryWorkerConfig:
  minInstances: 0
  maxInstances: 10
basicAlgorithm:
  cooldownPeriod: 5m
  yarnConfig:
    scaleUpFactor: 1.0
    scaleDownFactor: 0
    gracefulDecommissionTimeout: 0s
```

### 6. Optimize cost and reusability via ephemeral Dataproc clusters

There are several key advantages of using ephemeral clusters:

- You can use different cluster configurations for individual jobs, eliminating the administrative burden of managing tools across jobs.
- You can scale clusters to suit individual jobs or groups of jobs.
- You only pay for resources when your jobs are using them.
- You don't need to maintain clusters over time, because they are freshly configured every time you use them.
- You don't need to maintain separate infrastructure for development, testing, and production. You can use the same definitions to create as many different versions of a cluster as you need when you need them.
sample workflow template:

```yaml
jobs:
- pysparkJob:
    args:
    - "gs://%%BUCKET_NAME%%/raw-%%TIMESTAMP%%/"
    - "gs://%%BUCKET_NAME%%/transformed-%%TIMESTAMP%%/"
    mainPythonFileUri: gs://%%BUCKET_NAME%%/scripts/spark_average_speed.py
  stepId: spark_average_speed
placement:
  managedCluster:
    clusterName: final-cluster-wft
    config:
      gceClusterConfig:
        zoneUri: %%REGION%%-a
      masterConfig:
        diskConfig:
          bootDiskSizeGb: 30
          bootDiskType: pd-ssd
        machineTypeUri: n2-standard-8
        minCpuPlatform: AUTOMATIC
        numInstances: 1
        preemptibility: NON_PREEMPTIBLE
      workerConfig:
        diskConfig:
          bootDiskSizeGb: 30
          bootDiskType: pd-ssd
        machineTypeUri: n2-standard-8
        minCpuPlatform: AUTOMATIC
        numInstances: 2
        preemptibility: NON_PREEMPTIBLE
```

sample workflow template execution:

```bash
gcloud dataproc workflow-templates instantiate-from-file \
  --file templates/final-cluster-wft.yml \
  --region $REGION
```

----

## Results

Even in this small-scale example, job performance improved by 71% (265 seconds -> 75 seconds). And with a properly sized ephemeral cluster, you only pay for what is necessary.

![Stack-Resources](images/monitoring-job-progress.png)

----

## Next-Steps

To continue striving for optimal performance, please review and consider the guidance laid out in the Google Cloud Blog.

- [Dataproc best practices | Google Cloud Blog](https://cloud.google.com/blog/topics/developers-practitioners/dataproc-best-practices-guide)
- [7 best practices for running Cloud Dataproc in production | Google Cloud Blog](https://cloud.google.com/blog/products/data-analytics/7-best-practices-for-running-cloud-dataproc-in-production)
GCP
dataproc job optimization guide Table Of Contents 1 About About 2 Guide Guide 3 Results Results 4 Next Steps Next steps About This guide is designed to optimize performance and cost of applications running on Dataproc clusters Because Dataproc supports many big data technologies each with their own intricacies this guide is intended to be trial and error experimentation Initially it will begin with a generic dataproc cluster with defaults set As you proceed through the guide you ll increasingly customize Dataproc cluster configurations to fit your specific workload Plan to separate Dataproc Jobs into different clusters they use resources differently and can impact each other s performances when run simultaneously Even better isolating single jobs to single clusters can set you up for ephemeral clusters where jobs can run in parallel on their own dedicated resources Once your job is running successfully you can safely iterate on the configuration to improve runtime and cost falling back to the last successful run whenever experimental changes have a negative impact Guide 1 Getting Started Fill in your environment variables and run the following code in a terminal to set up your Google Cloud project bash export PROJECT ID export REGION export CLUSTER NAME export BUCKET NAME TIMESTAMP date Y m dT H M S export TIMESTAMP scripts setup sh p PROJECT ID r REGION c CLUSTER NAME b BUCKET NAME t TIMESTAMP This script will 1 Setup project and enable APIs 2 Remove any old infrastructure related to this project in case of previous runs 3 Create a Google Cloud Storage bucket and a BigQuery Dataset 4 Load public data into a personal BigQuery Dataset 5 Import autoscaling policies 6 Create the first Dataproc sizing cluster Monitoring Dataproc Jobs Stack Resources images monitoring jobs png 2 Calculate Dataproc cluster size A sizing cluster can help determine the right number of workers for your application This cluster will have an autoscaling policy attached Set the autoscaling policy min max values to whatever is allowed in your project Run your jobs on this cluster Autoscaling will continue to add nodes until the YARN pending memory metric is zero A perfectly sized cluster should never have YARN pending memory bash gsutil m rm r gs BUCKET NAME transformed TIMESTAMP gcloud dataproc jobs submit pyspark region REGION cluster CLUSTER NAME sizing scripts spark average speed py gs BUCKET NAME raw TIMESTAMP gs BUCKET NAME transformed TIMESTAMP Stack Resources images monitoring nodemanagers png 3 Optimize Dataproc cluster configuration Using a non autoscaling cluster during this experimentation phase can lead to the discovery of more accurate machine types persistent disks application properties etc For now build an isolated non autoscaling cluster for your job that has the optimized number of primary workers Run your job on this appropriately sized non autoscaling cluster If the CPU is maxing out consider using C2 machine type If memory is maxing out consider using N2D highmem machine types Also consider increasing the machine cores while maintaining a consistent overall core count observed during sizing phase Monitoring CPU Utilization Stack Resources images monitoring cpu png Monitoring YARN Memory Stack Resources images monitoring yarn memory png 8 x n2 standard 2 1 min 53 seconds bash gcloud dataproc clusters create CLUSTER NAME testing 2x8 standard master machine type n2 standard 2 worker machine type n2 standard 2 num workers 8 master boot disk type pd standard master boot disk size 1000GB worker boot 
# Churn Prediction with Survival Analysis

This model uses Survival Analysis to classify customers into time-to-churn buckets. The model output can be used to calculate each user's churn score for different durations.

The same methodology can be used to predict customers' total lifetime from their "birth" (initial signup, or t = 0) and from the current state (t > 0).

## Why is Survival Analysis Helpful for Churn Prediction?

Survival Analysis is used to predict the time-to-event when the event in question has not necessarily occurred yet. In this case, the event is a customer churning.

If a customer is still active, or is "censored" in Survival Analysis terminology, we do not know their final lifetime or when they will churn. If we assume that the customer's lifetime ended at the time of prediction (or training), the results will be biased (underestimating lifetime). Throwing out active users will also bias results through information loss.

By using a Survival Analysis approach to churn prediction, the entire population (regardless of current tenure or status) can be included.

## Dataset

This example uses the public [Google Analytics Sample Dataset](https://support.google.com/analytics/answer/7586738?hl=en) on BigQuery and artificially generated subscription start and end dates as input.

To create a churn model with real data, omit the 'Generate Data' step in the Beam pipeline in preprocessor/preprocessor/preprocess.py. Instead of randomly generating values, the BigQuery results should include the following fields: start_date, end_date, and active. These values correspond to the user's subscription lifetime and their censorship status.

## Setup

### Set up GCP credentials

```shell
gcloud auth login
gcloud auth application-default login
```

### Set up Python environment

```shell
virtualenv venv
source ./venv/bin/activate
pip install -r requirements.txt
```

## Preprocessing

Using Dataflow, the data preprocessing script reads user data from BigQuery, generates random (fake) time-to-churn labels, creates TFRecords, and adds them to Google Cloud Storage.

Each record should have three labels before preprocessing:

1. **active**: indicator for censorship. It is 0 if the user is inactive (uncensored) and 1 if the user is active (censored).
2. **start_date**: Date when the user's lifetime began.
3. **end_date**: Date when the user's lifetime ends. It is None if the user is still active.

`_generateFakeData` randomly generates these three fields in order to create fake sample data. In practice, these fields should be available in some form in the historical data.

During preprocessing, the aforementioned fields are combined into a single `2*n-dimensional indicator array`, where n is the number of bounded lifetime buckets (i.e. n = 2 for 0-2 months, 2-3 months, 3+ months):

+ indicator array = [survival array | failure array]
+ survival array = 1 if individual has survived interval, 0 otherwise (for each of the n intervals)
+ failure array = 1 if individual failed during interval, 0 otherwise
+ If an individual is censored (still active), their failure array contains only 0s

### Set Constants

```shell
BUCKET="gs://[GCS Bucket]"
NOW="$(date +%Y%m%d%H%M%S)"
OUTPUT_DIR="${BUCKET}/output_data/${NOW}"
PROJECT="[PROJECT ID]"
```

### Run locally with Dataflow

```shell
cd preprocessor
python -m run_preprocessing \
  --output_dir "${OUTPUT_DIR}" \
  --project_id "${PROJECT}"
cd ..
```

### Run on the Cloud with Dataflow

The top-level preprocessor directory should be the working directory for running the preprocessing script. The setup.py file should be located in the working directory.

```shell
cd preprocessor
python -m run_preprocessing \
  --cloud \
  --output_dir "${OUTPUT_DIR}" \
  --project_id "${PROJECT}"
cd ..
```

## Model Training

Model training minimizes the negative of the log likelihood function for a statistical Survival Analysis model with discrete-time intervals. The loss function is based on the paper [A scalable discrete-time survival model for neural networks](https://peerj.com/articles/6257.pdf).

For each record, the conditional hazard probability is the probability of failure in an interval, given that the individual has survived at least to the beginning of the interval. Therefore, the probability that a user survives the given interval, or the likelihood, is the product of (1 - hazard) for all of the earlier (and current) intervals.

So, the log likelihood is: ln(current hazard) + sum(ln(1 - earlier hazards)), summed over all time intervals.

Equivalently, each individual's log likelihood is:
`ln(1 - (1 if survived 0 if not)*(Prob of failure)) + ln(1 - (1 if failed 0 if not)*(Prob of survival))`
summed over all time intervals.

### Set Constants

The TFRecord output of the preprocessing job should be used as input to the training job. Make sure to navigate back to the top-level directory.

```shell
INPUT_DIR="${OUTPUT_DIR}"
MODEL_DIR="${BUCKET}/model/$(date +%Y%m%d%H%M%S)"
```

### Train locally with AI Platform

```shell
gcloud ai-platform local train \
  --module-name trainer.task \
  --package-path trainer/trainer \
  --job-dir ${MODEL_DIR} \
  -- \
  --input-dir "${INPUT_DIR}"
```

### Train on the Cloud with AI Platform

```shell
JOB_NAME="train_$(date +%Y%m%d%H%M%S)"
gcloud ai-platform jobs submit training ${JOB_NAME} \
  --job-dir ${MODEL_DIR} \
  --config trainer/config.yaml \
  --module-name trainer.task \
  --package-path trainer/trainer \
  --region us-east1 \
  --python-version 3.5 \
  --runtime-version 1.13 \
  -- \
  --input-dir ${INPUT_DIR}
```

### Hyperparameter Tuning with AI Platform

```shell
JOB_NAME="hptuning_$(date +%Y%m%d%H%M%S)"
gcloud ai-platform jobs submit training ${JOB_NAME} \
  --job-dir ${MODEL_DIR} \
  --module-name trainer.task \
  --package-path trainer/trainer \
  --config trainer/hptuning_config.yaml \
  --python-version 3.5 \
  --runtime-version 1.13 \
  -- \
  --input-dir ${INPUT_DIR}
```

### Launch Tensorboard

```shell
tensorboard --logdir ${MODEL_DIR}
```

## Predictions

The model predicts the conditional likelihood that a user survived an interval, given that the user reached the interval. It outputs an n-dimensional vector, where each element corresponds to the predicted conditional probability of surviving through the end of that time interval (1 - hazard). In order to determine the predicted class, the cumulative product of the conditional probabilities must be compared to some threshold.

### Deploy model on AI Platform

The SavedModel was saved in a timestamped subdirectory of model_dir.

```shell
MODEL_NAME="survival_model"
VERSION_NAME="demo_version"
SAVED_MODEL_DIR=$(gsutil ls $MODEL_DIR/export/export | tail -1)
gcloud ai-platform models create $MODEL_NAME \
  --regions us-east1
gcloud ai-platform versions create $VERSION_NAME \
  --model $MODEL_NAME \
  --origin $SAVED_MODEL_DIR \
  --runtime-version=1.13 \
  --framework TENSORFLOW \
  --python-version=3.5
```

### Running batch predictions

```shell
INPUT_PATHS=$INPUT_DIR/data/test/*
OUTPUT_PATH=<GCS directory for predictions>
JOB_NAME="predict_$(date +%Y%m%d%H%M%S)"
gcloud ai-platform jobs submit prediction $JOB_NAME \
  --model $MODEL_NAME \
  --input-paths $INPUT_PATHS \
  --output-path $OUTPUT_PATH \
  --region us-east1 \
  --data-format TF_RECORD
```
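
### Interpreting prediction output

The following is a minimal post-processing sketch, not part of the repository, showing how the per-interval conditional survival probabilities returned by the model could be turned into cumulative survival probabilities and a predicted time-to-churn bucket. The bucket labels, the number of intervals, and the 0.5 decision threshold are illustrative assumptions; adjust them to match your preprocessing configuration.

```python
import numpy as np

# Illustrative labels for three intervals; these must match your bucket definition.
BUCKET_LABELS = ["0-2 months", "2-3 months", "3+ months"]
THRESHOLD = 0.5  # assumed decision threshold


def predicted_bucket(conditional_survival, threshold=THRESHOLD, labels=BUCKET_LABELS):
    """Map the model's per-interval conditional survival probabilities
    (1 - hazard) to a single time-to-churn bucket.

    Survival through interval i is the cumulative product of the conditional
    probabilities up to and including i. The predicted bucket is the first
    interval whose cumulative survival drops below the threshold; if none
    does, the user is predicted to survive past the last bounded interval.
    """
    cumulative = np.cumprod(np.asarray(conditional_survival, dtype=float))
    below = np.nonzero(cumulative < threshold)[0]
    if below.size == 0:
        return labels[-1], cumulative
    return labels[below[0]], cumulative


if __name__ == "__main__":
    # Example model output for one user: conditional P(survive interval).
    label, cumulative = predicted_bucket([0.9, 0.5, 0.4])
    print(label, cumulative)  # prints: 2-3 months [0.9  0.45 0.18]
```

The same cumulative-product step can be applied to the JSON produced by the batch prediction job above before thresholding.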
# Data Format Description Language ([DFDL](https://en.wikipedia.org/wiki/Data_Format_Description_Language)) Processor Example

This module is an example of how to process a binary using a DFDL definition. The DFDL definitions are stored in a Firestore database. The application sends a request with the binary to process to a Pub/Sub topic. The processor service subscribes to the topic, processes every message, applies the definition and publishes the JSON result to a topic in Pub/Sub.

## Project Structure

```
.
└── dfdl_example
    ├── examples                       # Contains a binary and a DFDL definition used to run this example
    └── src
        └── main
            └── java
                └── com.example.dfdl
                    ├── DfdlDef            # Embedded entities
                    ├── DfdlDefRepository  # A Firestore Reactive Repository
                    ├── DfdlService        # Processes the binary using a DFDL definition and outputs JSON
                    ├── FirestoreService   # Reads DFDL definitions from a Firestore database
                    ├── MessageController  # Publishes a message with a binary to be processed to a topic
                    ├── ProcessorService   # Initializes components, configurations and services
                    └── PubSubServer       # Publishes and subscribes to topics using channel adapters
    └── README.md
    └── resources
        └── application.properties
    └── pom.xml
```

## Technology Stack

1. Cloud Firestore
2. Cloud Pub/Sub

## Frameworks

1. Spring Boot
2. [Spring Data Cloud Firestore](https://docs.spring.io/spring-cloud-gcp/docs/current/reference/html/firestore.html)
   * [Reactive Repository](https://docs.spring.io/spring-cloud-gcp/docs/current/reference/html/firestore.html#_reactive_repositories)

## Libraries

1. [Apache Daffodil](https://daffodil.apache.org/)

## Setup Instructions

### Project Setup

#### Creating a Project in the Google Cloud Platform Console

If you haven't already created a project, create one now. Projects enable you to manage all Google Cloud Platform resources for your app, including deployment, access control, billing, and services.

1. Open the [Cloud Platform Console][cloud-console].
1. In the drop-down menu at the top, select **Create a project**.
1. Give your project a name, e.g. my-dfdl-project.
1. Make a note of the project ID, which might be different from the project name. The project ID is used in commands and in configurations.

[cloud-console]: https://console.cloud.google.com/

#### Enabling billing for your project.

If you haven't already enabled billing for your project, [enable billing][enable-billing] now. Enabling billing is required to use Google Cloud services such as Cloud Firestore and Cloud Pub/Sub.

[enable-billing]: https://console.cloud.google.com/project/_/settings

#### Install the Google Cloud SDK.

If you haven't already installed the Google Cloud SDK, [install the Google Cloud SDK][cloud-sdk] now. The SDK contains tools and libraries that enable you to create and manage resources on Google Cloud Platform.

[cloud-sdk]: https://cloud.google.com/sdk/

#### Setting Google Application Default Credentials

Set your [Google Application Default Credentials][application-default-credentials] by [initializing the Google Cloud SDK][cloud-sdk-init] with the command:

```
gcloud init
```

Generate a credentials file by running the [application-default login](https://cloud.google.com/sdk/gcloud/reference/auth/application-default/login) command:

```
gcloud auth application-default login
```

[cloud-sdk-init]: https://cloud.google.com/sdk/docs/initializing
[application-default-credentials]: https://developers.google.com/identity/protocols/application-default-credentials

### Firestore Setup

How to create a Firestore database instance is described [here](https://cloud.google.com/firestore/docs/quickstart-servers#create_a_in_native_mode_database).

#### How to add data to Firestore

The following doc, [Managing firestore using the console](https://cloud.google.com/firestore/docs/using-console), can be used to add data to Firestore to run the example.

This example connects to a Cloud Firestore collection with the following specification. The configuration can be changed by editing the application.properties file.

```
Root collection dfdl-schemas
  => document_id binary_example => {
       'definiton': "<?xml version="1.0" encoding="UTF-8"?>
         <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
             xmlns:dfdl="http://www.ogf.org/dfdl/dfdl-1.0/"
             targetNamespace="http://example.com/dfdl/helloworld/">
           <xs:include schemaLocation="org/apache/daffodil/xsd/DFDLGeneralFormat.dfdl.xsd" />
           <xs:annotation>
             <xs:appinfo source="http://www.ogf.org/dfdl/">
               <dfdl:format ref="GeneralFormat" />
             </xs:appinfo>
           </xs:annotation>
           <xs:element name="binary_example">
             <xs:complexType>
               <xs:sequence>
                 <xs:element name="w" type="xs:int">
                   <xs:annotation>
                     <xs:appinfo source="http://www.ogf.org/dfdl/">
                       <dfdl:element representation="binary" binaryNumberRep="binary" byteOrder="bigEndian" lengthKind="implicit" />
                     </xs:appinfo>
                   </xs:annotation>
                 </xs:element>
                 <xs:element name="x" type="xs:int">
                   <xs:annotation>
                     <xs:appinfo source="http://www.ogf.org/dfdl/">
                       <dfdl:element representation="binary" binaryNumberRep="binary" byteOrder="bigEndian" lengthKind="implicit" />
                     </xs:appinfo>
                   </xs:annotation>
                 </xs:element>
                 <xs:element name="y" type="xs:double">
                   <xs:annotation>
                     <xs:appinfo source="http://www.ogf.org/dfdl/">
                       <dfdl:element representation="binary" binaryFloatRep="ieee" byteOrder="bigEndian" lengthKind="implicit" />
                     </xs:appinfo>
                   </xs:annotation>
                 </xs:element>
                 <xs:element name="z" type="xs:float">
                   <xs:annotation>
                     <xs:appinfo source="http://www.ogf.org/dfdl/">
                       <dfdl:element representation="binary" byteOrder="bigEndian" lengthKind="implicit" binaryFloatRep="ieee" />
                     </xs:appinfo>
                   </xs:annotation>
                 </xs:element>
               </xs:sequence>
             </xs:complexType>
           </xs:element>
         </xs:schema>";
     }
```

This DFDL definition example can be found in the binary_example.dfdl.xsd file.

### Pubsub Setup

The following [doc](https://cloud.google.com/pubsub/docs/quickstart-console) can be used to set up the topics and subscriptions needed to run this example.

#### Topics

To run this example, two topics need to be created:

1. A topic to publish the final JSON output: "data-output-json-topic"
2. A topic to publish the binary to be processed: "data-input-binary-topic"

#### Subscription

The following subscription needs to be created:

1. A subscription to pull the binary data: data-input-binary-sub

## Usage

### Initialize the application

Reference: [Building an Application with Spring Boot](https://spring.io/guides/gs/spring-boot/)

```
./mvnw spring-boot:run
```

### Send a request

```
curl --data "message=0000000500779e8c169a54dd0a1b4a3fce2946f6" localhost:8081/publish
```
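
### Publish the binary directly to Pub/Sub (optional)

For testing without the HTTP endpoint, the binary could also be published straight to the input topic. The following is a sketch using the google-cloud-pubsub Python client; the topic name `data-input-binary-topic` comes from the Pub/Sub setup above, while the project ID and the choice to send the decoded bytes (rather than the hex string itself) are assumptions that must match what `PubSubServer` expects.

```python
from google.cloud import pubsub_v1  # pip install google-cloud-pubsub

PROJECT_ID = "my-dfdl-project"        # assumed; use your own project ID
TOPIC_ID = "data-input-binary-topic"  # input topic created in the Pub/Sub setup


def publish_binary(hex_payload: str) -> str:
    """Publish the binary (given as a hex string) to the input topic."""
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)
    # The example binary from the curl request above, sent as raw bytes.
    future = publisher.publish(topic_path, data=bytes.fromhex(hex_payload))
    return future.result()  # returns the published message ID


if __name__ == "__main__":
    print(publish_binary("0000000500779e8c169a54dd0a1b4a3fce2946f6"))
```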
# Cloud Function "Act As" Caller

> tl;dr The Terraform project in this repository deploys a single simple Python Cloud Function that executes a simple action against the Google Cloud API using the identity of the function caller, illustrating the full flow described below.

---

This example describes one possible solution for reducing the set of Google Cloud IAM permissions required by a Service Account running a Google Cloud Function: executing the Cloud Function on behalf of, and with the IAM permissions of, the caller identity.

Often Google Cloud Functions are used for automation tasks, e.g. cloud infrastructure automation. In big Google Cloud organizations such automation needs access to all organization tenants, and because of that the Cloud Function Service Account needs wide-scope IAM permissions at the organization or similar level.

Following the Principle of Least Privilege, the Service Account that the Cloud Function executes under should have only the IAM permissions required for its successful execution. When a common Cloud Function automation is called by multiple tenants to perform operations on their individual tenant Google Cloud resources, it only requires IAM permissions to the GCP resources of one tenant at a time. Those are typically permissions that the tenant already has, or can obtain for its tenant Service Accounts or Workload Identity.

This example illustrates a possible approach of reducing the set of Cloud Function IAM permissions to the temporary set of permissions defined by the context of the Cloud Function caller, and its identity in particular. The example contains a solution for a GitHub based CI/CD workflow caller of a Google Cloud Function, based on the Workload Identity Federation mechanism supported by GitHub and the [google-github-actions/auth](https://github.com/google-github-actions/auth) GitHub Action.

The Terraform project in this repository deploys a single simple Python Cloud Function that executes a simple action against the Google Cloud API using the identity of the function caller, illustrating the full flow described above.

## Caller Workload Identity

One of the typical security concerns in Cloud SaaS based CI/CD pipelines that manage and manipulate Google Cloud resources is the need to securely authenticate the caller process on the Google Cloud side. A typical way of authenticating is using Google Cloud Service Account keys stored on the caller side, which effectively act as long lived client secrets that need to be protected and kept secret. The need to manage and protect the long lived Service Account keys is a security challenge for the client.

The recommended way to improve the security posture of the CI/CD automation is to remove the need to authenticate with Google Cloud using Service Account keys altogether. Workload Identity Federation is the mechanism that allows secure authentication with no long lived secrets managed by the client, based on the OpenID Connect authentication flow.

The following diagram describes the authentication process that does not require Google Cloud Service Account key management on the client side.

![Workflow Identity Federation Authentication](images/wif-authentication.png?raw=true "Workflow Identity Federation Authentication")

## Solution

The Cloud Function Service Account is not granted any IAM permissions in the tenant GCP project. The only permission it requires is read access to the ephemeral temporary Google Secret Manager Secret resource in its own project, where the Cloud Function is defined and running.

The Caller is the GitHub Runner that executes the GitHub Workflow defined in this source repository in the [.github/workflows/call-function.yml](./.github/workflows/call-function.yml) file. It authenticates to GCP as a Workload Identity using Workload Identity Federation set up by the Terraform project in this repository. The Service Account "mapped" to the Workload Identity of the GitHub Workflow run has the needed read/write permissions to the GCP resources that the Cloud Function needs to manipulate.

There are several possible variations regarding where the ephemeral secrets are stored (application project vs. central automation project).

### Simple Invocation

The following diagram shows the simplest case of the Cloud Function invocation, in which the GCP access token gets passed in the call to the Cloud Function, e.g. in its payload or, better, an HTTP header.

![Invocation with the secret in the application project](images/cf-act-as0.png?raw=true "Invocation with the secret in the application project")

1. The GitHub Workflow (the Caller) authenticates to GCP using the [google-github-actions/auth](https://github.com/google-github-actions/auth) GitHub Action. After this step the GitHub Workflow has a short lived GCP access token available to the subsequent workflow steps to execute and connect to GCP to call the Cloud Function.
2. The Caller passes the GCP access token obtained on the previous step in the Cloud Function invocation HTTPS request payload or header.
3. The Cloud Function authenticates to the Google API using the access token extracted from the incoming request and accesses the target GCP resource on behalf of and with the IAM permissions of the Caller.

### Invocation with Secret

In some cases, such as when the logic represented by a Cloud Function is implemented by several components, such as Cloud Functions chained together, the caller's access token needs to be passed between those components securely. In that situation it is preferable to apply a [Claim Check pattern](https://www.enterpriseintegrationpatterns.com/StoreInLibrary.html) and pass the resource name of the secret containing the access token between the solution components, as illustrated in the following diagram.

![Invocation with the secret in the application project](images/cf-act-as3.png?raw=true "Invocation with the secret in the application project")

1. The GitHub Workflow (the Caller) authenticates to GCP using the [google-github-actions/auth](https://github.com/google-github-actions/auth) GitHub Action. After this step the GitHub Workflow has a short lived GCP access token available to the subsequent workflow steps to execute and connect to GCP to call the Cloud Function.
2. The Caller passes the GCP access token obtained on the previous step in the Cloud Function invocation HTTPS request payload or header.
3. The first Cloud Function extracts the GCP access token obtained on the previous step from the incoming message payload and stores it in an ephemeral Secret Manager Secret in the central project location.
4. The Cloud Function extracts the access token from the ephemeral Secret Manager Secret.
5. The Cloud Function authenticates to the Google API using the access token extracted from the Secret Manager Secret on the previous step and accesses the target GCP resource on behalf of and with the IAM permissions of the Caller.
6. (Optionally) The Cloud Function drops the ephemeral Secret Manager Secret resource.
7. (Optionally) The Caller double checks and drops the ephemeral Secret Manager Secret resource.

## Service Accounts and IAM Permissions

This project creates two GCP Service Accounts:

* Cloud Function Service Account – replaces the default Cloud Function [runtime service account](https://cloud.google.com/functions/docs/securing/function-identity#runtime_service_account) with an explicit customer managed service account
* Workload Identity Service Account – the GCP service account that represents the external GitHub Workload Identity. When the GitHub workflow authenticates to GCP, it is this service account's IAM permissions that the GitHub Workload Identity is granted.

| Service Account | Role | Description |
|---------------------|---------------------------------------------|---------------------------------------------------|
| `cf-sample-account` | `roles/secretmanager.secretVersionManager` | To create the Secret Manager secret for the access token for the "Invocation with central secret" case |
| | `roles/secretmanager.secretAccessor` | To read the access token from the Secret Manager secret |
| `wi-sample-account` | `roles/secretmanager.secretVersionManager` | To create the Secret Manager secret for the access token for the "Invocation with application owned secret" case |
| | | To store the access token in the Secret Manager secret in the "Invocation with application owned secret" case |
| | `roles/iam.workloadIdentityUser` | Maps the external GitHub Workload Identity to the Workload Identity Service Account |
| | `roles/cloudfunctions.developer` | `cloudfunctions.functions.call` permission to invoke the sample Cloud Function |
| | `roles/viewer` | Sample IAM permissions to list GCE VM instances, granted to the Workload Identity Service Account but not to the Cloud Function Service Account |

## Deployment

The Terraform project in this repository defines the following input variables that can either be edited in the `variables.tf` file directly or passed over the Terraform command line.

| Name | Description |
|---------------|-------------------------------------------------|
| `project_id` | Target GCP project id. All resources will be deployed here. |
| `project_number` | Target GCP project number. |
| `location` | The name of the GCP region to deploy resources |
| `zone` | The GCP Zone for the sample Cloud Function to list GCE VM instances in |
| `github_repo` | The name of the GitHub repository in the format `organization/repository`. |
| `github_ref` | The git reference to the source code repository. Usually a reference to the branch which is getting built |

To deploy the example with the Cloud Function and all required GCP components, including the Workload Identity Pool and Provider, use the usual

```
terraform init
terraform plan
terraform apply
```

in the root folder of this repository.

The project deploys the GCP resources by default into the `europe-west3` region. You can change that by passing an alternative value to the `location` input variable: copy `terraform.tfvars.sample` to `terraform.tfvars` and update the values there.
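
## Sample Function Logic (Sketch)

To make the flow above concrete, the following is a minimal sketch of how such an HTTP Cloud Function could use a caller supplied token to list GCE instances. This is not the repository's actual source: the request field names `access_token` and `secret_resource` are taken from the invocation examples below, while the environment variable names, the fallback behavior and the response format are assumptions.

```python
# main.py - hypothetical sketch, not the actual function in this repository.
import os
from typing import Optional

import functions_framework
import google.auth
from google.cloud import secretmanager
from google.oauth2.credentials import Credentials
from googleapiclient import discovery

PROJECT_ID = os.environ.get("GCP_PROJECT", "my-project")  # assumed env var
ZONE = os.environ.get("ZONE", "europe-west3-c")           # assumed env var


def _caller_token(request_json: dict) -> Optional[str]:
    """Return the caller's access token, passed either inline or as a secret name."""
    if "access_token" in request_json:
        return request_json["access_token"]
    if "secret_resource" in request_json:
        client = secretmanager.SecretManagerServiceClient()
        secret = client.access_secret_version(
            request={"name": request_json["secret_resource"]}
        )
        return secret.payload.data.decode("utf-8")
    return None


@functions_framework.http
def main(request):
    token = _caller_token(request.get_json(silent=True) or {})
    # Without a caller token, fall back to the function's own (unprivileged) identity,
    # which is expected to fail the GCE call as described in this README.
    credentials = Credentials(token) if token else google.auth.default()[0]
    compute = discovery.build("compute", "v1", credentials=credentials)
    instances = compute.instances().list(project=PROJECT_ID, zone=ZONE).execute()
    return ", ".join(item["name"] for item in instances.get("items", []))
```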
## Call Function

After the example is provisioned through Terraform, you can test and call the deployed function from the command line with gcloud:

```
gcloud functions call sample-function --region=${REGION} --data '{}'
```

The sample Cloud Function calls the Google Compute Engine API v1 and [lists](https://cloud.google.com/compute/docs/reference/rest/v1/instances/list) Google Compute Engine instances in the configured zone.

The Cloud Function deployed by this project runs as the `cf-sample-account@${PROJECT_ID}.iam.gserviceaccount.com` service account. This service account doesn't have any granted permissions in GCP except for read access to the Secret Manager Secret. Hence, the Cloud Function cannot reach the GCE API and list the VMs by default.

For this action to successfully complete from the command line as illustrated above, the Cloud Function service account `cf-sample-account@${PROJECT_ID}.iam.gserviceaccount.com` needs to have the `compute.instances.list` permission in the target GCP project. If the execution succeeds, the command line output will be similar to the following:

```
$ gcloud functions call sample-function --region=${REGION} --data '{}'
executionId: 84e2bkg5717v
result: ', jumpbox (https://www.googleapis.com/compute/v1/projects/${PROJECT_ID}/zones/europe-west3-c/machineTypes/e2-micro)'
```

## GitHub Workflow

The sample GitHub [workflow](.github/workflows/call-function.yml) in this repository illustrates the way of calling the sample Cloud Function from a GitHub workflow.

For the workflow to succeed, a dedicated service account `wi-sample-account` is mapped to the authenticated GitHub Workload Identity. It needs to have the `cloudfunctions.functions.call` permission for the deployed Sample Cloud Function in order to be able to invoke it. The `roles/cloudfunctions.developer` built-in role grants that permission.

## Running Example

Copy the `terraform.tfvars.sample` file to `terraform.tfvars` and adjust the settings inside for your project, location, etc.

Deploy the GCP resources with Terraform:

```
terraform init
terraform plan
terraform apply
```

To invoke the Cloud Function on behalf of the GitHub workload identity, you need to create a GitHub Actions workflow from the `.github/workflows/call-function.yml` file. Copy this file with its relative folders to the root of your GitHub repository for GitHub to pick up the workflow.

Please note that the GitHub workflow reads the parameters during the run from the `terraform.tfvars.sample` file in the root repository folder. You'd need to either modify the workflow file or check in correct values to the `terraform.tfvars.sample` file.

After the GCP resources are provisioned, and given that the parameters in the `terraform.tfvars.sample` are correct, the GitHub Actions run should succeed. The [last step](.github/workflows/call-function.yml#L68) is the sample Cloud Function call. It succeeds because the Workload Identity Service Account that the project associates with the GitHub identity has permissions to list GCE VM instances, which is what the sample Python Cloud Function is doing.

At this point it is possible to verify that a direct Cloud Function invocation, e.g. using an interactive user account, with no access token supplied, fails because the Cloud Function Service Account itself does not have permissions to list GCE VM instances:

```
gcloud functions call sample-function --region=${REGION} --data '{}'
```

That call should fail no matter which permissions for GCE your current user account has:

```
result: "Error: <HttpError 403 when requesting https://compute.googleapis.com/compute/v1/projects/$PROJECT_ID/zones/$ZONE/instances?alt=json\
  \ returned \"Required 'compute.instances.list' permission for 'projects/$PROJECT_ID'\"\
  . Details: \"[{'message': \"Required 'compute.instances.list' permission for 'projects/$PROJECT_ID'\"\
  , 'domain': 'global', 'reason': 'forbidden'}]\">"
```

Now you can try to pass an access token that represents an account that has GCE VM instance list permissions. E.g. if your current user account has that permission:

```
gcloud functions call sample-function --region=${REGION} --data "{ \"access_token\": \"$(gcloud auth print-access-token)\" }"
```

This time the call should succeed and show the list of GCE VMs, e.g.

```
executionId: 76wkh0r8yhjf
result: 'jumpbox'
```

Alternatively, you can pass the access token via a Secret Manager Secret in the same way as the GitHub workflow does. Save the access token to the Secret Manager Secret created by this Terraform project and pass the Secret Manager secret resource name in the call to the sample Cloud Function instead of the access token:

```
gcloud functions call sample-function --region=${REGION} \
  --data "{ \"secret_resource\": \"$(echo -n $(gcloud auth print-access-token) | \
  gcloud secrets versions add access-token-secret --data-file=- --format json | \
  jq -r .name)\" }"
```

This call should succeed as well. The sample Cloud Function will extract the access token of the current user account from the Secret Manager secret and call the GCE API providing that access token for authentication.

## Cleaning up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, you can use Terraform to delete most of the resources. If you created a new project for deploying the resources, you can also delete the entire project.

To delete resources using Terraform, run the following command:

```
terraform destroy
```

To delete the project, do the following:

1. In the Cloud Console, go to the [Projects page](https://console.cloud.google.com/iam-admin/projects).
1. In the project list, select the project you want to delete and click **Delete**.
1. In the dialog, type the project ID, and then click **Shut down** to delete the project.

---

## Useful Commands

### Read current access token using gcloud

[Getting the access token using gcloud](https://cloud.google.com/sdk/gcloud/reference/auth/application-default/print-access-token)

```
gcloud auth application-default print-access-token
```

### Store access token in Secret Manager

```
echo -n "$(gcloud auth print-access-token)" | \
  gcloud secrets versions add access-token-secret --data-file=-
```

### Develop and Debug the Cloud Function locally

Within the [function](./function) folder run the following commands to start the functions framework locally:

```
pip install -r requirements.txt
functions-framework --target main --debug
```
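
### Store the access token with the Python client

As a Python alternative to the gcloud commands above, the caller side of the claim-check flow could be scripted as follows. This is a sketch under the assumption that the `access-token-secret` secret created by the Terraform project is used, that Application Default Credentials are available, and that the project ID placeholder is replaced with your own.

```python
# store_token.py - hypothetical caller-side helper, not part of this repository.
import google.auth
from google.auth.transport.requests import Request
from google.cloud import secretmanager

PROJECT_ID = "my-project"          # assumed; use your target project ID
SECRET_ID = "access-token-secret"  # secret created by the Terraform project


def store_current_token() -> str:
    """Store the current ADC access token as a new secret version and return
    the version resource name to pass as "secret_resource" in the function call."""
    credentials, _ = google.auth.default()
    credentials.refresh(Request())  # obtain a fresh short lived access token

    client = secretmanager.SecretManagerServiceClient()
    parent = client.secret_path(PROJECT_ID, SECRET_ID)
    version = client.add_secret_version(
        request={"parent": parent, "payload": {"data": credentials.token.encode("utf-8")}}
    )
    return version.name  # e.g. projects/.../secrets/access-token-secret/versions/3


if __name__ == "__main__":
    print(store_current_token())
```

The printed version name can then be passed as the `secret_resource` value in the sample Cloud Function call shown in the Running Example section.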
Cloud Function Act As Caller tl dr The Terraform project in this repository deploys a single simple Python Cloud Function that executes a simple action against Google Cloud API using identity of the function caller illustrating the full flow described above This example describes one possible solution of reducing the set of Google Cloud IAM permissions required by a Service Account running a Google Cloud Function by executing the Cloud Function on behalf of and with IAM permissions of the caller identity Often the Google Cloud Functions are used for automation tasks e g cloud infrastrcuture automation In big Google Cloud organizations such automation needs access to all organization tenants and because of that the Cloud Function Service Account needs wide scope IAM permissions at the organization or similar level Following the Principle of Least Privilege the Service Account that the Cloud Function executes under needs to have only IAM permissions required for its successful execution In the case when a common Cloud Function automation is called by multiple tenants for performing operations on their individual tenant Google Cloud resources it would only require IAM permissions to the GCP resources of that tenant at a time Those are typically permissions that the tenant is already having or can obtain for its tenant Service Accounts or Workload Identity This example illustrates a possible approach of reducing the set of Cloud Function IAM permissions to the temporary set of permissions defined by the context of the Cloud Function caller and its identity in particular The example contains a solution for the GitHub based CI CD workflow caller of a Google Cloud Function based on the Workload Idenitty Federation mechanism supported by GitHub and google github actions auth https github com google github actions auth GitHub Action The Terraform project in this repository deploys a single simple Python Cloud Function that executes a simple action against Google Cloud API using identity of the function caller illustrating the full flow described above Caller Workload Identity One of the typical security concerns in Cloud SaaS based CI CD pipelines that manage and manipulate Google Cloud resources is the need to securely authenticate the caller process on the Google Cloud side A typical way of authenticating is using Google Cloud Serivce Account keys stored on the caller side that effectively are being used as long lived client secrets that need to be protected and kept in secret The need to manage and protect the long lived Service Account keys is a security challenge for the client The recommended way to improve the secruity posture of the CI CD automation is to remove the need to authenticate with Google Cloud using Service Account keys altogether The Workload Identity Federation is the mechanism that allows secure authentication with no long lived secrets managed by the client based on the Open ID Connect authentication flow The following diagram describes the authentication process that does not require Google Cloud Service Account key management on the client side Workflow Identity Federation Authentication images wif authentication png raw true Workflow Identity Federation Authentication Solution The Cloud Function Service Account is not granted any IAM permissions in the tenant GCP project The only permissions it requires is read access to the ephemeal temporary Google Secret Manager Secret resource in its own project where the Cloud Function is defined and running The Caller is the GitHub 
Runner that executes the GitHub Workflow defined in this source repository in the github workflows call function yml github workflows call function yml file It authenticates to GCP as a Workload Identity using Workflow Identity Fedederation set up by the Terraform project in this repository The Service Account mapped to the Workload Identity of the GitHub Workflow run has needed read write permissions to the GCP resources that the Cloud Function needs to manipulate There are several variations possible in regards to the location of the ephemeral secrets to be stored application project vs central automation project Simple Invocation The following diagram shows the simplest case of the Cloud Function invocation in which the GCP access token gets passed in the call to the Cloud Function e g in its payload or better HTTP header Invocation with the secret in the application project images cf act as0 png raw true Invocation with the secret in the application project 1 The GitHub Workflow the Caller authenticates to the GCP using the google github actions auth https github com google github actions auth GitHub Action After this step the GitHub Workflow has short lived GCP access token available to the subsequent workflow steps to execute and connect to the GCP to call the Cloud Function 2 The Caller passes the GCP access token obtained on the previous step in the Cloud Function invocation HTTPS request payload or header 3 The Cloud Function authenticates to the Google API using the access token extracted from the Secret Manager Secret on the previous step and accesses the target GCP resource on behalf and with IAM permissions of the Caller Invocation with Secret In some cases such as when the logic represented by a Cloud Function is implemented by several components such as Cloud Functions chained together it would be needed to pass the caller s access token between those components securely In that situation it is preferrable to apply a Claim Check pattern https www enterpriseintegrationpatterns com StoreInLibrary html and pass the resource name of the secret containing the access token between the solution components as it is illustrated in the following diagram Invocation with the secret in the application project images cf act as3 png raw true Invocation with the secret in the application project 1 The GitHub Workflow the Caller authenticates to the GCP using the google github actions auth https github com google github actions auth GitHub Action After this step the GitHub Workflow has short lived GCP access token available to the subsequent workflow steps to execute and connect to the GCP to call the Cloud Function 2 The Caller passes the GCP access token obtained on the previous step in the Cloud Function invocation HTTPS request payload or header 3 The first Cloud Function extracts the GCP access token obtained on the previous step from the incoming message payload and stores it in an ephemeral Secret Manager Secret in the central project location 4 The Cloud Function extracts the access token from the ephemeral Secret Manager Secret 5 The Cloud Function authenticates to the Google API using the access token extracted from the Secret Manager Secret on the previous step and accesses the target GCP resource on behalf and with IAM permissions of the Caller 6 Optionally The Cloud Function drops the ephemeral Secret Manager Secret resource 7 Optionally The Caller double checks and drops the ephemeral Secret Manager Secret resource Service Accounts and IAM Permissions This project creates two 
GCP Service Accounts Cloud Function Service Account replaces the default Cloud Function runtime service account https cloud google com functions docs securing function identity runtime service account with an explicit customer managed service account Workload Identity Service Account the GCP service account that represents the external GitHub Workload Identity When the GitHub workflow authenticates to the GCP it is this service account s IAM permissions that the GitHub Workload Identity is granted Service Account Role Description cf sample account roles secretmanager secretVersionManager To create the Secret Manager secret for access token for the Invocation with central secret case roles secretmanager secretAccessor To read the access token from Secret Manager secret wi sample account roles secretmanager secretVersionManager To create the Secret Manager secret for access token for the Invocation with application owned secret case To store the access token in the Secret Manager secret in the Invocation with application owned secret case roles iam workloadIdentityUser Maps the external GitHub Workload Identity to the Workload Identity Service Account roles cloudfunctions developer cloudfunctions functions call permission to invoke the sample Cloud Function roles viewer Sample IAM permissions to list GCE VM instances granted to the Workload Identity Service Account but not to the Cloud Function Service Account Deployment The Terraform project in this repository defines the following input variables that can either be edited in the variables tf file directly or passed over the Terraform command line Name Description project id Target GCP project id All resources will be deployed here project number Target GCP project number location The name of the GCP region to deploy resources zone The GCP Zone for the sample Cloud Function to list GCE VM instances in github repo The name of the GitHub repository in the format organization repository github ref The git reference to the source code repository Usually a reference to the branch which is getting built To deploy the example with Cloud Function and all required GCP components including Workload Idepntity Pool and Provider use the usual terraform init terraform plan terraform apply in the root folder of this repository The project deploys the GCP resources by default into the europe west3 region You can change that by passing alternative value to the location input variable by copying the terraform tfvars sample to the terraform tfvars file and updating values there Call Function After the example is provisioned through Terraform you can test and call the deployed function from the command line with gcloud gcloud functions call sample function region REGION data The sample Cloud Function calls Google Compute Engine API v1 and lists https cloud google com compute docs reference rest v1 instances list Google Compte Engine instances in the specified region The Cloud Function deployed by this project runs as cf sample account PROJECT ID iam gserviceaccount com service account This service account doesn t have any granted permissions in GCP except for the read access to the Secret Manager Secret Hence the Cloud Function cannot reach the GCE API and list the VMs by default For this action to successfully complete from the command line as illustrated above the Cloud Function service account cf sample account PROJECT ID iam gserviceaccount com needs to have compute instances list permission in the target GCP project If the execution succeeds the command 
If the execution succeeds, the command-line output will be similar to the following:

```
gcloud functions call sample-function --region $REGION --data '{}'
executionId: 84e2bkg5717v
result: '{"jumpbox": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/europe-west3-c/machineTypes/e2-micro"}'
```

## GitHub Workflow

The sample GitHub workflow [.github/workflows/call-function.yml](.github/workflows/call-function.yml) in this repository illustrates the way of calling the sample Cloud Function from a GitHub workflow. For the workflow to succeed, a dedicated service account `wi-sample-account` is mapped to the authenticated GitHub Workload Identity. It needs to have the `cloudfunctions.functions.call` permission for the deployed sample Cloud Function in order to be able to invoke it. The `roles/cloudfunctions.developer` built-in role grants that permission.

## Running Example

Copy the `terraform.tfvars.sample` file to `terraform.tfvars` and adjust the settings inside for your project, location, etc. Deploy the GCP resources with Terraform:

```
terraform init
terraform plan
terraform apply
```

To invoke the Cloud Function on behalf of the GitHub workload identity, you need to create a GitHub Actions workflow from the [.github/workflows/call-function.yml](.github/workflows/call-function.yml) file. Copy this file, with its relative folders, to the root of your GitHub repository for GitHub to pick up the workflow.

Please note that the GitHub workflow reads the parameters during the run from the `terraform.tfvars.sample` file in the root repository folder. You'd need to either modify the workflow file or check in correct values to the `terraform.tfvars.sample` file.

After the GCP resources are provisioned, and given that the parameters in `terraform.tfvars.sample` are correct, the GitHub Actions run should succeed. Its last step ([.github/workflows/call-function.yml#L68](.github/workflows/call-function.yml#L68)) is the sample Cloud Function call. This succeeds because the Workload Identity Service Account that the project associates with the GitHub identity has permissions to list GCE VM instances, which is what the sample Python Cloud Function is doing.

At this point it is possible to verify that a direct Cloud Function invocation (e.g. using an interactive user account) with no access token supplied fails, because the Cloud Function Service Account itself does not have permissions to list GCE VM instances:

```
gcloud functions call sample-function --region $REGION --data '{}'
```

That call should fail, no matter which permissions for GCE your current user account has:

```
result: |
  Error: <HttpError 403 when requesting https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances?alt=json
  returned "Required 'compute.instances.list' permission for 'projects/PROJECT_ID'".
  Details: "[{'message': "Required 'compute.instances.list' permission for 'projects/PROJECT_ID'", 'domain': 'global', 'reason': 'forbidden'}]">
```

Now you can try to pass an access token that represents an account that has GCE VM instance list permissions. For example, if your current user account has that permission:

```
gcloud functions call sample-function --region $REGION \
  --data "{\"access_token\": \"$(gcloud auth print-access-token)\"}"
```

This time the call should succeed and show the list of GCE VMs, e.g.:

```
executionId: 76wkh0r8yhjf
result: '{"jumpbox": ...}'
```

Alternatively, you can pass the access token via a Secret Manager Secret, in the same way as the GitHub workflow does. Save the access token to the Secret Manager Secret created by this Terraform project and pass the Secret Manager secret resource name in the call to the sample Cloud Function instead of the access token:

```
gcloud functions call sample-function --region $REGION \
  --data "{\"secret_resource\": \"$(echo -n "$(gcloud auth print-access-token)" | gcloud secrets versions add access-token-secret --data-file=- --format=json | jq -r .name)\"}"
```

This call should succeed as well: the sample Cloud Function extracts the access token of the current user account from the Secret Manager secret and calls the GCE API, providing that access token for authentication.
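If you prefer to do the same from Python rather than with `gcloud` and `jq`, the following sketch stores the current credentials' access token as a new version of the secret and prints the version resource name to pass in the `secret_resource` field. It is an illustration under assumptions: the secret id `access-token-secret` and the payload field name mirror the examples above and may differ in your deployment.

```python
# Sketch: store the current access token as a Secret Manager secret version
# and print the version resource name to pass to the Cloud Function.
import google.auth
from google.auth.transport.requests import Request
from google.cloud import secretmanager

SECRET_ID = "access-token-secret"  # assumption: matches the secret created by Terraform

credentials, project_id = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
credentials.refresh(Request())  # make sure credentials.token is populated

client = secretmanager.SecretManagerServiceClient()
version = client.add_secret_version(
    parent=f"projects/{project_id}/secrets/{SECRET_ID}",
    payload={"data": credentials.token.encode("utf-8")},
)

# Pass this as {"secret_resource": version.name} when calling the function.
print(version.name)
```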
## Cleaning up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, you can use Terraform to delete most of the resources. If you created a new project for deploying the resources, you can also delete the entire project.

To delete resources using Terraform, run the following command:

```
terraform destroy
```

To delete the project, do the following:

1. In the Cloud Console, go to the [Projects page](https://console.cloud.google.com/iam-admin/projects).
1. In the project list, select the project you want to delete and click **Delete**.
1. In the dialog, type the project ID, and then click **Shut down** to delete the project.

## Useful Commands

### Read the current access token using gcloud

[Getting the access token using gcloud](https://cloud.google.com/sdk/gcloud/reference/auth/application-default/print-access-token):

```
gcloud auth application-default print-access-token
```

### Store the access token in Secret Manager

```
echo -n "$(gcloud auth print-access-token)" | gcloud secrets versions add access-token-secret --data-file=-
```

### Develop and debug the Cloud Function locally

Within the `function` folder, run the following commands to start the Functions Framework locally:

```
pip install -r requirements.txt
functions-framework --target main --debug
```
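Once the framework is running, you can exercise the function locally, for example with a small Python snippet like the one below. Port 8080 is the Functions Framework default; the `access_token` payload field is the same assumption used in the sketches above.

```python
# Sketch: call the locally running function with your own access token.
import subprocess

import requests

token = subprocess.run(
    ["gcloud", "auth", "print-access-token"],
    capture_output=True, text=True, check=True,
).stdout.strip()

resp = requests.post("http://localhost:8080", json={"access_token": token})
print(resp.status_code, resp.text)
```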
# Scikit-learn pipeline trainer for AI Platform

This is an example for building a scikit-learn-based machine learning pipeline trainer that can be run on AI Platform, which is built on top of the [scikit-learn template](https://github.com/GoogleCloudPlatform/cloudml-samples/tree/master/sklearn/sklearn-template/template).

The pipeline can be trained locally or remotely on AI Platform. The trained model can be further deployed on AI Platform to serve online traffic.

The entire pipeline includes three major components:

1. Transformer: generates new features from raw data with user defined logic (function).
2. Pre-processor: handles typical standard pre-processing, e.g. scaling, imputation, one-hot encoding, etc.
3. Estimator: the actual machine learning model, e.g. RandomForestClassifier.

Compared with the [scikit-learn template](https://github.com/GoogleCloudPlatform/cloudml-samples/tree/master/sklearn/sklearn-template/template), this example has the following additional features:

1. Supports both Classification and Regression, which can be specified in the configuration
2. Supports serving for both JSON and List of Values formats
3. Supports additional custom transformation logic besides the typical pre-processing provided by scikit-learn

Google Cloud tools used:

- [Google Cloud Platform](https://cloud.google.com/) (GCP) lets you build and host applications and websites, store data, and analyze data on Google's scalable infrastructure.
- [Cloud ML Engine](https://cloud.google.com/ml-engine/) is a managed service that enables you to easily build machine learning models that work on any type of data, of any size. This is now part of [AI Platform](https://cloud.google.com/ai-platform/).
- [Google Cloud Storage](https://cloud.google.com/storage/) (GCS) is a unified object storage for developers and enterprises, from live data serving to data analytics/ML to data archiving.
- [Cloud SDK](https://cloud.google.com/sdk/) is a set of tools for Google Cloud Platform, which contains e.g. the gcloud, gsutil, and bq command-line tools to interact with Google Cloud products and services.
- [Google BigQuery](https://cloud.google.com/bigquery/) is a fast, highly scalable, cost-effective, and fully managed cloud data warehouse for analytics, with even built-in machine learning.

## Pipeline overview

The overall flow of the pipeline can be summarized as follows and illustrated in the flowchart:

**Raw Data -> Transformer -> Pre-processor -> Estimator -> Trained Pipeline**

![Flowchart](./sklearn.png)

## Repository structure

```
template
|__ config
    |__ config.yaml             # for running a normal training job on AI Platform
    |__ hptuning_config.yaml    # for running a hyperparameter tuning job on AI Platform
|__ scripts
    |__ train.sh                # convenience script for running machine learning training jobs
    |__ deploy.sh               # convenience script for deploying a trained scikit-learn model
    |__ predict.sh              # convenience script for requesting online prediction
    |__ predict.py              # helper function for requesting online prediction using python
|__ trainer                     # trainer package
    |__ util                    # utility functions
        |__ utils.py            # utility functions including e.g. loading data from bigquery and cloud storage
        |__ preprocess_utils.py # utility functions for constructing the preprocessing pipeline
        |__ transform_utils.py  # utility functions for constructing the transform pipeline
    |__ metadata.py             # dataset metadata and feature columns definitions
    |__ constants.py            # constants used in the project
    |__ model.py                # pre-processing and machine learning model pipeline definition
    |__ task.py                 # training job entry point, handling the parameters passed from command line
    |__ transform_config.py     # configuration for transform pipeline construction
|__ predictor.py                # define custom prediction behavior
|__ setup.py                    # specify necessary dependencies for running the job on AI Platform
|__ requirements.txt            # specify necessary dependencies, helper for setting up the environment for local development
```

## Using the template

### Step 0. Prerequisites

Before you follow the instructions below to adapt the template to your machine learning job, you need a Google Cloud project if you don't have one. You can find detailed instructions [here](https://cloud.google.com/dataproc/docs/guides/setup-project).

- Make sure the following API & Services are enabled.
    * Cloud Storage
    * Cloud Machine Learning Engine
    * BigQuery API
    * Cloud Build API (for CI/CD integration)
    * Cloud Source Repositories API (for CI/CD integration)

- Configure project id and bucket id as environment variables.

```bash
$ export PROJECT_ID=[your-google-project-id]
$ export BUCKET_ID=[your-google-cloud-storage-bucket-name]
```

- Set up a service account for calls to GCP APIs. More information on setting up a service account can be found [here](https://cloud.google.com/docs/authentication/getting-started).

### Step 1. Tailor the scikit-learn trainer to your data

`metadata.py` is where the dataset's metadata is defined. By default, the file is configured to train on the Census dataset, which can be found at [`bigquery-public-data.ml_datasets.census_adult_income`](https://bigquery.cloud.google.com/table/bigquery-public-data:ml_datasets.census_adult_income).

```python
# Usage: Modify below based on the dataset used.
CSV_COLUMNS = None  # Schema of the data. Necessary for data stored in GCS
# In the following, I provided an example based on census dataset.
NUMERIC_FEATURES = [
    'age',
    'hours_per_week',
]

CATEGORICAL_FEATURES = [
    'workclass',
    'education',
    'marital_status',
    'occupation',
    'relationship',
    'race',
    'sex',
    'native_country'
]

FEATURE_COLUMNS = NUMERIC_FEATURES + CATEGORICAL_FEATURES

LABEL = 'income_bracket'

PROBLEM_TYPE = 'classification'  # 'regression' or 'classification'
```

In most cases, only the following items need to be modified in order to adapt to the target dataset.

- **CSV_COLUMNS**: the schema of the data, only required for data stored in GCS
- **NUMERIC_FEATURES**: columns that will be treated as numerical features
- **CATEGORICAL_FEATURES**: columns that will be treated as categorical features
- **LABEL**: column that will be treated as the label
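As an illustration of how little needs to change, here is a hypothetical `metadata.py` configuration for a regression problem; the column names below are made up for the example and are not part of the template.

```python
# Hypothetical metadata.py settings for a regression dataset (illustration only).
CSV_COLUMNS = None  # data read from BigQuery, so no CSV schema needed

NUMERIC_FEATURES = [
    'square_feet',
    'num_rooms',
]

CATEGORICAL_FEATURES = [
    'neighborhood',
    'building_type',
]

FEATURE_COLUMNS = NUMERIC_FEATURES + CATEGORICAL_FEATURES

LABEL = 'sale_price'

PROBLEM_TYPE = 'regression'  # switch from 'classification' to 'regression'
```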
### Step 2. Add new features with domain knowledge

`transform_config.py` is where the logic of generating new features out of the raw dataset is defined. There are two parts that need to be provided for each new feature generating logic:

* A user defined function that handles the generation of the new feature. There are four cases in terms of the combinations of the dimensions of input and output, as listed below:
    * () -> (): scalar to scalar
    * (n) -> (): multi-inputs to scalar
    * () -> (n): scalar to multi-outputs
    * (n) -> (n): multi-inputs to multi-outputs

The example below takes in `age` and converts it into an age bucket, which is an example of a "scalar to scalar" function.

```python
def _age_class(age):
  """Example scalar processing function

  Args:
    age: (int), age in integer

  Returns:
    (int), the age bucket
  """
  if age < 10:
    return 1
  elif 10 <= age < 18:
    return 2
  elif 18 <= age < 30:
    return 3
  elif 30 <= age < 50:
    return 4
  else:
    return 5
```

* An entry in `TRANSFORM_CONFIG`. After the user defined function is done, to incorporate the transformation into the entire pipeline, an additional entry needs to be added to `TRANSFORM_CONFIG` with
    * input_columns: names of the columns needed as inputs to the transform function
    * process_function: the transform function
    * output_columns: names assigned to the output columns, together with a data type indicator (N: for numerical, C: for categorical)

The example below is the counterpart of the user defined function in the previous section.

```python
    # this is an example for generating new categorical feature using single
    # column from the raw data
    {
        'input_columns': ['age'],
        'process_function': _age_class,
        'output_columns': [('age_class', constants.CATEGORICAL_INDICATOR)]
    },
```

For more examples, please refer to the [Appendix](#appendix).

### Step 3. Modify YAML config files for training on AI Platform

The files are located in `config`:

- `config.yaml`: for running a normal training job on AI Platform.
- `hptuning_config.yaml`: for running a hyperparameter tuning job on AI Platform.

The YAML files share some configuration parameters. In particular, `runtimeVersion` and `pythonVersion` should correspond in both files. Note that both Python 2.7 and Python 3.5 are supported, but Python 3.5 is the recommended one since Python 2.7 is [deprecated](https://pythonclock.org/) soon.

```yaml
trainingInput:
  scaleTier: STANDARD_1  # Machine type
  runtimeVersion: "1.13" # Scikit-learn version
  # Note that both Python 2.7 and Python 3.5 are supported, but Python 3.5 is the
  # recommended one since 2.7 is deprecated soon
  pythonVersion: "3.5"
```

More information on supported runtime versions can be found [here](https://cloud.google.com/ml-engine/docs/tensorflow/runtime-version-list).

### Step 4. Submit scikit-learn training job

You can run ML training jobs through the `train.sh` Bash script.

```shell
bash scripts/train.sh [INPUT] [RUN_ENV] [RUN_TYPE] [EXTRA_TRAINER_ARGS]
```

- INPUT: Dataset to use for training and evaluation, which can be a BigQuery table or a file (CSV). A BigQuery table should be specified as `PROJECT_ID.DATASET.TABLE_NAME`.
- RUN_ENV: (Optional), whether to run `local` (on-prem) or `remote` (GCP). Default value is `local`.
- RUN_TYPE: (Optional), whether to run `train` or `hptuning`. Default value is `train`.
- EXTRA_TRAINER_ARGS: (Optional), additional arguments to pass to the trainer.

**Note**: Please make sure the REGION is set to a supported Cloud region for your project in `train.sh`

```shell
REGION=us-central1
```

### Step 5. Deploy the trained model

The trained model can then be deployed to AI Platform for online serving using the `deploy.sh` script.

```shell
bash scripts/deploy.sh [MODEL_NAME] [VERSION_NAME] [MODEL_DIR]
```

where:

- MODEL_NAME: Name of the model to be deployed.
- VERSION_NAME: Version of the model to be deployed.
- MODEL_DIR: Path to directory containing trained and exported scikit-learn model. **Note**: Please make sure the following parameters are properly set in deploy.sh ```shell REGION=us-central1 # The following two parameters should be aligned with those used during # training job, i.e., specified in the yaml files under config/ RUN_TIME=1.13 # Note that both Python 2.7 and Python 3.5 are supported, # but Python 3.5 is the recommended one since 2.7 is deprecated soon PYTHON_VERSION=3.5 ``` ### Step 6. Run predictions using the deployed model After the model is successfully deployed, you can send small samples of new data to the API associated with the model, and it would return predictions in the response. There are two helper scripts available, `predict.sh` and `predict.py`, which use gcloud and Python API for requesting predictions respectively. ```shell bash scripts/predict.sh [INPUT_DATA_FILE] [MODEL_NAME] [VERSION_NAME] ``` where: - INPUT_DATA_FILE: Path to sample file contained data in line-delimited JSON format. See `sample_data/sample_list.txt` or `sample_data/sample_json.txt` for an example. More information can be found [here](https://cloud.google.com/ml-engine/docs/scikit/online-predict#formatting_instances_as_lists). - MODEL_NAME: Name of the deployed model to use. - VERSION_NAME: Version of the deployed model to use. Note that two data formats are supported for online prediction: * List of values: ```python [39,34," Private"," 9th"," Married-civ-spouse"," Other-service"," Wife"," Black"," Female"," United-States"] ``` * JSON: ```python { "age": 39, "hours_per_week": 34, "workclass": " Private", "education": " 9th", "marital_status": " Married-civ-spouse", "occupation": " Other-service", "relationship": " Wife", "race": " Black", "sex": " Female", "native_country": " United-States" } ``` ### Appendix In this section, I have provided a complete example for iris dataset and demonstrate all four cases of user defined functions. 
```python import numpy as np def _numeric_square(num): """Example scalar processing function Args: num: (float) Returns: float """ return np.power(num, 2) def _numeric_square_root(num): """Example scalar processing function Args: num: (float) Returns: float """ return np.sqrt(num) def _numeric_sq_sr(num): """Example function that take scala and return an array Args: num: (float) Returns: numpy.array """ return np.array([_numeric_square(num), _numeric_square_root(num)]) def _area(args): """Examples function that take an array and return a scalar Args: args: (numpy.array), args[0] -> length, args[1] -> width Returns: float """ return args[0] * args[1] def _area_class(args): """Examples function that take an array and return a scalar Args: args: (numpy.array), args[0] -> length, args[1] -> width Returns: int """ area = args[0] * args[1] cl = 1 if area > 2 else 0 return cl def _area_and_class(args): """Examples function that take an array and return an array Args: args: (numpy.array), args[0] -> length, args[1] -> width Returns: numpy.array """ area = args[0] * args[1] cl = 1 if area > 2 else 0 return np.array([area, cl]) TRANSFORM_CONFIG = [ # this is an example for pass through features, # i.e., those doesn't need any processing { 'input_columns': ['sepal_length', 'sepal_width', 'petal_length', 'petal_width'], # the raw feature types are defined in the metadata, # no need to do it here 'process_function': None, 'output_columns': ['sepal_length', 'sepal_width', 'petal_length', 'petal_width'] }, # this is an example for generating new numerical feature using a single # column from the raw data { 'input_columns': ['sepal_length'], 'process_function': _numeric_square, 'output_columns': [('sepal_length_sq', constants.NUMERICAL_INDICATOR)] # 'N' stands for numerical feature }, # this is an example for generating new numerical feature using a single # column from the raw data { 'input_columns': ['sepal_width'], 'process_function': _numeric_square_root, 'output_columns': [('sepal_width_sr', constants.NUMERICAL_INDICATOR)] }, # this is an example for generating new numerical feature using multiple # columns from the raw data { 'input_columns': ['petal_length', 'petal_width'], 'process_function': _area, 'output_columns': [('petal_area', constants.NUMERICAL_INDICATOR)] }, # this is an example for generating new categorical feature using multiple # columns from the raw data { 'input_columns': ['petal_length', 'petal_width'], 'process_function': _area_class, 'output_columns': [('petal_area_cl', constants.CATEGORICAL_INDICATOR)] # 'C' stands for categorical feature }, # this is an example for generating multiple features using a single column # from the raw data { 'input_columns': ['petal_length'], 'process_function': _numeric_sq_sr, 'output_columns': [('petal_length_sq', constants.NUMERICAL_INDICATOR), ('petal_width_sr', constants.NUMERICAL_INDICATOR)] }, # this is an example for generating multiple features using multiple columns # from the raw data { 'input_columns': ['sepal_length', 'sepal_width'], 'process_function': _area_and_class, 'output_columns': [('sepal_area', constants.NUMERICAL_INDICATOR), ('sepal_area_cl', constants.CATEGORICAL_INDICATOR)] }, ] ``
```
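If you want to sanity-check the four function shapes before wiring them into `TRANSFORM_CONFIG`, a quick interactive check like the following can help. It assumes the functions defined above are importable (e.g. from `transform_config.py`); the expected outputs are shown in the comments.

```python
# Quick local check of the four user-defined function shapes above.
import numpy as np

print(_numeric_square(3.0))                    # () -> ():  9.0
print(_area(np.array([2.0, 1.5])))             # (n) -> (): 3.0
print(_numeric_sq_sr(4.0))                     # () -> (n): [16.  2.]
print(_area_and_class(np.array([2.0, 1.5])))   # (n) -> (n): [3. 1.]
```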
# Transpose a BigQuery table using Dataflow Transposing/Pivoting/Rotating the orientation of a table is a very common task that is performed as part of a standard report generation workflow. While some relational databases provide a built-in *pivot* function of some sort, it can also be done via standard SQL. As an example, the following table can be pivoted using [BigQuery Standard SQL](https://cloud.google.com/bigquery/docs/reference/standard-sql/): <img src="img/simple_sql_based_pivot.png" alt="Simple SQL based pivot" height=150 width=650/> ```sql SELECT id, MAX(CASE WHEN class = 'HVAC' THEN SALES END) AS HVAC_SALES, MAX(CASE WHEN class = 'GENERATORS' THEN SALES END) AS GENERATORS_SALES FROM `project-id.dataset_id.table_id` GROUP BY id; ``` However, this can get significantly more complicated as: * Number of pivot fields increase (single pivot field _**class**_ in the above example). * Number of distinct values in pivot fields increase (two distinct values _**HVAC**_ and _**GENERATORS**_ in the above example). * Number of pivot values increase (single pivot value _**sales**_ in the above example). The most common approach to pivoting a complex table would be a two step approach: 1. Run a custom script to analyze the table and generate a SQL statement such as the one above. 2. Run the dynamically generated SQL to pivot the table and write the output to another table. This could also be done using a convenient Dataflow pipeline as described below. ## [Pivot Dataflow Pipeline](src/main/java/com/google/cloud/pso/pipeline/Pivot.java) [Pivot](src/main/java/com/google/cloud/pso/pipeline/Pivot.java) - A Dataflow pipeline that can be used to pivot a BigQuery table across any number of pivot fields and values. This pipeline allows the user to specify a comma separated list of fields across which the table should be rotated in addition to a comma separated list of fields that are rotated. i.e. The user can specify: * Key fields along which the table is rotated (_**id**_ in the above example). * Pivot fields that should be rotated (_**class**_ in the above example). * Pivot values that should be rotated (_**sales**_ in the above example). The pipeline will perform various steps to complete the pivot process: 1. Validate that the fields are valid and have the correct datatypes. 2. Read the data from an input BigQuery table. 3. Analyze the pivot fields and dynamically generate the correct schema. 4. Pivot every record based on the dynamically generated schema. 5. Write the pivoted records into a target BigQuery table. <img src="img/pipeline_graph.png" alt="Pipeline Graph" height=650 width=500/> ## Getting Started ### Requirements * Java 8 * Maven 3 ### Building the Project Build the entire project using the maven compile command. ```sh mvn clean compile ``` ### Running unit tests Run all unit tests. ```sh mvn clean test ``` ### Running the Pipeline <img src="img/example_raw_table.png" alt="Raw input table" height=200 width=550/> The above input table shows a slightly more complex example. In order to pivot this table, we have the following inputs: * keyFields = id,locid * pivotFields = class,on_sale,state * pivotValues = sale_price,count The _**desc**_ field is ignored and will not be in the output table. The [Pivot](src/main/java/com/google/cloud/pso/pipeline/Pivot.java) pipeline will create a new pivot table based on the inputs. <img src="img/example_pivoted_table.png" alt="Raw input table" height=175 width=850/> Execute the pipeline using the maven exec:java command. 
```sh
MY_PROJECT=my-project-id
MY_STAGING_BUCKET=my-staging-bucket-name
MY_DATASET_ID=my-dataset-id
MY_SOURCE_TABLE_ID=my-source-table-id
MY_TARGET_TABLE_ID=my-target-table-id

mvn compile exec:java -Dexec.mainClass=com.google.cloud.pso.pipeline.Pivot -Dexec.cleanupDaemonThreads=false -Dexec.args=" \
--project=$MY_PROJECT \
--runner=DataflowRunner \
--stagingLocation=gs://${MY_STAGING_BUCKET}/staging \
--tempLocation=gs://${MY_STAGING_BUCKET}/tmp \
--inputTableSpec=${MY_PROJECT}:${MY_DATASET_ID}.${MY_SOURCE_TABLE_ID} \
--outputTableSpec=${MY_PROJECT}:${MY_DATASET_ID}.${MY_TARGET_TABLE_ID} \
--keyFields=id,locid \
--pivotFields=class,on_sale,state \
--valueFields=sale_price,count"
```
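For intuition about what the pipeline produces, here is a small pandas sketch of the same rotation as the simple SQL example at the top of this document. It is an illustration only (the actual job is the Java Dataflow pipeline above), and the toy rows are made up.

```python
# Toy illustration of the pivot the pipeline performs (not part of the pipeline).
import pandas as pd

df = pd.DataFrame([
    {"id": 1, "class": "HVAC", "sales": 100},
    {"id": 1, "class": "GENERATORS", "sales": 200},
    {"id": 2, "class": "HVAC", "sales": 150},
])

pivoted = (
    df.pivot_table(index="id", columns="class", values="sales", aggfunc="max")
      .add_suffix("_SALES")           # HVAC -> HVAC_SALES, GENERATORS -> GENERATORS_SALES
      .reset_index()
)
print(pivoted)
```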
Introduction
============

Many consumer-facing applications allow creators to upload audio files as a part of the creative experience. If you’re running an application with a similar use case, you may want to extract the text from the audio file and then classify based on the content. For example, you may want to categorize content or add appropriate tags for search indexing. The process of having humans listening to content is problematic if you have a large volume of content. Having users supply their own tags may also be problematic because they may not include all useful tags or they may tag inaccurately.

Through the use of machine learning, you can build an automated categorization pipeline. This solution describes an approach to automating the review process for audio files using machine learning APIs.

Assumptions
============

This example only supports the encoding formats currently supported by the [Speech API](https://cloud.google.com/speech-to-text/docs/encoding). If you try to use .mp3 or another file type, then you may need to perform preprocessing to convert your file into an accepted encoding type.

<h2>Background on Serverless Data Processing Pipelines</h2>

The solution involves creating five GCS buckets using default configuration settings. Because of this, no [object lifecycle management](https://cloud.google.com/storage/docs/lifecycle) policies are configured. If you would like to specify different retention policies, you can [enable](https://cloud.google.com/storage/docs/managing-lifecycles#enable) this using `gsutil` while following the deployment process.

During processing, audio files are moved between buckets as they progress through various stages of the pipeline. Specifically, the audio file should first be moved to the `staging` bucket. After the Speech API completes processing the file, the audio file is moved from the `staging` bucket to either the `processed` or `error` bucket, depending on whether the Speech API returned a success or error response.

<h2>Installation/Set-up</h2>

1. [Install and initialize the Cloud SDK](https://cloud.google.com/sdk/docs/how-to)

<h3>Create a GCP Project</h3>

1. Open a Linux terminal window or [Cloud Shell](https://console.cloud.google.com/home/dashboard?cloudshell=true), and enter the following to configure your desired project id.
````
export PROJECT=[project_id]
````
2. Create a new GCP project
````
gcloud projects create $PROJECT
````
3. Set your terminal to use that project
````
gcloud config set project $PROJECT
````

Deployment
==========
Step 1: [Deploy the App Engine frontend](#step-1)
Step 2a: [Use gcloud for remaining resources](#step-2a)
Step 2b: [Use terraform for remaining resources](#step-2b)
Step 3: [View Results](#view-results)

### Step 1

<h3>Clone the repository and change your directory into the root folder</h3>

1. In your terminal, type the following to clone the professional-services repository:
````
git clone https://github.com/GoogleCloudPlatform/professional-services
````
2. Change directories into this project
````
cd examples/ml-audio-content-profiling/
````

<h3>Create Project Resources</h3>

1. Change directories into the frontend App Engine application folder.
````
cd app
````
2a. Compile the angular frontend application. Note that this requires [installing Angular](https://angular.io/cli) on your device and compiling the output.
````
npm install -g @angular/cli
````
````
cd angular/
````
````
npm install
````
````
ng build --prod
````
````
cd ..
````
2b. There is also an [Open Sourced moderator UI](https://github.com/conversationai/conversationai-moderator) which contains detailed features for sorting through results from the Perspective API. Note that this will not display any results from the NLP API or a transcript itself, but you may add in these additional features if you wish. This is an alternative to the frontend in the `app` folder in this repository.

3. Deploy the application.
````
gcloud app deploy
````
   You will be prompted to choose a region to serve the application from. You may pick any region, but you cannot change this value later. You can verify that the app was deployed correctly by navigating to https://`$PROJECT`.appspot.com. You should see the following UI:
   ![text](img/app_home.png?raw=true)

### Step 2a

<h3>Enable APIs for your Project</h3>

1. Enable APIs
````
gcloud services enable language.googleapis.com
gcloud services enable speech.googleapis.com
gcloud services enable cloudfunctions.googleapis.com
gcloud services enable commentanalyzer.googleapis.com
gcloud services enable cloudscheduler.googleapis.com
````
2. Change directories to be at the root directory again.
````
cd ..
````

<h3>Create PubSub resources</h3>

1. Create PubSub topic named stt_queue
````
export TOPIC_NAME=stt_queue
gcloud pubsub topics create $TOPIC_NAME
````
2. Create subscription to the stt_queue topic
````
export SUBSCRIPTION_NAME=pull_stt_ids
gcloud pubsub subscriptions create $SUBSCRIPTION_NAME --topic=$TOPIC_NAME
````
3. Generate a static UUID that you will need in each of your bucket names to ensure that they are unique.

   3a. First install uuidgen. If it is already installed or if you are using Cloud Shell, skip this step.
````
sudo apt-get install uuid-runtime
````
   3b. Then generate the random UUID
````
export STATIC_UUID=$(echo $(uuidgen | tr '[:upper:]' '[:lower:]') | cut -c1-20)
````
4. Create five GCS buckets to hold the output files
````
export staging_audio_bucket=staging-audio-files-$STATIC_UUID
gsutil mb gs://$staging_audio_bucket
````
````
export processed_audio_bucket=processed-audio-files-$STATIC_UUID
gsutil mb gs://$processed_audio_bucket
````
````
export error_audio_bucket=error-audio-files-$STATIC_UUID
gsutil mb gs://$error_audio_bucket
````
````
export transcription_bucket=transcription-files-$STATIC_UUID
gsutil mb gs://$transcription_bucket
````
````
export output_bucket=output-files-$STATIC_UUID
gsutil mb gs://$output_bucket
````
5. Deploy first Cloud Function to Send STT API

   Change directories into the send_stt_api_function directory
````
cd send_stt_api_function/
````
   Deploy function
````
gcloud functions deploy send_stt_api \
--entry-point main \
--runtime python37 \
--trigger-resource $staging_audio_bucket \
--trigger-event google.storage.object.finalize \
--timeout 540s \
--set-env-vars topic_name=$TOPIC_NAME,error_bucket=$error_audio_bucket
````
6. Deploy second Cloud Function to Read STT API Output

   Change directories into the read_stt_api_function
````
cd ../read_stt_api_function/
````
````
gcloud functions deploy read_stt_api \
--entry-point main \
--runtime python37 \
--trigger-resource cron_topic \
--trigger-event google.pubsub.topic.publish \
--timeout 540s \
--set-env-vars topic_name=$TOPIC_NAME,subscription_name=$SUBSCRIPTION_NAME,transcription_bucket=$transcription_bucket,staging_audio_bucket=$staging_audio_bucket,processed_audio_bucket=$processed_audio_bucket,error_audio_bucket=$error_audio_bucket
````
7. Deploy Cloud Scheduler Job
````
gcloud scheduler jobs create pubsub check_stt_job \
--schedule "*/10 * * * *" \
--topic cron_topic \
--message-body "Check Speech-to-text results"
````
   Note that you can edit the `schedule` flag to be any interval in UNIX cron. By default, our solution uses every 10 minutes.
8. Deploy Perspective API Function

   Change directories into the perspective_api_function directory
````
cd ../perspective_api_function/
````
````
gcloud functions deploy perspective_api \
--entry-point main \
--runtime python37 \
--trigger-resource $transcription_bucket \
--trigger-event google.storage.object.finalize \
--timeout 540s \
--set-env-vars output_bucket=$output_bucket
````
9. Deploy NLP Function

   Change directories into the nlp_api_function directory
````
cd ../nlp_api_function/
````
````
gcloud functions deploy nlp_api \
--entry-point main \
--runtime python37 \
--trigger-resource $transcription_bucket \
--trigger-event google.storage.object.finalize \
--timeout 540s \
--set-env-vars output_bucket=$output_bucket
````

### Step 2b

1. [Install Terraform](https://learn.hashicorp.com/terraform/getting-started/install)
2. Copy `terraform.tfvars.sample` to create the `terraform.tfvars` file. You must input the `project_id` inside of the quotes. If you wish to edit any of the other default values for the other variables specified in `variables.tf`, you may add them in your `terraform.tfvars`.
3. In your terminal, cd into the `terraform/` directory.
````
cd terraform/
````
4. Enter the following commands, ensuring that there are no errors:
````
terraform init
````
````
terraform plan
````
````
terraform apply
````
````
yes
````
All of the resources should be deployed.

### View Results

<h3>Test it out</h3>

1. You can start by trying to upload an audio file in GCS. You can do this using `gsutil` or in the UI under the <b>staging bucket</b>. This will trigger `send_stt_api_function`, which submits the request to the Speech API and publishes the job id to PubSub.

2. By default, `read_stt_api_function` is scheduled to run every ten minutes, as configured by Cloud Scheduler. If you want to test it earlier, you can navigate to Cloud Scheduler in the console and click 'Run Now'. This will pull from the PubSub topic to grab any job ids. It then calls the Speech API to see if the job is complete. If it is not complete, it repushes the id back to PubSub. If it is complete, it extracts the transcript from the Speech API response. Finally, it saves this transcript in GCS in the transcription files bucket. It also moves the audio file from the staging bucket to the processed audio bucket. If there were any errors, it moves the audio file instead to the error audio bucket.
![](img/cloud_scheduler.png)

3. The upload in the previous step to the transcription files bucket then triggers the other two functions: `perspective_api_function` and `nlp_api_function`. Each of these downloads the transcription file from GCS and then calls its respective API with it to receive insight about the probability of toxic content in the file as well as entities mentioned. Each then publishes the response into its respective GCS bucket.

4. You can view the result of the entire pipeline by using the deployed App Engine app. Navigate to: https://`[PROJECT_ID]`.appspot.com. This will present a table of all files that have been uploaded and which have already completed processing through the pipeline. You can click through each file to view the resulting transcript, toxicity, and entity/sentiment analysis.
![](img/app_home_files.png)
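To give a sense of what one of these functions does internally, here is a rough Python sketch in the spirit of the NLP step: download a transcript from GCS, call the Natural Language API for entity sentiment, and write the response to the output bucket. This is an illustration only, not the repository's actual `nlp_api_function` code; bucket and object names are placeholders.

```python
# Rough sketch of an NLP-style step (illustration only, not the repo's code).
import json

from google.cloud import language_v1, storage


def analyze_transcript(transcription_bucket: str, blob_name: str, output_bucket: str) -> None:
    storage_client = storage.Client()

    # 1. Download the transcript produced by the Speech-to-Text step.
    text = storage_client.bucket(transcription_bucket).blob(blob_name).download_as_text()

    # 2. Ask the Natural Language API for entities and their sentiment.
    language_client = language_v1.LanguageServiceClient()
    document = language_v1.Document(content=text, type_=language_v1.Document.Type.PLAIN_TEXT)
    response = language_client.analyze_entity_sentiment(request={"document": document})

    entities = [
        {"name": e.name, "salience": e.salience, "sentiment": e.sentiment.score}
        for e in response.entities
    ]

    # 3. Publish the result to the output bucket for the App Engine UI to read.
    storage_client.bucket(output_bucket).blob(f"nlp/{blob_name}.json").upload_from_string(
        json.dumps(entities), content_type="application/json"
    )
```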
# dataflow-production-ready (Python)

## Usage

### Creating infrastructure components

Prepare the infrastructure (e.g. datasets, tables, etc.) needed by the pipeline by referring to the [Terraform module](/terraform/README.MD).

Note the BigQuery dataset name that you create; it is needed in later steps.

### Creating Python Virtual Environment for development

In the module root directory, run the following:

```
python3 -m venv /tmp/venv/dataflow-production-ready-env
source /tmp/venv/dataflow-production-ready-env/bin/activate
pip install -r python/requirements.txt
```

### Setting the GCP Project

In the repo root directory, set the environment variables:

```
export GCP_PROJECT=<PROJECT_ID>
export REGION=<GCP_REGION>
export BUCKET_NAME=<DEMO_BUCKET_NAME>
```

Then set the GCP project:

```
gcloud config set project $GCP_PROJECT
```

Then create a GCS bucket for this demo:

```
gsutil mb -l $REGION -p $GCP_PROJECT gs://$BUCKET_NAME
```

### Running a full build

The build is defined by [cloudbuild.yaml](cloudbuild.yaml) and runs on Cloud Build. It applies the following steps:

* Run unit tests
* Build a container image as defined in [Dockerfile](Dockerfile)
* Create a Dataflow flex template based on the container image
* Run an automated system integration test using the Flex template (including test resource provisioning)

Set the following variables:

```
export TARGET_GCR_IMAGE="dataflow_flex_ml_preproc"
export TARGET_GCR_IMAGE_TAG="python"
export TEMPLATE_GCS_LOCATION="gs://$BUCKET_NAME/template/spec.json"
```

Run the following command in the root folder:

```
gcloud builds submit --config=python/cloudbuild.yaml --substitutions=_IMAGE_NAME=${TARGET_GCR_IMAGE},_IMAGE_TAG=${TARGET_GCR_IMAGE_TAG},_TEMPLATE_GCS_LOCATION=${TEMPLATE_GCS_LOCATION},_REGION=${REGION}
```

### Manual Commands

#### Prerequisites

* Create an input file similar to [integration_test_input.csv](/data/integration_test_input.csv) (or copy it to GCS and use it as input)
* Set extra variables (use the same dataset name as created by the Terraform module)

```
export INPUT_CSV="gs://$BUCKET_NAME/input/path_to_CSV"
export BQ_RESULTS="project:dataset.ml_preproc_results"
export BQ_ERRORS="project:dataset.ml_preproc_errors"
export TEMP_LOCATION="gs://$BUCKET_NAME/tmp"
export SETUP_FILE="/dataflow/template/ml_preproc/setup.py"
```

#### Running pipeline locally

Export these extra variables and run the script:

```
chmod +x run_direct_runner.sh
./run_direct_runner.sh
```

#### Running pipeline on Dataflow service

Export these extra variables and run the script:

```
chmod +x run_dataflow_runner.sh
./run_dataflow_runner.sh
```

#### Running Flex Templates

Even if the job runs successfully on the Dataflow service when submitted locally, the template has to be tested as well, since it might contain errors in the Dockerfile that prevent the job from running.

To run the flex template after deploying it, run:

```
chmod +x run_dataflow_template.sh
./run_dataflow_template.sh
```

Note that the parameter setup_file must be included in [metadata.json](ml_preproc/spec/metadata.json) and passed to the pipeline. It enables working with multiple Python modules/files and it's set to the path of [setup.py](ml_preproc/setup.py) inside the docker container.
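For reference, here is a minimal sketch of what a direct run of the deployed template might look like. The committed `run_dataflow_template.sh` is the authoritative command; apart from `setup_file`, the pipeline parameter names below are illustrative placeholders:

```
# Illustrative only: the real parameter set is defined by metadata.json and
# run_dataflow_template.sh; setup_file is the parameter described above.
gcloud dataflow flex-template run "ml-preproc-manual-$(date +%Y%m%d%H%M%S)" \
    --template-file-gcs-location="${TEMPLATE_GCS_LOCATION}" \
    --region="${REGION}" \
    --parameters=setup_file="${SETUP_FILE}",temp_location="${TEMP_LOCATION}"
```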
#### Running Unit Tests

To run all unit tests:

```
python -m unittest discover
```

To run a particular test file:

```
python -m unittest ml_preproc.pipeline.ml_preproc_test
```

#### Debug flex-template container image

In Cloud Shell, run the deployed container image using the bash entrypoint:

```
docker run -it --entrypoint /bin/bash gcr.io/$PROJECT_ID/$TARGET_GCR_IMAGE
```

## Dataflow Pipeline

* [main.py](ml_preproc/main.py) - The entry point of the pipeline
* [setup.py](ml_preproc/setup.py) - To package the pipeline and distribute it to the workers. Without this file, main.py won't be able to import modules at runtime. [[source]](https://beam.apache.org/documentation/sdks/python-pipeline-dependencies/#multiple-file-dependencies)

## Flex Templates Overview

The pipeline demonstrates how to use Flex Templates in Dataflow to create a template out of practically any Dataflow pipeline. This pipeline does not use any [ValueProvider](https://github.com/apache/beam/blob/master/sdks/python/apache_beam/options/value_provider.py) to accept user inputs and is built like any other non-templated Dataflow pipeline. This pipeline also allows the user to change the job graph depending on the value provided for an option at runtime.

We make the pipeline ready for reuse by "packaging" the pipeline artifacts in a Docker container. In order to simplify the process of packaging the pipeline into a container, we utilize [Google Cloud Build](https://cloud.google.com/cloud-build/). We preinstall all the dependencies needed to *compile and execute* the pipeline into a container using a custom [Dockerfile](ml_preproc/Dockerfile). In this example, we are using the following base image for Python 3: `gcr.io/dataflow-templates-base/python3-template-launcher-base`

We will utilize Google Cloud Build's ability to build a container using a Dockerfile, as documented in the [quickstart](https://cloud.google.com/cloud-build/docs/quickstart-docker). In addition, we will use a CD pipeline on Cloud Build to update the flex template automatically.

## Continuous deployment

The CD pipeline is defined in [cloudbuild.yaml](ml_preproc/cloudbuild.yaml) to be executed by Cloud Build. It performs the following steps:

1. Run unit tests
2. Build and register a container image via Cloud Build as defined in the [Dockerfile](ml_preproc/Dockerfile). The container packages the Dataflow pipeline and its dependencies and acts as the Dataflow Flex Template
3. Build the Dataflow template by creating a spec.json file on GCS, including the container image ID and the pipeline metadata based on [metadata.json](ml_preproc/spec/metadata.json). The template can be run later on by pointing to this spec.json file
4. Run a system integration test using the deployed Flex Template and wait for its results

### Substitution variables

Cloud Build provides default variables such as `$PROJECT_ID` that can be used in the build YAML file. User-defined variables can also be used in the form of `$_USER_VARIABLE`. In this project the following variables are used:

- `$_TARGET_GCR_IMAGE`: The GCR image name to be submitted to Cloud Build (not a URI) (e.g. wordcount-flex-template)
- `$_TEMPLATE_GCS_LOCATION`: GCS location to store the template spec file (e.g. gs://bucket/dir/).
  The spec file path is required later on to submit run commands to Dataflow.
- `$_REGION`: GCP region to deploy and run the Dataflow flex template
- `$_IMAGE_TAG`: Image tag

These variables must be set during manual build execution or via a build trigger (a sketch of the manual path is shown at the end of this README).

### Triggering builds automatically

To trigger a build on certain actions (e.g. commits to master):

1. Go to Cloud Build > Triggers > Create Trigger. If you're using GitHub, choose the "Connect Repository" option.
2. Configure the trigger.
3. Point the trigger to the [cloudbuild.yaml](ml_preproc/cloudbuild.yaml) file in the repository.
4. Add the substitution variables as explained in the [Substitution variables](#substitution-variables) section.
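For the manual path, here is a minimal sketch of passing those substitutions on the command line, assuming the variable values exported earlier in this README; the exact set of `_VARIABLES` the build consumes is defined in [cloudbuild.yaml](ml_preproc/cloudbuild.yaml):

```
# Illustrative manual build submission with the user-defined substitutions above
gcloud builds submit --config=ml_preproc/cloudbuild.yaml \
    --substitutions=_TARGET_GCR_IMAGE="${TARGET_GCR_IMAGE}",_IMAGE_TAG="${TARGET_GCR_IMAGE_TAG}",_TEMPLATE_GCS_LOCATION="${TEMPLATE_GCS_LOCATION}",_REGION="${REGION}"
```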
# Ingesting GCS files to BigQuery using Cloud Functions and Serverless Spark

In this solution, we build an approach to ingesting flat files (in GCS) into BigQuery using serverless technology. This solution might not be performant if frequent small files land in GCS.

We use [Daily Shelter Occupancy](https://open.toronto.ca/dataset/daily-shelter-occupancy/) data in this example.

The figure below shows the overall approach. Once an object is uploaded to the GCS bucket, an object notification is received by the Pub/Sub topic. The Pub/Sub topic triggers a Cloud Function, which then invokes the serverless Spark batch. Any error during the Cloud Function invocation or serverless Spark execution is sent to the dead letter topic.

![](docs/gcs2bq_serverless_spark.jpg)

- **Step 1:** Create a bucket; the bucket holds the data to be ingested. Once an object is uploaded to the bucket, a notification is published to the Pub/Sub topic.

```
PROJECT_ID=<<project_id>>
GCS_BUCKET_NAME=<<Bucket name>>
gsutil mb gs://${GCS_BUCKET_NAME}
gsutil notification create \
    -t projects/${PROJECT_ID}/topics/create_notification_${GCS_BUCKET_NAME} \
    -e OBJECT_FINALIZE \
    -f json gs://${GCS_BUCKET_NAME}
```

- **Step 2:** Build the jar and copy it to a GCS bucket (create a GCS bucket to store the jar if you don't have one). There are a number of Dataproc templates available to [use](https://github.com/GoogleCloudPlatform/dataproc-templates).

```
GCS_ARTIFACT_REPO=<<artifact repo name>>
gsutil mb gs://${GCS_ARTIFACT_REPO}
cd gcs2bq-spark
mvn clean install
gsutil cp target/GCS2BQWithSpark-1.0-SNAPSHOT.jar gs://${GCS_ARTIFACT_REPO}/
```

- **Step 3:** [This page](https://cloud.google.com/dataproc-serverless/docs/concepts/network) describes the network configuration required to run serverless Spark.

- **Open subnet connectivity:** The subnet must allow subnet communication on all ports. The following gcloud command attaches a network firewall rule to a subnet that allows ingress communication using all protocols on all ports, provided the source and destination are tagged with "serverless-spark".

```
gcloud compute firewall-rules create allow-internal-ingress \
    --network="default" \
    --source-tags="serverless-spark" \
    --target-tags="serverless-spark" \
    --direction="ingress" \
    --action="allow" \
    --rules="all"
```

- **Private Google Access:** The subnet must have [Private Google Access](https://cloud.google.com/vpc/docs/configure-private-google-access) enabled.

- **External network access:** Drivers and executors have internal IP addresses. You can set up [Cloud NAT](https://cloud.google.com/nat/docs/overview) to allow outbound traffic using internal IPs on your VPC network.

- **Step 4:** Create the GCP resources required by Serverless Spark.

- **Create BQ Dataset:** Create a dataset to load GCS files into.

```
DATASET_NAME=<<dataset_name>>
bq --location=US mk -d \
    ${DATASET_NAME}
```

- **Create BQ table:** Create a table using the schema in `schema/schema.json`.

```
TABLE_NAME=<<table_name>>
bq mk --table ${PROJECT_ID}:${DATASET_NAME}.${TABLE_NAME} \
    ./schema/schema.json
```

- **Create service account:** Create the service account used to run the serverless Spark batch. We also grant the permissions required to read from the GCS bucket, write to the BigQuery table and publish error messages to the dead letter queue. The service account is used to run the serverless Spark batch, so it needs the Dataproc worker role as well.

```
SERVICE_ACCOUNT_ID="gcs-to-bq-sa"
gcloud iam service-accounts create ${SERVICE_ACCOUNT_ID} \
    --description="GCS to BQ service account for Serverless Spark" \
    --display-name="GCS2BQ-SA"

roles=("roles/dataproc.worker" "roles/bigquery.dataEditor" "roles/bigquery.jobUser" "roles/storage.objectViewer" "roles/pubsub.publisher")

for role in ${roles[@]}; do
    gcloud projects add-iam-policy-binding ${PROJECT_ID} \
        --member="serviceAccount:${SERVICE_ACCOUNT_ID}@${PROJECT_ID}.iam.gserviceaccount.com" \
        --role="$role"
done
```

- **Create BQ temp Bucket:** GCS to BigQuery requires a temporary bucket. Let's create one.

```
GCS_TEMP_BUCKET=<<temp_bucket>>
gsutil mb gs://${GCS_TEMP_BUCKET}
```

- **Create Deadletter Topic and Subscription:** Let's create a dead letter topic and subscription.

```
ERROR_TOPIC=err_gcs2bq_${GCS_BUCKET_NAME}
gcloud pubsub topics create $ERROR_TOPIC
gcloud pubsub subscriptions create err_sub_${GCS_BUCKET_NAME} \
    --topic=${ERROR_TOPIC}
```

Once all resources are created, please change the variable values in `trigger-serverless-spark-fxn/main.py` (lines 25 to 29):

```
bq_temp_bucket = <<GCS_TEMP_BUCKET>>
gcs_artifact_rep = <<GCS_ARTIFACT_REPO>>
dataset = <<DATASET_NAME>>
bq_table = <<TABLE_NAME>>
error_topic = <<ERROR_TOPIC>>
```

- **Step 5:** The Cloud Function is triggered once an object is copied to the bucket. The Cloud Function invokes the serverless Spark batch. Deploy the function:

```
cd trigger-serverless-spark-fxn
gcloud functions deploy trigger-serverless-spark-fxn --entry-point \
invoke_sreverless_spark --runtime python37 \
--trigger-resource create_notification_${GCS_BUCKET_NAME} \
--trigger-event google.pubsub.topic.publish
```

- **Step 6:** Invoke the end-to-end pipeline. Download the [2020 Daily Center Data](https://ckan0.cf.opendata.inter.prod-toronto.ca/download_resource/800cc97f-34b3-4d4d-9bc1-6e2ce2d6f44a?format=csv) and upload it to the GCS bucket (<<GCS_BUCKET_NAME>>) created in Step 1.

**Debugging Pipelines**

Error messages for failed data pipelines are published to the Pub/Sub topic (ERROR_TOPIC) created in Step 4 (Create Deadletter Topic and Subscription). The errors from both the Cloud Function and Spark are forwarded to Pub/Sub, and the topic might have multiple entries for the same data-pipeline instance. Messages in the Pub/Sub topic can be filtered using the "oid" attribute. The attribute (oid) is unique for each pipeline run and holds the full object name with the [generation id](https://cloud.google.com/storage/docs/metadata#generation-number).
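To inspect the dead letter queue from the command line, here is a minimal sketch, assuming the `err_sub_` subscription created in Step 4 (messages are not acknowledged, so they remain available):

```
# Pull up to 10 error messages and surface the "oid" attribute of each one
gcloud pubsub subscriptions pull err_sub_${GCS_BUCKET_NAME} \
    --limit=10 --format=json | grep '"oid"'
```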
# Left-Shift Validation at Pre-Commit Hook

This pre-commit hook uses open-source tools to provide developers with a way to validate Kubernetes manifests before changes are committed and pushed to a repository.

Policy checks are typically instantiated after code is pushed to the repository, as it goes through each environment (dev, QA, production, etc.), and right before admission to the cluster. The goal with this script is to validate at the pre-commit hook stage, applying your security and policy checks even earlier (shifting left) than usual.

Using left-shift validate means you learn *if your deployments are going to fail* on the order of seconds, as it happens right before committing, rather than minutes or hours after the change has already undergone multiple parts, or even most, of your CI/CD pipeline.

While these scripts were first designed to work in environments using Kustomize, they've been adapted to work with whatever file organization structure you provide. `validate.sh` uses `git diff` to find any and all staged yaml files, and then validates each against the Constraints and ConstraintTemplates you provide.

---

## Setting Up left-shift validation

Using left-shift validation is simple!

### Initial installation

1. From this repository, you only need the following items:
    - `validate.sh`
    - `setup.sh`
    - `constraints-and-templates/` directory

    *If you don't want to clone the whole project, you can use `wget` to download the specific files. For example:*

    `wget https://raw.githubusercontent.com/GoogleCloudPlatform/professional-services/main/examples/left-shift-validation-pre-commit-hook/validate.sh`

2. Place these items into the same directory as your `.git` folder. `setup.sh` will move `validate.sh` into your hooks from there.
3. Make both files executable, which can be accomplished by:

    `$ chmod +x setup.sh && chmod +x validate.sh`

4. After this step, run `setup.sh`:

    `$ ./setup.sh`

5. Answer the prompts, and then you're done!

Now, whenever you go to commit code and updated Kubernetes yaml files are found, the pre-commit hook will test your changes against the policy constraints you've specified. Nifty!

**After initial installation:**

`setup.sh` is primarily for dependency installations and contains a guided walkthrough for obtaining the locations of your Constraints and ConstraintTemplates. If your policy constraints change, you can run this script again, which will automatically update dependencies as well.

The locations you specify for your Constraints, ConstraintTemplates, and (if using Kustomize) base Kustomization folder will be written into a settings file in `.oss_dependencies` for `validate.sh` (the pre-commit hook) to use. You can directly change these variables as necessary without having to run `setup.sh`.

The pre-commit hook will run every time you attempt a commit!

---

## Technical Overview

Left-shift validation is intended to run as a pre-commit hook, so it has been designed with two distinct Bash scripts, `setup.sh` and `validate.sh`.

**Here's what they do:**

### setup.sh

1. Install or update dependencies.
2. Turn `validate.sh` into a pre-commit hook.
3. Determine the locations of Constraints/ConstraintTemplates to use, then save the path/remote repository URL to `.env`.

### validate.sh (Pre-commit Hook)

1. When `git commit` runs, check if any yaml files have been updated.
2. (If you're using Kustomize) Run `kustomize` to build a unified yaml file for evaluation.
3. Gather the Constraints and ConstraintTemplates and save them to a local folder. Use `kpt` if they come from a remote repo.
4. Run `gator test`, which validates the unified yaml file against the policies (Constraints and ConstraintTemplates).
5. Fail the commit if violations are found. If there are no errors, continue the commit.

---

## Purpose

Organizations that deploy applications on Kubernetes clusters often use tools like [Open Policy Agent](https://www.openpolicyagent.org/) (OPA) or [Gatekeeper](https://open-policy-agent.github.io/gatekeeper/website/docs) to enforce security and operational policies. These are often essential for a company to meet legal or business requirements, but they have the added benefit of empowering developers by providing consistent feedback on whether their work meets the security standards of their organization.

Typically, validation checks occur at the end of the CI/CD pipeline right before admission to deploy to a cluster, throughout the pipeline, or even in the repository, with automated code reviews. These checks are great, and using many in tandem is beneficial for redundancy, but our goal is to reduce the potentially long wait times between when a developer submits their Kubernetes files and when those files either pass or fail policy reviews. Left-shift validation intends to streamline the development process by providing actionable feedback as quickly as possible at the pre-commit hook stage (which one can consider even before the pipeline), which is as early as you can go.

Here's what a typical CI/CD pipeline might look like with OSS policy validations throughout. In this case, we use Gatekeeper to define Constraints and ConstraintTemplates, and they are stored in a Git repo that can be accessed by the pipeline (Jenkins, Google Cloud Build, etc.).

![A Sample CI/CD Pipeline with Policy Validation Built-in](img/sample-cicd-pipeline.png)

And what left-shift validation does is extend these redundant validation steps into the developer's local development environment, much earlier in the pipeline, like so:

![Left-shift validation Architectural Diagram](img/leftshift-validate-architecture.png)

---

## Important Things to Note

### Left-shift validation is an **Enhancement**, not a Replacement

We do not intend for left-shift validation to replace other automated policy control systems. Instead, this is a project that can be used to help support developers who work on Kubernetes manifests by enhancing the delivery pipeline.

Using left-shift validation means you learn *if your deployments are going to fail* on the order of seconds, rather than minutes or hours. The only thing that happens if you don't use left-shift validation to shift left on automated policy validation is that it takes longer for you to learn if you have a problem. That's all!

### Handling Dependencies

Left-shift validation uses the following dependencies:

| Name | Role |
| ----- | ----- |
| [kpt](https://kpt.dev/) | Enables us to fetch directories from remote git repositories and use their contents in later steps. Kpt also provides easier integration with Kustomize. |
| [kustomize](https://github.com/kubernetes-sigs/kustomize) | Collates and hydrates raw yaml files into formats that work best with validation steps. |
| [gator](https://open-policy-agent.github.io/gatekeeper/website/docs/gator/) | Allows for evaluating Gatekeeper ConstraintTemplates and Constraints in a local environment. |

In order for left-shift validation to work, these tools must be installed on your system. `setup.sh` will install or update each tool, and they will be accessible to the validation script via a dependency folder.
If you'd like to handle installation yourself, you may! As long as the commands are in your `$PATH`, `validate.sh` will recognize and use them.

---

### Deep-Dive

Let's go a bit further into how everything works together. The idea is that you can run `setup.sh` whenever you need to configure your pre-commit script. This can include changes like:

- Updating dependencies (which will happen automatically anytime you run `setup.sh`)
- Resetting the default behavior of your pre-commit hook, if you make changes that break the code.
- Describing new Constraints and/or ConstraintTemplates to use. We have a collection of samples from the [OPA Gatekeeper Library](https://github.com/open-policy-agent/gatekeeper-library) that you can use, but you can also supply your own repository. If you have separate repositories for your Constraints and Templates, that is also supported.

`validate.sh` depends on `setup.sh` to take care of the more time-consuming steps in order to run as fast as possible. It's for this reason that `setup.sh` handles all aspects of setting up the environment. Then, `validate.sh` only needs to identify changed resources, gather Constraints and ConstraintTemplates with kpt, and finally run `gator test` to produce an outcome.

When configured as a pre-commit script (taken care of by `setup.sh`), `validate.sh` will take the locations of your Constraints and ConstraintTemplates, which you provided in `setup.sh`, and obtain those manifests. Whether they're stored in one or two repositories, stored locally, or you want to continue with the sample policies in the OPA Gatekeeper Library, the script supports all of those combinations. Here's how that decision flow works:

![validate-decision-tree](img/validate-decision-tree.png)

---

## Cleanup

Done with left-shift validation? Uninstalling is easy! Since we installed dependencies from pre-compiled binaries, all we need to do is delete a few directories and files. You have two options here:

### Automatic Uninstall Using cleanup.sh

You can simply use `cleanup.sh`, which will automatically delete folders like `.oss_dependencies/` and any manifests, Constraints, or ConstraintTemplates that are still lingering around.

### IMPORTANT NOTE

When you install left-shift validation, it becomes `pre-commit` in the `.git/hooks/` directory. This script will delete the file, rather than renaming it by appending `.sample` to the filename.

### Manual Uninstall

To remove left-shift validation, you must delete all of the files that have been created. This includes:

- `setup.sh`
- `.oss_dependencies/*` (which can be found in the root directory of your project)

Then, you must go into your `.git/hooks/` folder, and either delete `pre-commit`, or add ".sample" to the end of the filename, which tells Git not to run it in the future.

---

## Contact Info

We're always looking for help, or suggestions for improvement. Please feel free to reach out to us if you've got ideas or feedback!

- [Janine Bariuan](mailto:janinebariuan@google.com)
- [Thomas Desrosiers](mailto:tdesrosi@google.com)
# Monitoring GCP Cloud DNS public zone

## Overview

The config files in this repo support the GCP blogpost on "Visualizing Cloud DNS public zone query data using log-based metrics and Cloud Monitoring".

### Config files included in this repo

1. [config.yaml](config.yaml)
2. [dashboard.json](dashboard.json)
3. [latency-config.yaml](latency-config.yaml)

###### Note: For details on configuring Cloud Monitoring to monitor GCP Cloud DNS public zones, please refer to the blog post.

## Creating the log-based metrics

We require the creation of two distinct log-based metrics: a counter metric and a distribution metric.

* Counter metrics count the number of log entries that match a specified filter within a specified period. For example, we can use a counter metric to count the number of log entries for a specific DNS query name, query type, or response code.
* Distribution metrics also count values, but they collect the counts into ranges of values (histogram buckets). For example, we can use a distribution metric to extract the distribution of server latency.

To create log-based metrics, use the `gcloud logging metrics create` command. The configuration for the logging metrics can be passed to `gcloud` using the [config.yaml](./config.yaml) file.

**Note:** All [user-defined log-based](https://cloud.google.com/logging/docs/logs-based-metrics#user-metrics) metrics are a class of Cloud Monitoring custom metrics and are subject to charges. For pricing information, please refer to [Cloud Logging pricing: Log-based metrics](https://cloud.google.com/stackdriver/pricing#log-based-metrics).

**Note:** The retention period for log-based metrics is six weeks. Please refer to the [data retention](https://cloud.google.com/monitoring/quotas#data_retention_policy) documentation for more details.

## **Create the counter metric**

1. Create a file named `config.yaml` with the following content:

```
filter: |-
  resource.type="dns_query"
  resource.labels.target_type="public-zone"
labelExtractors:
  ProjectID: EXTRACT(resource.labels.project_id)
  QueryName: EXTRACT(jsonPayload.queryName)
  QueryType: EXTRACT(jsonPayload.queryType)
  ResponseCode: EXTRACT(jsonPayload.responseCode)
  TargetName: EXTRACT(resource.labels.target_name)
metricDescriptor:
  labels:
  - key: QueryName
  - key: TargetName
  - key: ResponseCode
  - key: ProjectID
  - key: QueryType
  metricKind: DELTA
  unit: "1"
  valueType: INT64
```

2. To create the counter metric, use the `gcloud logging metrics create` command.

**Command**

```
gcloud logging metrics create cloud-dns-log-based-metric --config-from-file=config.yaml
```

## **Create the distribution metric**

1. Create a file named `latency-config.yaml` with the following content:

```
filter: |
  resource.type="dns_query"
  resource.labels.target_type="public-zone"
labelExtractors:
  ProjectID: EXTRACT(resource.labels.project_id)
  QueryName: EXTRACT(jsonPayload.queryName)
  QueryType: EXTRACT(jsonPayload.queryType)
  ResponseCode: EXTRACT(jsonPayload.responseCode)
  SourceIP: EXTRACT(jsonPayload.sourceIP)
  TargetName: EXTRACT(resource.labels.target_name)
metricDescriptor:
  labels:
  - key: ResponseCode
  - key: QueryType
  - key: TargetName
  - key: ProjectID
  - key: SourceIP
  - key: QueryName
  metricKind: DELTA
  unit: "1"
  valueType: DISTRIBUTION
valueExtractor: EXTRACT(jsonPayload.serverLatency)
bucketOptions:
  exponentialBuckets:
    growthFactor: 2
    numFiniteBuckets: 64
    scale: 0.01
```

2. To create the distribution metric, use the `gcloud logging metrics create` command.
**Command**

```
gcloud logging metrics create cloud-dns-latency-log-based-metric --config-from-file=latency-config.yaml
```

## **Customization options**

The provided customization options are optional and are included for illustrative purposes only. We are not using these options in this blog post. If you decide to use these options in the future, you can edit the log-based metrics to make the desired changes.

### **Include Source IP (Counter Metrics Only)**

To extract the source IP from the log-based metrics, add the following to `labelExtractors` and `metricDescriptor` in the `config.yaml` provided above. However, please note that extracting this label comes with risk and is best suited for temporary testing or zones where the expected volume of DNS queries is low. In general, it is best practice to extract labels with a finite set of values. Otherwise, values that come from an infinite set, or are always unique, can lead to [high cardinality of metrics](https://cloud.google.com/monitoring/api/v3/metric-model#cardinality), which can not only increase costs but also result in ingestion errors.

**Example**

```
labelExtractors:
  ProjectID: EXTRACT(resource.labels.project_id)
  QueryName: EXTRACT(jsonPayload.queryName)
  QueryType: EXTRACT(jsonPayload.queryType)
  ResponseCode: EXTRACT(jsonPayload.responseCode)
  TargetName: EXTRACT(resource.labels.target_name)
  SourceIP: EXTRACT(jsonPayload.sourceIP)
metricDescriptor:
  labels:
  - key: QueryName
  - key: TargetName
  - key: ResponseCode
  - key: ProjectID
  - key: QueryType
  - key: SourceIP
```

### **Fine-tune the filters**

The provided `config.yaml` processes all DNS query logs with a `target_type` of `public-zone`. Cloud Logging will process logs for all public zones that have logging enabled. To reduce the number of log entries processed, users can update the filter to target a specific project or specific public zones.

**Example**

```
resource.type="dns_query"
resource.labels.target_type="public-zone"
resource.labels.project_id="my-project-id"
resource.labels.target_name="my-zone-name"
```

## **Create the custom dashboard**

Use the `gcloud monitoring dashboards create` command to create the dashboard. This command will create a custom dashboard named `gcloud-custom-dashboard`.

**Command**

```
gcloud monitoring dashboards create --config-from-file=dashboard.json
```

### **Things to consider**

1. Log-based metrics are not suitable for real-time monitoring or highly sensitive alerts because they have higher ingestion delays than other types of metrics. This is because the ingestion time of the log, the metric processing time, and the reporting time must all be taken into account.
2. There may be a delay in your metric counts. Due to the potential 10-minute delay for log ingestion, the corresponding log-based metric could also have delays in displaying the correct log count.
3. It is recommended that users change the alignment period to at least 5 minutes when configuring alerts for log-based metrics, to account for delays. This will ensure that alerts are triggered only when there is a significant change in the metric, rather than being triggered by minor fluctuations.
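As a concrete illustration of the alignment-period recommendation above, the sketch below shows a minimal threshold alert policy on the counter metric created earlier. The policy file name `dns-alert-policy.json`, the display names, and the threshold value are illustrative only and are not part of the blog post; the JSON follows the Cloud Monitoring `AlertPolicy` format and uses a 300-second alignment period. Depending on your SDK version, the policies command may be available under `gcloud alpha` or `gcloud beta`.

```
{
  "displayName": "Cloud DNS query volume (log-based)",
  "combiner": "OR",
  "conditions": [
    {
      "displayName": "DNS query count above threshold",
      "conditionThreshold": {
        "filter": "metric.type=\"logging.googleapis.com/user/cloud-dns-log-based-metric\"",
        "comparison": "COMPARISON_GT",
        "thresholdValue": 1000,
        "duration": "0s",
        "aggregations": [
          {
            "alignmentPeriod": "300s",
            "perSeriesAligner": "ALIGN_SUM"
          }
        ]
      }
    }
  ]
}
```

**Command**

```
gcloud alpha monitoring policies create --policy-from-file=dns-alert-policy.json
```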
## References

- [GCP Cloud DNS](https://cloud.google.com/dns/docs/overview)
- [GCP Cloud DNS log schema](https://cloud.google.com/dns/docs/monitoring)
- [GCP Cloud Monitoring](https://cloud.google.com/monitoring)
- [GCP Log-based metrics](https://cloud.google.com/logging/docs/logs-based-metrics)
- [High cardinality of metrics](https://cloud.google.com/monitoring/api/v3/metric-model#cardinality)
- [User-defined log-based metrics](https://cloud.google.com/logging/docs/logs-based-metrics#user-metrics)
- [Cloud Logging pricing: Log-based metrics](https://cloud.google.com/stackdriver/pricing#log-based-metrics)
- [Data retention](https://cloud.google.com/monitoring/quotas#data_retention_policy)
# Getting user profile from IAP-enabled GAE application

This example demonstrates how to retrieve a user profile (e.g. name, photo) from an IAP-enabled GAE application.

## Initial Setup

This setup can be done from `Cloud Shell`. You need `Project Owner` permission to run this, e.g. for creating the GAE app.

The following setup assumes that you are setting up your new application GCP project based on the following:

* GCP project: `project-id-1234`
* GAE region: `asia-northeast1`

1. Set up environment variables.

    ```bash
    export PROJECT=project-id-1234
    export REGION=asia-northeast1
    ```

1. Set up your gcloud configuration.

    ```bash
    gcloud config configurations create iap-user-profile
    gcloud config set project $PROJECT
    ```

1. Create the GAE application.

    ```bash
    gcloud app create --region=$REGION
    ```

1. Enable the required APIs.

    ```bash
    gcloud services enable \
      iap.googleapis.com \
      secretmanager.googleapis.com \
      cloudresourcemanager.googleapis.com \
      people.googleapis.com
    ```

1. Deploy this sample application. This will become the `default` service. Note: IAP can only be enabled when there is already a service deployed in GAE.

    ```bash
    cd professional-services/examples/iap-user-profile/
    gcloud app deploy --quiet
    ```

1. Configure the `Consent Screen` at the link below.

    https://console.cloud.google.com/apis/credentials/consent?project=project-id-1234

    1. Choose `Internal` for the `User Type`, then click `Create`
    1. Type in the `Application name`, e.g. IAP User Profile Example
    1. Choose the appropriate `Support email`. Alternatively, you can leave it to use your own email.
    1. Fill in the `Authorized domains` based on your GAE application domain, e.g.
        * project-id-1234.an.r.appspot.com
    1. Click `Save`

1. Enable IAP for the `App Engine app` at the link below. Toggle on the button and then click `Turn On`.

    https://console.cloud.google.com/security/iap?project=project-id-1234

1. Add an IAM policy binding to the IAP-enabled App Engine application. Register your user email to access the application.

    ```bash
    gcloud iap web add-iam-policy-binding --resource-type=app-engine \
      --member='user:your-user@domain.com' \
      --role='roles/iap.httpsResourceAccessor'
    ```

1. Create new `Credentials` at the link below. This credential will be used by the OAuth2 login flow to retrieve the user profile.

    https://console.cloud.google.com/apis/credentials?project=project-id-1234

    1. Click `Create Credentials`. Choose `OAuth client ID`.
    1. Choose `Web application` for the `Application type`
    1. Type in `IAP User Profile Svc` for the `Name`
    1. Fill in the `Authorized JavaScript origins`
        * https://project-id-1234.an.r.appspot.com
    1. Fill in the `Authorized redirect URIs`
        * https://project-id-1234.an.r.appspot.com/auth-callback
    1. Click `Create`

1. Click the newly created `IAP User Profile Svc` credential and then click the `Download JSON` button. You'll need the downloaded JSON file later to create a secret in Secret Manager.

1. Create a secret in Secret Manager named `iap-user-profile-svc-oauth2-client` with the client credential JSON file as the value.

    ```bash
    gcloud secrets create iap-user-profile-svc-oauth2-client \
      --locations=asia-southeast1 --replication-policy=user-managed \
      --data-file=/path-to/client_secret.json
    ```

1. Add an IAM policy binding to the secret for the GAE default service account.

    ```bash
    gcloud secrets add-iam-policy-binding iap-user-profile-svc-oauth2-client \
      --member='serviceAccount:project-id-1234@appspot.gserviceaccount.com' \
      --role='roles/secretmanager.secretAccessor'
    ```

## Accessing the Application

1. Access the application in your browser.
    Note: If you are accessing it for the first time, it may take some time before the policy takes effect. Retry a few times until you are prompted with the OAuth login screen.

    https://project-id-1234.an.r.appspot.com/

1. You will be prompted with the OAuth login one more time. This is intended, since we are going to use this scope to access the People API.

1. You should be able to see your user profile displayed on the web page.
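For reference, the snippet below is a minimal sketch (not the sample application's actual code) of how a service could read the OAuth client JSON created above from Secret Manager with the `google-cloud-secret-manager` Python client; the function name is illustrative.

```python
# Minimal sketch (illustrative only): read the OAuth client JSON stored in
# Secret Manager, using the credentials of the GAE default service account.
from google.cloud import secretmanager


def load_oauth_client_config(project_id: str) -> str:
    client = secretmanager.SecretManagerServiceClient()
    name = f"projects/{project_id}/secrets/iap-user-profile-svc-oauth2-client/versions/latest"
    response = client.access_secret_version(request={"name": name})
    # The payload is the client_secret.json content uploaded earlier.
    return response.payload.data.decode("utf-8")
```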
# Hashpipeline

## Overview

In this solution, we create a way to notify security teams when a file containing US Social Security Numbers (SSNs) is found. While the DLP API in GCP offers the ability to look for SSNs, it may not be accurate, especially if there are other items, such as account numbers, that look similar. One solution would be to store SSNs in a Dictionary InfoType in Cloud DLP; however, that has the following limitations:

* Only 5 million total records
* SSNs stored in plain text

To avoid those limitations, we built a PoC Dataflow pipeline that runs for every new file in a specified GCS bucket, determines how many (if any) SSNs are found, and publishes the findings to a Pub/Sub topic. The known SSNs are stored in Firestore, a highly scalable key-value store, only after being hashed with a salt and key, which is stored in Secret Manager.

This is what the architecture will look like when we're done.

![](./img/arch.png)

## Usage

This repo offers end-to-end deployment of the Hashpipeline solution using [HashiCorp Terraform](https://terraform.io), given a project and a list of buckets to monitor.

### Prerequisites

This has only been tested on macOS but will likely work on Linux as well.

* `terraform` executable is available in `$PATH`
* `gcloud` is installed and up to date
* `python` is version 3.5 or higher

### Step 1: Deploy the Infrastructure

Note that the following APIs will be enabled on your project by Terraform:

* `iam.googleapis.com`
* `dlp.googleapis.com`
* `secretmanager.googleapis.com`
* `firestore.googleapis.com`
* `dataflow.googleapis.com`
* `compute.googleapis.com`

Then deploy the infrastructure to your project:

```
cd infrastructure
cp terraform.tfvars.sample terraform.tfvars # Update with your own values.
terraform apply
```

### Step 2: Generate the Hash Key

This will create a new 64-byte key for use with HMAC and store it in Secret Manager.

```
make pip
make create_key
```

### Step 3: Seed Firestore with SSNs

Since SSNs can exist in many stores across the data center, we'll assume the input is a flat, newline-separated file containing valid SSNs. How you get them in that format is up to you. Once you have your input file, simply authenticate to `gcloud` and then run:

```
./scripts/hasher.py upload \
    --project $PROJECT \
    --secret $SECRET \
    --salt $SALT \
    --collection $COLLECTION \
    --infile $SSN_FILE
```

For more information on the input parameters, just run `./scripts/hasher.py --help`.

### Step 4: Build and Deploy

This uses Dataflow Templates to build our pipeline and then run it. To use the values we created in Terraform, just run:

```
make build
make deploy
```

At this point your Dataflow job will start up, so you can check its progress in the GCP Console.

### Step 5: Subscribe

This pipeline just emits every finding in the file as a separate Pub/Sub message. We show an example of how to subscribe to this and consume these messages in Python in the [poller.py](./scripts/poller.py) script. However, since this is specifically a security solution, you will likely want to consume these notifications in your SIEM, such as Splunk.
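The [poller.py](./scripts/poller.py) script is the working consumer in this repo; the sketch below only illustrates the general shape of such a subscriber (it is not the actual poller code), assuming the `google-cloud-pubsub` client and an already-created subscription.

```
# Illustrative sketch of consuming Hashpipeline findings; see scripts/poller.py
# for the real implementation.
from google.cloud import pubsub_v1

def poll_findings(project_id, subscription_id):
    subscriber = pubsub_v1.SubscriberClient()
    subscription_path = subscriber.subscription_path(project_id, subscription_id)

    def callback(message):
        # Each message is one SSN finding emitted by the pipeline.
        print(message.data.decode("utf-8"))
        message.ack()

    future = subscriber.subscribe(subscription_path, callback=callback)
    try:
        future.result()  # Block and process messages until interrupted.
    except KeyboardInterrupt:
        future.cancel()
```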
## Testing/Demo

### Step 1

Follow Steps 1 and 2 from above to set up the demo environment.

### Step 2: Seed the Firestore with Fake SSNs

This script will do the following:

* Create a list of valid and random Social Security Numbers
* Store the plain text in `scripts/socials.txt`
* Hash the numbers (normalized without dashes) using HMAC-SHA256 and the key generated from `make create_key`
* Store the hashed values in Firestore under the collection specified in the terraform variable: `firestore_collection`

```
make seed_firestore
```

### Step 3: Generate some input files for dataflow to use

This will store the input files under the `inputs/` directory, so we have something to test with.

```
make generate_input_files
```

### Step 4: Test out the pipeline locally

This will run the pipeline against the `small-input.txt` file generated by the previous step. It only has 50 lines, so it shouldn't take too long.

```
make run_local
```

### Step 5: Subscribe

In a separate terminal, start the poller on the test subscription and count the findings by filename.

```
$ make subscribe
Successfully subscribed to <subscription>. Messages will print below...
```

Now in a third terminal, run the following command to upload a file to the test bucket.

```
export BUCKET=<dataflow-test-bucket>
gsutil cp inputs/small-input.txt gs://$BUCKET/small.txt
```

After a little while, once the file has been uploaded, your `subscribe` terminal should show something like the following, along with the raw messages printed to standard out:

```
...
-------------------------------------  --------
Filename                               Findings
gs://<dataflow-test-bucket>/small.txt  26
-------------------------------------  --------
```

This number can be verified by looking at the first line of the file itself, which would say `expected_valid_socials = 26` for this example.

### Step 6: Deploy the pipeline to a template

```
make build
make deploy
```

Now you can try out the same flow as in Step 5 (Subscribe) to verify the deployed pipeline works.

## Disclaimer

While best efforts have been made to make this pipeline hardened from a security perspective, this is meant **only as a demo and proof of concept** and should not be directly used in a production system without being fully vetted by security teams and the people who will maintain the code in the organization.
# Data Generator

This directory contains a series of pipelines used to generate data in GCS or BigQuery. These pipelines are intended to be a tool for partners, customers and SCEs who want to create a dummy dataset that looks like the schema of their actual data in order to run some queries in BigQuery.

There are two different types of use cases for this kind of tool, which we refer to throughout this documentation as Human Readable and Performance Testing Data Generators.

## Human Readable Data Generation

These pipelines are a great place to get started when you only have a customer's schema and do not have a requirement for your generated dataset to have a similar distribution to the source dataset (which is required for accurately capturing query performance).

- Human readable / queryable data. This includes smart-populating columns with data formatted based on the field name.
- This can be used in scenarios where there are hurdles to get over in migrating actual data to BigQuery, to unblock integration tests and downstream development.
- Generate joinable schemas for < 1 billion distinct keys.
- Generates data from just a schema.
- Numeric columns trend upwards based on a `date` field if it exists.

![Alt text](img/data-generator.png)

- [Data Generator](data-generator-pipeline/data_generator_pipeline.py): This pipeline can be used to generate a central fact table in a snowflake schema.
- [Data Generator (Joinable Table)](data-generator-pipeline/data_generator_joinable_table.py): This pipeline should be used to generate data that joins to an existing BigQuery table on a certain key.

## Performance Testing Data Generation

The final pipeline supports the latter use case, where the goal is matching the distribution of the source dataset in order to replicate query performance.

- Prioritizes speed and distribution matching over human readable data (i.e. random strings rather than random sentences with English words).
- Matches the distribution of keys in a dataset to benchmark join performance.
- Generates joinable schemas on a larger scale.
- Generates data based on a schema and a histogram table containing the desired distribution of data across the key columns.

![Alt text](img/distribution-matcher.png)

- [Histogram Tool](bigquery-scripts/bq_histogram_tool.py): This is an example script of what could be run on a customer's table to extract the distribution information per key without collecting meaningful data. This script would be run by the client, and they would share the output table. If the customer is not already in BigQuery, this histogram tool can serve as boilerplate for a histogram tool that reads from their source database and writes to BigQuery.
- [Distribution Matcher](data-generator-pipeline/data_distribution_matcher.py): This pipeline operates on a BigQuery table containing key hashes and counts and will replicate this distribution in the generated dataset.

## General Performance Recommendations

A few recommendations when generating large datasets with any of these pipelines:

- Write to AVRO on GCS, then load to BigQuery.
- Use machines with a lot of CPU. We recommend `n1-highcpu-32`.
- Run on a private network to avoid using public IP addresses.
- Request higher quotas for your project to support scaling to 300+ large workers, specifically, in the region you wish to run the pipeline:
  - 300+ in-use IP addresses
  - 10,000+ CPUs

### Human Readable Data Generator Usage

This tool has several parameters to specify what kind of data you would like to generate.
#### Schema

The schema may be specified using the `--schema_file` parameter with a file containing a list of json objects with `name`, `type`, `mode` and optionally `description` fields. This form follows the output of `bq show --format=json --schema <table_reference>`. This data generator now supports nested types like `RECORD`/`STRUCT`. Note that the approach taken was to generate a `REPEATED` `RECORD` (aka `ARRAY<STRUCT>`), and each record generated will have between 0 and 3 elements in this array.

i.e.

```
--schema_file=gs://python-dataflow-example/schemas/lineorder-schema.json
```

lineorder-schema.json:

```
{
  "fields": [
    {"name": "lo_order_key", "type": "STRING", "mode": "REQUIRED"},
    {"name": "lo_linenumber", "type": "INTEGER", "mode": "NULLABLE"},
    {...}
  ]
}
```

Alternatively, the schema may be specified with a reference to an existing BigQuery table with the `--input_bq_table` parameter. We suggest using the BigQuery UI to create an empty BigQuery table to avoid typos when writing your own schema json.

```
--input_bq_table=BigQueryFaker.lineorders
```

Note, if you are generating data that is also being loaded into an RDBMS, you can specify the RDBMS type in the `description` field of the schema. The data generator will parse this to extract the data size. i.e. the below field will have strings truncated to be within 36 bytes.

```
[
  {"name": "lo_order_key", "type": "STRING", "mode": "REQUIRED", "description": "VARCHAR(36)"},
  {...}
]
```

#### Number of records

To specify the number of records to generate, use the `--num_records` parameter. Note we recommend only calling this pipeline for a maximum of 50 million records at a time. For generating larger tables you can simply call the pipeline script several times.

```
--num_records=1000000
```

#### Output Prefix

The output is specified as a GCS prefix. Note that multiple files will be written with `<prefix>-<this-shard-number>-of-<total-shards>.<suffix>`. The suffix will be the appropriate suffix for the file type, based on whether you pass the `--csv_schema_order` or `--avro_schema_file` parameters described later.

    --gcs_output_prefix=gs://<BUCKET NAME>/path/to/myprefix

Will create files at:

    gs://<BUCKET NAME>/path/to/myprefix-#####-of-#####.<suffix>

#### Output format

Output format is specified by passing one of the `--csv_schema_order`, `--avro_schema_file`, or `--write_to_parquet` parameters.

`--csv_schema_order` should be a comma separated list specifying the order of the fieldnames for writing. Note that `RECORD` fields are not supported when writing to CSV, because it is a flat file format.

```
--csv_schema_order=lo_order_key,lo_linenumber,...
```

`--avro_schema_file` should be a file path to the avro schema to write.

```
--avro_schema_file=/path/to/linorders.avsc
```

`--write_to_parquet` is a flag that specifies the output should be parquet. In order for beam to write to parquet, a pyarrow schema is needed. Therefore, this tool translates the schema in the `--schema_file` to a pyarrow schema automatically if this flag is included, but pyarrow doesn't support all fields that are supported by BigQuery. STRING, NUMERIC, INTEGER, FLOAT, BOOLEAN, TIMESTAMP, DATE, TIME, and DATETIME types are supported. There is limited support for writing RECORD types to parquet. Due to this [known pyarrow issue](https://jira.apache.org/jira/browse/ARROW-2587?jql=project%20%3D%20ARROW%20AND%20fixVersion%20%3D%200.14.0%20AND%20text%20~%20%22struct%22), this tool does not support writing arrays nested within structs.
However, BYTE and GEOGRAPHY fields are not supported and cannot be included in the `--schema_file` when writing to parquet.

```
--write_to_parquet
```

Alternatively, you can write directly to a BigQuery table by specifying an `--output_bq_table`. However, if you are generating more than 100K records, you may run into a limitation of the python SDK where WriteToBigQuery does not orchestrate multiple load jobs, so you may hit one of the single load job limitations ([BEAM-2801](https://issues.apache.org/jira/browse/BEAM-2801)). If you are not concerned with having many duplicates, you can generate an initial BigQuery table with `--num_records=10000000` and then use [`bq_table_resizer.py`](bigquery-scripts/bq_table_resizer.py) to copy the table into itself until it reaches the desired size.

```
--output_bq_table=project:dataset.table
```

#### Sparsity (optional)

Data is seldom full for every record, so you can specify the probability of a NULLABLE column being null with the `--p_null` parameter.

```
--p_null=0.2
```

#### Keys and IDs (optional)

The data generator will parse your field names and generate keys/ids for fields whose name contains "`_key`" or "`_id`". The cardinality of such key columns can be controlled with the `--n_keys` parameter.

Additionally, you can parameterize the key skew by passing `--key_skew_distribution`. By default this is `None`, meaning roughly equal distribution of rowcount across keys. This also supports `"binomial"`, giving a maximum-variance bell curve of keys over the range of the keyset, or `"zipf"`, giving a distribution across the keyset according to Zipf's law.

##### Primary Key (optional)

The data generator can support primary key columns by passing a comma separated list of field names to `--primary_key_cols`. Note this is done by a deduplication process at the end of the pipeline. This may be a bottleneck for large data volumes. Also, using this parameter might cause you to fall short of `--num_records` output records due to the deduplication. To mitigate this you can set `--n_keys` to a number much larger than the number of records you are generating.

#### Date Parameters (optional)

To constrain the dates generated in date columns, one can use the `--min_date` and `--max_date` parameters. The minimum date will default to January 1, 2000 and the maximum date will default to today. If you are using these parameters, be sure to use YYYY-MM-DD format.

```
--min_date=1970-01-01 \
--max_date=2010-01-01
```

#### Number Parameters (optional)

The range of integers and/or floats can be constrained with the `--max_int` and `--max_float` parameters. These default to 100 million. The number of decimal places in a float can be controlled with the `--float_precision` parameter. The default float precision is 2. Both integers and floats can be constrained to strictly positive values using `--strictly_pos=True`. True is the default.

#### Write Disposition (optional)

The BigQuery write disposition can be specified using the `--write_disp` parameter. The default is `WRITE_APPEND`.
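Putting a few of the optional parameters above together, flags like the following (values purely illustrative) could be appended to the basic invocation shown in the next section:

```
--p_null=0.2 \
--n_keys=10000 \
--key_skew_distribution=zipf \
--min_date=2015-01-01 \
--max_date=2020-12-31 \
--max_int=1000000 \
--float_precision=2 \
--strictly_pos=True
```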
#### Dataflow Pipeline parameters

For basic usage we recommend the following parameters:

```
python data_generator_pipeline.py \
--project=<PROJECT ID> \
--setup_file=./setup.py \
--worker_machine_type=n1-highcpu-32 \ # This is a high cpu process so tuning the machine type will boost performance
--runner=DataflowRunner \ # run on Dataflow workers
--staging_location=gs://<BUCKET NAME>/test \
--temp_location=gs://<BUCKET NAME>/temp \
--save_main_session \ # serializes main session and sends to each worker
```

For isolating your Dataflow workers on a private network you can additionally specify:

```
...
--region=us-east1 \
--subnetwork=<FULL PATH TO SUBNET> \
--network=<NETWORK ID> \
--no_use_public_ips
```

### Modifying FakeRowGen

You may want to change the `FakeRowGen` DoFn class to more accurately spoof your data. You can use `special_map` to map substrings in field names to [Faker Providers](https://faker.readthedocs.io/en/latest/providers.html).

The only requirement for this DoFn is that it return a list containing a single python dictionary mapping field names to values. So hack away if you need something more specific; any Python code is fair game. Keep in mind that if you use a non-standard module (available in PyPI), you will need to make sure it gets installed on each of the workers or you will get namespace issues. This can be done most simply by adding the module to `setup.py`.

### Generating Joinable tables (Snowflake schema)

To generate multiple tables that join based on certain keys, start by generating the central fact table with the above described [`data_generator_pipeline.py`](data-generator-pipeline/data_generator_pipeline.py). Then use [`data_generator_joinable_table.py`](data-generator-pipeline/data_generator_joinable_table.py) with the above described parameters for the new table, plus the three additional parameters described below.

- `--fact_table`: The existing fact table in BigQuery that will be queried to obtain a list of distinct key values.
- `--source_joining_key_col`: The field name of the foreign key column in the existing table.
- `--dest_joining_key_col`: The field name in the table we are generating with the pipeline for joining to the existing table.

Note, this method selects the distinct keys from the `--fact_table` as a side input, which is passed as a list to each worker; each worker randomly selects a value to assign to each record. This means that this list must comfortably fit in memory. This makes this method only suitable for key columns with relatively low cardinality (< 1 billion distinct keys). If you have more rigorous needs for generating joinable schemas, you should consider using the distribution matcher pipeline.

## Performance Testing Data Generator Usage

Steps:

- Generate the posterior histogram table. For an example of how to do this on an existing BigQuery table, look at the BigQuery Histogram Tool described later in this doc.
- Use the [`data_distribution_matcher.py`](data-generator-pipeline/data_distribution_matcher.py) pipeline.

You can specify `--schema_file` (or `--input_table`), `--gcs_output_prefix` and `--output_format` the same way as described above in the Human Readable Data Generator section. Additionally, you must specify a `--histogram_table`. This table will have a field for each key column (which will store a hash of each value) and a frequency with which these values occur.

### Generating Joinable Schemas

Joinable tables can be created by running the distribution matcher on a histogram for all relevant tables in the dataset.
Because each histogram table entry captures the hash of each key it refers to, we can capture exact join scenarios without handing over any real data.

## BigQuery Scripts

Included are three BigQuery utility scripts to help you with your data generating needs. The first helps with loading many GCS files to BigQuery while staying under the 15 TB per load job limit, the next will help you profile the distribution of an existing dataset, and the last will allow you to resize BigQuery tables to a desired size.

### BigQuery batch loads

This script is meant to orchestrate BigQuery load jobs of many json files on Google Cloud Storage. It ensures that each load stays under the 15 TB per load job limit. It operates on the output of `gsutil -l`.

This script can be called with the following arguments:

`--project`: GCP project ID.

`--dataset`: BigQuery dataset ID containing the table you wish to populate.

`--table`: BigQuery table ID of the table you wish to populate.

`--source_file`: This is the output of `gsutil -l` with the URI of each file that you would like to load.

`--create_table`: If passed, this script will create the destination table.

`--schema_file`: Path to a json file defining the destination BigQuery table schema.

`--partitioning_column`: Name of the field for date partitioning in the destination table.

`--max_bad_records`: Number of permissible bad records per load job.

#### Example Usage:

```
gsutil -l gs://<bucket>/path/to/json/<file prefix>-*.json >> ./files_to_load.txt

python bq_load_batches.py --project=<project> \
--dataset=<dataset_id> \
--table=<table_id> \
--partitioning_column date \
--source_file=files_to_load.txt
```

### BigQuery Histogram Tool

This script will create a BigQuery table containing the hashes of the key columns specified as a comma separated list to the `--key_cols` parameter, and the frequency with which that group of key columns appears in the `--input_table`. This serves as a histogram of the original table and will be used as the source for [`data_distribution_matcher.py`](data-generator-pipeline/data_distribution_matcher.py).

#### Example Usage:

```
python bq_histogram_tool.py \
--input_table=<project>.<dataset>.<source_table> \
--output_table=<project>.<dataset>.<histogram_table> \
--key_cols=item_id,store_id
```

### BigQuery table resizer

This script helps increase the size of a table based on a generated or sample table. If you are short on time and have a requirement to generate a 100 TB table, you can use this script to generate a few GB and copy the table into itself until it is the desired size or number of rows. While this would be inappropriate for accurate performance benchmarking, it can be used to get a query-specific cost estimate. This script can be used to copy a table in place, or to create a new table if you want to maintain the record of the original records. You can specify the target table size in either number of rows or GB.

#### Example Usage

```
python bq_table_resizer.py \
--project my-project-id \
--source_dataset my-dataset-id \
--source_table my-source-table-id \
--destination_dataset my-dataset-id \
--destination_table my-new-table-id \
--target_gb 15000 \
--location US
```

### Running the tests

Note that the tests for the BigQuery table resizer require that you have `GOOGLE_APPLICATION_DEFAULT` set to credentials with access to a BigQuery environment where you can create and destroy tables.

```
cd data-generator-pipeline
python -m unittest discover
```
GCP
Data Generator This directory shows a series of pipelines used to generate data in GCS or BigQuery The intention for these pipelines are to be a tool for partners customers and SCEs who want to create a dummy dataset that looks like the schema of their actual data in order to run some queries in BigQuery There are two different types of use cases for this kind of tool which we refer to throughout this documentation as Human Readable and Performance Testing Data Generators Human Readable Data Generation These pipelines are a great place to get started when you only have a customer s schema and do not have a requirement for your generated dataset to have similar distribution to the source dataset this is required for accurately capturing query performance Human readable queryable data This includes smart populating columns with data formatted based on the field name This can be used in scenarios where there are hurdles to get over in migrating actual data to BigQuery to unblock integration tests and downstream development Generate joinable schemas for 1 Billion distinct keys Generates data from just a schema Numeric columns trend upwards based on a date field if it exists Alt text img data generator png Data Generator data generator pipeline data generator pipeline py This pipeline should can be used to generate a central fact table in snowflake schema Data Generator Joinable Table data generator pipeline data generator pipeline py this pipeline should be used to generate data that joins to an existing BigQuery Table on a certain key Performance Testing Data Generation The final pipeline supports the later use case where matching the distribution of the source dataset for replicating query performance is the goal Prioritizes speed and distribution matching over human readable data ie random strings rather than random sentences w english words Match the distribution of keys in a dataset to benchmark join performance Generate joinable schemas on a larger scale Generates data based on a schema and a histogram table containing the desired distribution of data across the key columns Alt text img distribution matcher png Histogram Tool bigquery scripts bq histogram tool py This is an example script of what could be run on a customer s table to extract the distribution information per key without collecting meaningful data This script would be run by the client and they would share the output table If the customer is not already in BigQuery this histogram tool can serve as boilerplate for a histogram tool that reads from their source database and writes to BigQuery Distribution Matcher data generator pipeline data distribution matcher py This pipeline operates on a BigQuery table containing key hashes and counts and will replicate this distribution in the generated dataset General Performance Recommendations A few recommendations when generating large datasets with any of these pipelines Write to AVRO on GCS then load to BigQuery Use machines with a lot of CPU We recommend n1 highcpu 32 Run on a private network to avoid using public ip addresses Request higher quotas for your project to support scaling to 300 large workers specifically in the region you wish to run the pipeline 300 In use IP addresses 10 000 CPUs Human Readable Data Generator Usage This tool has several parameters to specify what kind of data you would like to generate Schema The schema may be specified using the schema file parameter with a file containing a list of json objects with name type mode and optionally description 
fields This form follows the output of bq show format json schema table reference This data generator now supports nested types like RECORD STRUCT Note that the approach taken was to generate a REPEATED RECORD aka ARRAY STRUCT and each record generated will have between 0 and 3 elements in this array ie schema file gs python dataflow example schemas lineorder schema json lineorder schema json fields name lo order key type STRING mode REQUIRED name lo linenumber type INTEGER mode NULLABLE Alternatively the schema may be specified with a reference to an existing BigQuery table with the input bq table parameter We suggest using the BigQuery UI to create an empty BigQuery table to avoid typos when writing your own schema json input bq table BigQueryFaker lineorders Note if you are generating data that is also being loaded into an RDBMS you can specify the RDMS type in the description field of the schema The data generator will parse this to extract datasize ie The below field will have strings truncated to be within 36 bytes name lo order key type STRING mode REQUIRED description VARCHAR 36 Number of records To specify the number of records to generate use the num records parameter Note we recommend only calling this pipeline for a maximum of 50 Million records at a time For generating larger tables you can simply call the pipeline script several times num records 1000000 Output Prefix The output is specified as a GCS prefix Note that multiple files will be written with prefix this shard number of total shards suffix The suffix will be the appropriate suffix for the file type based on if you pass the csv schema order or avro schema file parameters described later gcs output prefix gs BUCKET NAME path to myprefix Will create files at gs BUCKET NAME path to myprefix of suffix Output format Output format is specified by passing one of the csv schema order avro schema file or write to parquet parameters csv schema order should be a comma separated list specifying the order of the fieldnames for writing Note that RECORD are not supported when writing to CSV because it is a flat file format csv schema order lo order key lo linenumber avro schema file should be a file path to the avro schema to write avro schema file path to linorders avsc write to parquet is a flag that specifies the output should be parquet In order for beam to write to parquet a pyarrow schema is needed Therefore this tool translates the schema in the schema file to a pyarrow schema automatically if this flag is included but pyarrow doesn t support all fields that are supported by BigQuery STRING NUMERIC INTEGER FLOAT NUMERIC BOOLEAN TIMESTAMP DATE TIME and DATETIME types are supported There is limited support for writing RECORD types to parquet Due to this known pyarrow issue https jira apache org jira browse ARROW 2587 jql project 20 3D 20ARROW 20AND 20fixVersion 20 3D 200 14 0 20AND 20text 20 20 22struct 22 this tool does not support writing arrays nested within structs However BYTE and GEOGRAPHY fields are not supported and cannot be included in the schema file when writing to parquet write to parquet Alternatively you can write directly to a BigQuery table by specifying an output bq table However if you are generating more than 100K records you may run into the limitation of the python SDK where WriteToBigQuery does not orchestrate multiple load jobs you hit one of the single load job limitations BEAM 2801 https issues apache org jira browse BEAM 2801 If you are not concerned with having many duplicates you can generate an 
initial BigQuery table with num records 10000000 and then use bq table resizer py bigquery scripts bq table resizer py to copy the table into itself until it reaches the desired size output bq table project dataset table Sparsity optional Data is seldom full for every record so you can specify the probability of a NULLABLE column being null with the p null parameter p null 0 2 Keys and IDs optional The data generator will parse your field names and generate keys ids for fields whose name contains key or id The cardinality of such key columns can be controlled with the n keys parameter Additionally you can parameterize the key skew by passing key skew distribution By default this is None meaning roughly equal distribution of rowcount across keys This also supports binomial giving a maximum variance bell curve of keys over the range of the keyset or zipf giving a distribution across the keyset according to zipf s law Primary Key optional The data generator can support a primary key columns by passing a comma separated list of field names to primary key cols Note this is done by a deduplication process at the end of the pipeline This may be a bottleneck for large data volumes Also using this parameter might cause you to fall short of num records output records due to the deduplicaiton To mitigate this you can set n keys to a number much larger than the number of records you are generating Date Parameters optional To constrain the dates generated in date columns one can use the min date and max date parameters The minimum date will default to January 1 2000 and the max date will default to today If you are using these parameters be sure to use YYYY MM DD format min date 1970 01 01 max date 2010 01 01 Number Parameters optional The range of integers and or floats can be constrained with the max int and max float parameters These default to 100 Million The number of decimal places in a float can be controlled with the float precision parameter The default float precision is 2 Both integers and floats can be constrained to strictly positive values using the strictly pos True True is the default Write Disposition optional The BigQuery write disposition can be specified using the write disp parameter The default is WRITE APPEND Dataflow Pipeline parameters For basic usage we recommend the following parameters python data generator pipeline py project PROJECT ID setup file setup py worker machine type n1 highcpu 32 This is a high cpu process so tuning the machine type will boost performance runner DataflowRunner run on Dataflow workers staging location gs BUCKET NAME test temp location gs BUCKET NAME temp save main session serializes main session and sends to each worker For isolating your Dataflow workers on a private network you can additionally specify region us east1 subnetwork FULL PATH TO SUBNET network NETWORK ID no use public ips Modifying FakeRowGen You may want to change the FakeRowGen DoFn class to more accurately spoof your data You can use special map to map substrings in field names to Faker Providers https faker readthedocs io en latest providers html The only requirement for this DoFn is for it to return a list containing a single python dictionary mapping field names to values So hack away if you need something more specific any python code is fair game Keep in mind that if you use a non standard module available in PyPI you will need to make sure it gets installed on each of the workers or you will get namespace issues This can be done most simply by adding the module to setup py 
### Generating joinable tables (snowflake schema)

To generate multiple tables that join based on certain keys, start by generating the central fact table with the above described [data_generator_pipeline.py](data-generator-pipeline/data_generator_pipeline.py). Then use [data_generator_joinable_table.py](data-generator-pipeline/data_generator_joinable_table.py) with the above described parameters for the new table, plus the three additional parameters described below:

- `--fact_table`: the existing fact table in BigQuery that will be queried to obtain the list of distinct key values.
- `--source_joining_key_col`: the field name of the foreign key column in the existing table.
- `--dest_joining_key_col`: the field name in the table we are generating with the pipeline, used for joining to the existing table.

Note that this method selects the distinct keys from the fact table as a side input, which is passed as a list to each worker; each worker randomly selects a value to assign to the record. This means that this list must comfortably fit in memory, which makes this method only suitable for key columns with relatively low cardinality (< 1 billion distinct keys). If you have more rigorous needs for generating joinable schemas, you should consider using the distribution matcher pipeline.

### Performance testing data generator

Usage steps:

1. Generate the posterior histogram table. For an example of how to do this on an existing BigQuery table, look at the BigQuery histogram tool described later in this doc.
2. Use the [data_distribution_matcher.py](data-generator-pipeline/data_distribution_matcher.py) pipeline.

You can specify the schema file (or input table), `--gcs_output_prefix`, and output format the same way as described above in the human readable data generator section. Additionally, you must specify a `--histogram_table`. This table will have a field for each key column (storing a hash of each value) and the frequency with which these values occur.

#### Generating joinable schemas

Joinable tables can be created by running the distribution matcher on a histogram for all relevant tables in the dataset. Because each histogram table entry captures the hash of each key it refers to, we can capture exact join scenarios without handing over any real data.

### BigQuery scripts

Included are three BigQuery utility scripts to help you with your data generating needs. The first helps with loading many GCS files to BigQuery while staying under the 15 TB per-load-job limit, the next will help you profile the distribution of an existing dataset, and the last will allow you to resize BigQuery tables to a desired size.

#### BigQuery batch loads

This script is meant to orchestrate BigQuery load jobs of many json files on Google Cloud Storage. It ensures that each load stays under the 15 TB per-load-job limit. It operates on the output of `gsutil -l`.

This script can be called with the following arguments:

- `--project`: GCP project ID.
- `--dataset`: BigQuery dataset ID containing the table you wish to populate.
- `--table`: BigQuery table ID of the table you wish to populate.
- `--source_file`: the output of `gsutil -l` with the URI of each file that you would like to load.
- `--create_table`: if passed, this script will create the destination table.
- `--schema_file`: path to a json file defining the destination BigQuery table schema.
- `--partitioning_column`: name of the field for date partitioning in the destination table.
- `--max_bad_records`: number of permissible bad records per load job.

Example usage:

```
gsutil -l gs://bucket/path/to/json/file/prefix*.json > files_to_load.txt

python bq_load_batches.py \
  --project=<project> \
  --dataset=<dataset_id> \
  --table=<table_id> \
  --partitioning_column=date \
  --source_file=files_to_load.txt
```

#### BigQuery histogram tool

This script will create a BigQuery table containing the hashes of the key columns (specified as a comma separated list to the `--key_cols` parameter) and the frequency with which that group of key columns appears in the input table. This serves as a histogram of the original table and will be used as the source for [data_distribution_matcher.py](data-generator-pipeline/data_distribution_matcher.py).

Example usage:

```
python bq_histogram_tool.py \
  --input_table=<project>.<dataset>.<source_table> \
  --output_table=<project>.<dataset>.<histogram_table> \
  --key_cols=item_id,store_id
```

#### BigQuery table resizer

This script helps increase the size of a table based on a generated or sampled source. If you are short on time and have a requirement to generate a 100 TB table, you can use this script to generate a few GB and copy the table into itself until it is the desired size or number of rows. While this would be inappropriate for accurate performance benchmarking, it can be used to get a query-specific cost estimate. This script can be used to copy a table in place, or to create a new table if you want to maintain the record of the original records. You can specify the target table size in either number of rows or GB.

Example usage:

```
python bq_table_resizer.py \
  --project my-project-id \
  --source_dataset my-dataset-id \
  --source_table my-source-table-id \
  --destination_dataset my-dataset-id \
  --destination_table my-new-table-id \
  --target_gb 15000 \
  --location US
```

#### Running the tests

Note that the tests for the BigQuery table resizer require that you have `GOOGLE_APPLICATION_DEFAULT` set to credentials with access to a BigQuery environment where you can create and destroy tables.

```
cd data-generator-pipeline
python -m unittest discover
```
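For intuition, the "copy the table into itself" approach used by the table resizer above boils down to repeatedly appending a table to itself until it reaches a target size. A rough sketch with the BigQuery client library follows; the table name and target row count are placeholders, and this is not the script itself:

```python
from google.cloud import bigquery

# Placeholders -- substitute your own fully qualified table and target size.
TABLE_ID = "my-project.my_dataset.my_table"
TARGET_ROWS = 1_000_000_000

client = bigquery.Client()
table_ref = bigquery.TableReference.from_string(TABLE_ID)
job_config = bigquery.QueryJobConfig(
    destination=table_ref,
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

# Each pass appends the table to itself, roughly doubling the row count.
while client.get_table(table_ref).num_rows < TARGET_ROWS:
    client.query(f"SELECT * FROM `{TABLE_ID}`", job_config=job_config).result()
```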
Unit Tests
===

Run unit tests after installing development dependencies:

```bash
pip install -r requirements-dev.txt
pytest
```

Save and Replay VM Deletion Events
===

It is useful to replay VM deletion events to test out changes to the Background Function. See the [Replay Quickstart][replay-qs] for more information.

1. Deploy the function
2. Get the subscription of the function with `gcloud pubsub subscriptions list`
3. Create a pubsub snapshot with `gcloud pubsub snapshots create vm-deletions --subscription <subscription>`
4. Delete a VM instance
5. Deploy a new version of the same function
6. Replay the deletion events to the new version with `gcloud pubsub subscriptions seek <subscription> --snapshot=vm-deletions`
7. Inspect the function output with `gcloud functions logs read --limit 50`

Interactive Testing
===

A run helper method is provided to run the function from a local workstation. First, make sure you have the following environment variables set as if they would be in the context of GCF.

```bash
# env | grep DNS
DNS_VM_GC_DNS_PROJECT=dnsregistration
DNS_VM_GC_DNS_ZONES=nonprod-private-zone
```

If necessary, use Application Default Credentials:

```bash
# env | grep GOOG
GOOGLE_APPLICATION_CREDENTIALS=/Users/jmccune/.credentials/dns-logging-83a9d261f444.json
```

Run the python REPL with debugging enabled via the DEBUG environment variable.

```ipython
Python 3.7.3 (default, Mar 27 2019, 09:23:39)
[Clang 10.0.0 (clang-1000.11.45.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import conftest
>>> from main_test import run
>>> run()
{"message": "BEGIN Zone cleanup", "managed_zone": "nonprod-private-zone"}
{"message": "BEGIN search for deletion candidates", "instance": "test", "ip": "10.138.0.45"}
{"message": "Skipped, not an A record", "record": {"name": "gcp.example.com.", "type": "NS", "ttl": 21600, "rrdatas": ["ns-gcp-private.googledomains.com."], "signatureRrdatas": [], "kind": "dns#resourceRecordSet"}}
{"message": "Skipped, not an A record", "record": {"name": "gcp.example.com.", "type": "SOA", "ttl": 21600, "rrdatas": ["ns-gcp-private.googledomains.com. cloud-dns-hostmaster.google.com. 1 21600 3600 259200 300"], "signatureRrdatas": [], "kind": "dns#resourceRecordSet"}}
{"message": "Skipped, shortname != instance", "record": {"name": "keep.gcp.example.com.", "type": "A", "ttl": 300, "rrdatas": ["10.138.0.45"], "signatureRrdatas": [], "kind": "dns#resourceRecordSet"}}
{"message": "Skipped, ip does not match", "record": {"name": "test.keep.gcp.example.com.", "type": "A", "ttl": 300, "rrdatas": ["10.138.0.43", "10.138.0.44", "10.138.0.45"], "signatureRrdatas": [], "kind": "dns#resourceRecordSet"}}
{"message": "Skipped, ip does not match", "record": {"name": "test.nonprod.gcp.example.com.", "type": "A", "ttl": 300, "rrdatas": ["10.138.0.250"], "signatureRrdatas": [], "kind": "dns#resourceRecordSet"}}
{"message": "END search for deletion candidates", "candidates": []}
{"message": "END Zone cleanup", "managed_zone": "nonprod-private-zone"}
>>>
```

Unit Tests
===

Run unit tests to interact with fixture data. Fixture data is provided for the main entry point of a pubsub message sent to Google Cloud Functions and for the API response when collecting the IP address from the compute API.
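As an illustration of that pattern only (the file and fixture names below are hypothetical, not the repository's actual tests), a fixture-driven test typically loads a captured log entry, wraps it the way Pub/Sub delivers it, and asserts on the decoded payload:

```python
# test_dns_vm_gc_sketch.py -- hypothetical example, not part of the repo.
import base64
import json


def load_fixture(path="fixtures/instance_delete_event.json"):
    """Load a captured Stackdriver log entry and wrap it like a Pub/Sub event."""
    with open(path) as fd:
        log_entry = json.load(fd)
    return {"data": base64.b64encode(json.dumps(log_entry).encode()).decode()}


def test_event_decodes_to_instance_delete():
    event = load_fixture()
    payload = json.loads(base64.b64decode(event["data"]))
    assert payload["jsonPayload"]["event_subtype"] == "compute.instances.delete"
```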
Test dependencies
---

Install test dependencies:

```bash
pip install pytest
pip install google-api-python-client
```

Run the tests
---

```
pytest -vv tests
```

Sample events
===

This is the JSON log data sent from Stackdriver's compute.instances.delete event to Pub/Sub and then to the Background Function implemented in Python. There are two events, one for the GCE_API_CALL initiating the VM deletion, the second for the GCE_OPERATION_DONE event concluding the VM deletion. Obtained using `gcloud functions logs read --limit 100`.

```txt
D dns_vm_gc 578383257362746 2019-06-12 01:30:09.128 Function execution started
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 {
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "insertId": "6o9bdnfxt05mn",
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "jsonPayload": {
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "actor": {
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "user": "jmccune@google.com"
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 },
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "event_subtype": "compute.instances.delete",
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "event_timestamp_us": "1560300993146327",
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "event_type": "GCE_API_CALL",
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "ip_address": "",
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "operation": {
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "id": "971500189857477422",
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "name": "operation-1560300992590-58b15e267f2cc-e7529c4d-f1343924",
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "type": "operation",
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "zone": "us-west1-a"
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 },
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "request": {
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "body": "null",
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "url": "https://www.googleapis.com/compute/v1/projects/user-dev-242122/zones/us-west1-a/instances/test?key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 },
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "resource": {
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "id": "613579339353422259",
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "name": "test",
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "type": "instance",
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "zone": "us-west1-a"
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 },
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "trace_id": "operation-1560300992590-58b15e267f2cc-e7529c4d-f1343924",
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36",
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "version": "1.2"
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 },
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "labels": {
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "compute.googleapis.com/resource_id": "613579339353422259",
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "compute.googleapis.com/resource_name": "test",
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "compute.googleapis.com/resource_type": "instance",
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "compute.googleapis.com/resource_zone": "us-west1-a"
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 },
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "logName": "projects/user-dev-242122/logs/compute.googleapis.com%2Factivity_log",
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "receiveTimestamp": "2019-06-12T00:56:33.188262452Z",
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "resource": {
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "labels": {
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "instance_id": "613579339353422259",
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "project_id": "user-dev-242122",
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "zone": "us-west1-a"
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 },
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "type": "gce_instance"
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 },
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "severity": "INFO",
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 "timestamp": "2019-06-12T00:56:33.146327Z"
I dns_vm_gc 578383257362746 2019-06-12 01:30:09.145 }
D dns_vm_gc 578383257362746 2019-06-12 01:30:09.146 Function execution took 20 ms, finished with status: 'ok'
D dns_vm_gc 578389534045377 2019-06-12 01:30:20.433 Function execution started
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 {
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "insertId": "hrbv6bfh9qzr2",
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "jsonPayload": {
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "actor": {
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "user": "jmccune@google.com"
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 },
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "event_subtype": "compute.instances.delete",
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "event_timestamp_us": "1560301035092322",
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "event_type": "GCE_OPERATION_DONE",
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "operation": {
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "id": "971500189857477422",
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "name": "operation-1560300992590-58b15e267f2cc-e7529c4d-f1343924",
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "type": "operation",
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "zone": "us-west1-a"
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 },
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "resource": {
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "id": "613579339353422259",
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "name": "test",
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "type": "instance",
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "zone": "us-west1-a"
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 },
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "trace_id": "operation-1560300992590-58b15e267f2cc-e7529c4d-f1343924",
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "version": "1.2"
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 },
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "labels": {
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "compute.googleapis.com/resource_id": "613579339353422259",
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "compute.googleapis.com/resource_name": "test",
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "compute.googleapis.com/resource_type": "instance",
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "compute.googleapis.com/resource_zone": "us-west1-a"
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 },
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "logName": "projects/user-dev-242122/logs/compute.googleapis.com%2Factivity_log",
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "receiveTimestamp": "2019-06-12T00:57:15.146606691Z",
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "resource": {
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "labels": {
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "instance_id": "613579339353422259",
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "project_id": "user-dev-242122",
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "zone": "us-west1-a"
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 },
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "type": "gce_instance"
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 },
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "severity": "INFO",
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 "timestamp": "2019-06-12T00:57:15.092322Z"
I dns_vm_gc 578389534045377 2019-06-12 01:30:20.438 }
D dns_vm_gc 578389534045377 2019-06-12 01:30:20.439 Function execution took 7 ms, finished with status: 'ok'
```
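For reference, a Pub/Sub-triggered background function receives these log entries base64-encoded in `event['data']`. The following is a generic sketch of the decoding step, not the project's actual handler (the function name is made up):

```python
import base64
import json


def handle_vm_deletion(event, context):
    """Generic sketch of a Pub/Sub-triggered background function.

    The Stackdriver log entry arrives base64-encoded in event['data'].
    """
    log_entry = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    payload = log_entry.get("jsonPayload", {})
    if payload.get("event_subtype") != "compute.instances.delete":
        return  # ignore unrelated events
    labels = log_entry["resource"]["labels"]  # project_id, zone, instance_id
    print(f"VM deletion event for instance_id={labels['instance_id']}")
```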
VM DNS Garbage Collection
===

This folder contains a [Background Function][bg] which deletes DNS A records when a VM is deleted.

**Please note** DNS record deletion is implemented, however, cannot be guaranteed. A race exists between the function obtaining the VM IP address and the `compute.instances.delete` operation. If the VM is deleted before the IP is obtained, the function will not delete the DNS record because it cannot check the IP address matches the VM being deleted.

In practice this background function collects the IP address well within the ~30 second window of the VM delete operation. Structured logs identify VM deletions which were not processed because the race was lost. See [Lost Race](#lost-race) for log filters to identify VMs deleted before cleanup could take place.

![Example Log Output](./img/example_logs.png)

Project Setup
===

This example has been developed for use with multiple service projects. A centralized logs project is used to host one pubsub topic for all VM deletion events. One deployment of the function implements the event handler.

* The logs project contains the dns-vm-gc Pub/Sub topic and the dns_vm_gc function deployed as a Background Function.
* One or more service projects contain VM resources to be deleted.
* The host project contains a VPC shared with the user project and DNS resource record sets needing to be cleaned up automatically.

Identify the Logs Project
===

Identify a project to host the `vm-deletions` Pub/Sub topic and the DNS VM GC Cloud Function. Service projects are configured to export filtered logs into this topic.

If a project does not already exist, create a new project. A suggested name is `logs`. The rest of this document will use `logs-123456` as the project ID for the centralized logs project.

Create the vm-deletions Pub/Sub topic
---

Service projects export `compute.instances.delete` events to the `vm-deletions` topic. The VM DNS GC background function subscribes to this topic and triggers on each event.

Create a topic named `vm-deletions` in the logs project as per [Create a topic][pubsub-quickstart].

Configure Log Exports
---

Configure Log Exports in one or more service projects. Logs are exported to the `vm-deletions` topic in the logs project.

[Stackdriver logs exports][logs-exports] are used to convey VM lifecycle events to the DNS VM GC function via Cloud Pub/Sub. A Stackdriver filter is used to limit logs to VM deletion events, reducing data traveling through Pub/Sub.

Configure an export to the `vm-deletions` topic, for example `projects/logs-123456/topics/vm-deletions`, with the following filter:

```
resource.type="gce_instance"
jsonPayload.event_type="GCE_API_CALL"
jsonPayload.event_subtype="compute.instances.delete"
```

This filter results in one event published per VM deletion, a `GCE_API_CALL` event when the VM deletion is requested. If additional events are published to the topic, the function triggers, but ignores events which do not match this filter.

Service Account
===

The Background Function runs with a service account identity. Create a service account named `dns-vm-gc` in the logs project for this purpose. This example assumes [GCP-managed][sa-gcp-managed] keys.

If you are modifying this example you may download the service account key and run locally as the service account using the GOOGLE_APPLICATION_CREDENTIALS environment variable. See [Providing credentials to your application][adc] for details.

Service Account Roles
===

The Background Function service account requires the following roles.

DNS Admin
---

Grant the DNS Admin role to the dns-vm-gc service account in the host project. DNS Admin allows the DNS VM GC function to delete DNS records in the host project. This role may be granted at the Shared VPC project level.

Compute Viewer
---

Grant the Compute Viewer role to the dns-vm-gc service account. Compute Viewer allows the DNS VM GC function to read the IP address of the VM, necessary to ensure the correct A record is deleted. This role may be granted at the project, folder, or organization level as appropriate.

Logs Writer
---

Grant the Logs Writer role to the dns-vm-gc service account. Logs Writer is required to write structured event logs to the [Reporting Stream](#reporting-stream). This role may be granted at the project, folder, or organization level as appropriate. It is recommended to grant the role at the same level the log stream exists at, the logging project by default. See [Custom Reporting Destination](#custom-reporting-destination) for more information.

Deployment
===

Deploy this function into the logs project to simplify the subscription to the `vm-deletions` topic.

Environment variables are used to configure the behavior of the function. Update the env.yaml file to reflect the correct VPC Host project and Managed Zone names for your environment. A sample is provided in env.yaml.sample.

```yaml
# env.yaml
---
DNS_VM_GC_DNS_PROJECT: my-vpc-host-project
DNS_VM_GC_DNS_ZONES: my-nonprod-private-zone,my-prod-private-zone
```

```bash
gcloud functions deploy dns_vm_gc \
  --retry \
  --runtime=python37 \
  --service-account=dns-vm-gc@logs-123456.iam.gserviceaccount.com \
  --trigger-topic=vm-deletions \
  --env-vars-file=env.yaml
```

Logging and Reporting
===

The DNS VM GC function logs into two different locations. Structured events intended for reporting are sent to a special purpose reporting stream. Plain text logs are sent to the standard Cloud Function logs accessible via `gcloud functions logs read`.

Reporting Stream
---

The reporting stream is intended to answer two primary questions:

1. Which VM deletion events, if any, were not processed?
2. What records were deleted automatically?

When the function loses the race against the delete operation, the event is not processed and the function reports a detail code of `LOST_RACE`. When the function deletes a record automatically, the fully qualified domain name is logged along with a detail code of `RR_DELETED` for resource record deleted.

Custom Reporting Destination
---

By default the reporting stream is located at `projects/<logs_project>/logs/<function_name>`. The reporting stream is configurable by setting the `DNS_VM_GC_REPORTING_LOG_STREAM` environment variable when deploying the function. For example, to send reporting events to the organization level:

```yaml
# env.yaml
---
DNS_VM_GC_DNS_PROJECT: my-vpc-host-project
DNS_VM_GC_DNS_ZONES: my-nonprod-private-zone,my-prod-private-zone
DNS_VM_GC_REPORTING_LOG_STREAM: organizations/000000000000/logs/dns-vm-gc-report
```

See the `logName` field of the [LogEntry][logentry] resource for a list of possible report stream destinations.

Reading the Report Logs
---

Download all structured logs to the report stream produced by the function using:

```bash
gcloud functions logs read logName="projects/<logs_project>/logs/<function_name>"
```

Cloud Function Logs
---

The function also logs unstructured plain text logs using [Cloud Function Logs][gcf-logs]. Because these logs are unstructured, they are less useful than the Report Stream logs for reporting purposes; however, they are present to keep all activity associated with each execution ID of the function.

Note the cloud function logs have an execution_id. This execution ID is not readily available at runtime and is therefore absent from the structured report log stream. The function logs a message with the event_id being processed to associate the execution_id with the event_id. This behavior is intended to correlate each execution in the Cloud Function Logs with each report in the Report Stream.

The correlation of execution_id to event_id is not necessary for day to day reporting. The correlation is useful for the rare situation of complete end-to-end tracing.

Reporting
===

Lost Race
---

Periodic reporting should be performed to monitor for `NOT_PROCESSED` results. In the event of a lost race, automatic DNS record deletion is not guaranteed. The following Stackdriver Advanced Filter identifies when a VM deletion event was not processed automatically:

```
resource.type="cloud_function"
resource.labels.function_name="dns_vm_gc"
logName="projects/dns-logging/logs/dns-vm-gc-report"
jsonPayload.result="NOT_PROCESSED"
```

Deleted Resource Records
---

All records automatically deleted may be identified with a filter on the detail code.

```
resource.type="cloud_function"
resource.labels.function_name="dns_vm_gc"
logName="projects/dns-logging/logs/dns-vm-gc-report"
jsonPayload.detail="RR_DELETED"
```

Debug Logs
---

Debug logs are also available, but are not sent by default. To enable them, deploy the function with the `DEBUG` environment variable set to a non-empty string.

Note, debug logging generates 2*N log events every time a VM is deleted, where N is the number of DNS records across all configured managed zones. For example, deleting 10 VM instances with 1,000 managed DNS records generates 20,000 debug log entries at minimum.

Detail Codes
---

The following detail codes may be reported to the reporting stream:

| Detail Code | Description | Result |
| ----------- | ----------- | ------ |
| NO_MATCHES | No DNS records matched the VM deleted | OK |
| RR_DELETED | A DNS record matched and has been deleted | OK |
| VM_NO_IP | The function won the race, but the VM has no IP | OK |
| IGNORED_EVENT | Trigger event is not a VM delete GCE_API_CALL | OK |
| LOST_RACE | The VM was deleted before the IP was determined | NOT_PROCESSED |

In addition, there are detail codes when DEBUG is turned on indicating the reason why DNS records were not automatically deleted.

| Detail Code | Reason DNS record not deleted | Result |
| ----------- | ----------------------------- | ------ |
| RR_NOT_A_RECORD | Resource Record is not an A record | OK |
| RR_NAME_MISMATCH | Shortname doesn't match the VM name | OK |
| RR_IP_MISMATCH | rrdatas is not one IP matching the VM's IP | OK |

[bg]: https://cloud.google.com/functions/docs/writing/background
[sa-gcp-managed]: https://cloud.google.com/iam/docs/understanding-service-accounts#managing_service_account_keys
[pubsub-quickstart]: https://cloud.google.com/pubsub/docs/quickstart-console#create_a_topic
[logs-exports]: https://cloud.google.com/logging/docs/export/
[adc]: https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application
[logentry]: https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry
[gcf-logs]: https://cloud.google.com/functions/docs/monitoring/logging
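The debug detail codes above describe a per-record matching check. The sketch below restates those rules as a small predicate; the helper name and data shapes are assumptions for illustration, not the function's actual code:

```python
def is_deletion_candidate(record, instance_name, instance_ip):
    """Return True when a DNS resource record set should be deleted for a VM.

    Mirrors the debug detail codes: RR_NOT_A_RECORD, RR_NAME_MISMATCH and
    RR_IP_MISMATCH. `record` is a dict shaped like the Cloud DNS
    resourceRecordSet entries shown in the sample logs above.
    """
    if record.get("type") != "A":
        return False  # RR_NOT_A_RECORD
    shortname = record["name"].split(".")[0]
    if shortname != instance_name:
        return False  # RR_NAME_MISMATCH
    if record.get("rrdatas") != [instance_ip]:
        return False  # RR_IP_MISMATCH: must be exactly one IP matching the VM
    return True
```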
Copyright 2023 Google. This software is provided as-is, without warranty or representation for any use or purpose. Your use of it is subject to your agreement with Google. # Twilio Conversation Integration with a Virtual Agent using Dialogflow This is an example how to integrate a Twilio Conversation Services with Virtual Agent using Dialogflow. ## Project Structure ``` . └── twilio └── src └── main └── java └── com.middleware.controller β”œβ”€β”€ cache # redis initialization and mapping of conversations β”œβ”€β”€ dialogflow # handler of a conversation with dialogflow β”œβ”€β”€ rest # entry point and request handler β”œβ”€β”€ twilio # marketplace and twilio conversation services set up and initialization β”œβ”€β”€ util # utility to inittialize a twilio conversation to test the integration └── webhook # classes to process new message and new participant └── resources └── application.properties └── proto └── dialogflow.proto # dialogflow conversation information holder β”œβ”€β”€ pom.xml └── README.md ``` ## Components - Dialogflow - Google Cloud MemoryStore (Redis) - Twilio with Flex ## How mapping between Twilio Conversation and Dialogflow Conversation is implemented We use a mapping between the Twilio Conversation SID and the Dialogflow Conversation ID to maintain the conversation context. This mapping is stored in the [Redis cache.](https://cloud.google.com/memorystore/docs/redis/redis-overview) and it works as followed: 1. If there is no active conversation available in the Redis cache, we create a new Dialogflow conversation and store the mapping in the Redis cache. 2. If there is an active conversation available in the Redis cache, we use the same Dialogflow conversation to send the message to the Dialogflow agent. 3. If the user is not already present in the conversation, we add them to the conversation. 4. On each message, we send the message to the Dialogflow agent and get the response using [AnalyzeContent API](https://developers.google.com/resources/api-libraries/documentation/dialogflow/v2beta1/java/latest/index.html?com/google/api/services/dialogflow/v2beta1/Dialogflow.Projects.Conversations.Participants.AnalyzeContent.html) 5. Reply message is sent to the user using the Twilio Conversations API. 6. At the time of handoff, we send the conversation context to the Flex UI using the [Interaction API](https://www.twilio.com/docs/flex/developer/conversations/interactions-api/interactions). 7. If the Agent Assist feature is enabled, we won't close the conversation in Dialogflow. Otherwise, we will close the conversation in Dialogflow if agent-handoff or end-of-conversation is detected. 9. If the conversation is closed in Dialogflow, we delete the mapping from the Redis cache. ## Setup Instructions ### GCP Project Setup #### Creating a Project in the Google Cloud Platform Console If you haven't already created a project, create one now. Projects enable you to manage all Google Cloud Platform resources for your app, including deployment, access control, billing, and services. 1. Open the [Cloud Platform Console][cloud-console]. 1. In the drop-down menu at the top, select **Create a project**. 1. Give your project a name. 1. Make a note of the project ID, which might be different from the project name. The project ID is used in commands and in configurations. [cloud-console]: https://console.cloud.google.com/ #### Enabling billing for your project. If you haven't already enabled billing for your project, [enable billing][enable-billing] now. 
Enabling billing allows is required to use Cloud Bigtable and to create VM instances. [enable-billing]: https://console.cloud.google.com/project/_/settings #### Install the Google Cloud SDK. If you haven't already installed the Google Cloud SDK, [install the Google Cloud SDK][cloud-sdk] now. The SDK contains tools and libraries that enable you to create and manage resources on Google Cloud Platform. [cloud-sdk]: https://cloud.google.com/sdk/ #### Setting Google Application Default Credentials Set your [Google Application Default Credentials][application-default-credentials] by [initializing the Google Cloud SDK][cloud-sdk-init] with the command: ``` gcloud init ``` Generate a credentials file by running the [application-default login](https://cloud.google.com/sdk/gcloud/reference/auth/application-default/login) command: ``` gcloud auth application-default login ``` [cloud-sdk-init]: https://cloud.google.com/sdk/docs/initializing [application-default-credentials]: https://developers.google.com/identity/protocols/application-default-credentials ### Local Development Set Up This is a Spring Boot application designed to run on port 8080. Upon launch, the application will initialize and bind to port 8080, making it accessible for incoming connections. #### Set of environment variables: The following environmental variables need to be set up in the localhost: ``` REDIS_HOST : IP address of the Redis instance REDIS_PORT : Port of the Redis instance (Default: 6379) ``` ``` TWILIO_ADD_ON_CONVERSATION_PROFILE_ID : Dialogflow Conversation Profile ID (Provided by Twilio Integration) TWILIO_ADD_ON_PROJECT_ID : GCP Project ID where the Dialogflow Agent is deployed (Provided by Twilio Integration) TWILIO_ADD_ON_AGENT_LOCATION(Optional): Dialogflow Agent Location (Provided by Twilio Integration). Default: global ``` ``` TWILIO_ACCOUNT_SID : Twilio Account SID TWILIO_AUTH_TOKEN : OAuth Token for the Twilio Account TWILIO_WORKSPACE_SID : Flex Workspace SID, (Used for interaction task creation) TWILIO_WORKFLOW_SID : Flex Workflow SID, (Used for interaction task creation) ``` #### Redis Set Up ##### Install a Redis Emulator Please refer to this [doc](https://redis.io/docs/getting-started/) to install a redis emulator in your localhost For Linux: [Install Redis on Linux](https://redis.io/docs/getting-started/installation/install-redis-on-linux/) ##### Initialized the server ``` $ redis-server ``` ##### Basic commands to access the data List all the keys ``` $ redis-cli 127.0.0.1:6379> KEYS * (empty array) ``` Delete a key ``` 127.0.0.1:6379> DEL "<key>" ``` Get a key ``` 127.0.0.1:6379> GET "<key>" ``` ## Usage ### Endpoints > POST /handleConversations Events Handled 1. **onParticipantAdded** : Expected Variables from the request - ConversationSid 2. 
**onMessageAdded** : Expected Variables from the request - ConversationSid - Body - Source ### Initialize the application Reference: [Building an Application with Spring Boot](https://spring.io/guides/gs/spring-boot/) ``` ./mvnw spring-boot:run ``` ### Send a request ``` curl --location --request POST 'localhost:8080/handleConversations' \ --header 'Content-Type: application/x-www-form-urlencoded' \ --data-urlencode 'Body=Talk to agent' \ --data-urlencode 'EventType=onMessageAdded' \ --data-urlencode 'Source=whatsapp' \ --data-urlencode 'ConversationSid=XXXXXX' \ ``` ## How to get the dialogflow conversation profile id Dialogflow Conversation Profile ID also known as Integration ID is a unique text which is used for the interaction of Dialogflow with 3rd party applications like Twilio. > [Create/Edit a Dialogflow Conversation Profile](https://cloud.google.com/agent-assist/docs/conversation-profile#create_and_edit_a_conversation_profile) You can also create a conversation profile using Twilio One Click Integration from the Dialogflow Console. > Dialogflow Console > Corresponding Agent > Manage > Integrations > > Twilio > One Click Integration > Connect Once the conversation profile is created, you can find the conversation profile id in the following ways: > Open [Agent Assist](https://agentassist.cloud.google.com/), then Conversation > Profiles on the left bottom ## How to create a Twilio conversation 1. Create a Conversation programmatically using the Twilio [Conversations API](https://www.twilio.com/docs/conversations/api/conversation-resource?code-sample=code-create-conversation&code-language=curl&code-sdk-version=json) 2. Add the customer chat participant to the conversation. Use the SID of the conversation that you created in the previous step. [Reference](https://www.twilio.com/docs/conversations/api/conversation-participant-resource?code-sample=code-create-conversation-participant-chat&code-language=curl&code-sdk-version=json) 3. Add a [scoped webhook](https://www.twilio.com/docs/conversations/api/conversation-scoped-webhook-resource?code-sample=code-create-attach-a-new-conversation-scoped-webhook&code-language=curl&code-sdk-version=json) with "webhook" as the target using the Conversations Webhook endpoint. With a [scoped webhook](https://www.twilio.com/docs/conversations/api/conversation-scoped-webhook-resource?code-sample=code-create-attach-a-new-conversation-scoped-webhook&code-language=curl&code-sdk-version=json) set to your endpoint and the configuration filter set to "onMessageAdded", any message that is added to the conversation will invoke the configured webhook URL. 4) Simulate a new customer message by using the Message endpoint. Remember to set Author to the identity you set in step 2 and to add the header β€œX-Twilio-Webhook-Enabled” to the request so our webhook gets invoked 5) Use the Twilio [Interaction API](https://www.twilio.com/docs/flex/developer/conversations/interactions-api) to invoke a handoff to the Flex UI ## How to run the initializer to programmatically create a Twilio conversation > [ConversationInitializer](./src/main/java/com/middleware/controller/util/ConversationInitializer.java) ``` mvn -DskipTests package exec:java -Dexec.mainClass=com.middleware.controller.util.ConversationInitializer -Dexec.args="add" mvn -DskipTests package exec:java -Dexec.mainClass=com.middleware.controller.util.ConversationInitializer -Dexec.args="delete <conversation sid>" ``` # Deployment JIB is used to build the docker image and push it to the GCR. 
## How to run the initializer to programmatically create a Twilio conversation

> [ConversationInitializer](./src/main/java/com/middleware/controller/util/ConversationInitializer.java)

```
mvn -DskipTests package exec:java -Dexec.mainClass=com.middleware.controller.util.ConversationInitializer -Dexec.args="add"
mvn -DskipTests package exec:java -Dexec.mainClass=com.middleware.controller.util.ConversationInitializer -Dexec.args="delete <conversation sid>"
```

# Deployment

Jib is used to build the Docker image and push it to GCR (Google Container Registry).

```bash
mvn compile jib:build
```

# References

1. [Twilio Conversations Fundamentals](https://www.twilio.com/docs/conversations/fundamentals)
2. [Dialogflow](https://cloud.google.com/dialogflow/es/docs)
# Dataflow Streaming Schema Handler

This package contains a set of components required to handle unanticipated incoming streaming data into BigQuery with a schema mismatch. The code uses schema enforcement and a DLT (Dead Letter Table) approach to handle schema incompatibilities.

When a schema incompatibility is detected in an incoming message, the message is evolved to match the target table schema by adding and removing fields, and the evolved message is then stored in the target table. In addition, the original message (with the incompatible schema) is stored in the DLT for debugging and record-keeping purposes.

## Pipeline

[Dataflow Streaming Schema Handler](src/main/java/com/google/cloud/pso/pipeline/PubSubToBigQueryJSON.java) -
This is the main class deploying the pipeline. The program waits for Pub/Sub JSON push events. If the data matches the predefined schema, it ingests the streaming data into the successful BigQuery dataset. If there is a schema mismatch, it ingests the data into the unsuccessful BigQuery dataset (DLT).

![Architecture Diagram](img/architecture.png "Architecture")

Happy path:

> The incoming message fields match the targeted schema.
> The result is stored in the targeted table.

Schema mismatch:

> The incoming message fields do not match the targeted schema.
> The incoming message fields that match the targeted schema are kept, any additional fields that do not match are removed, and any missing fields are set to null. The result is then stored in the targeted table.
> The original message (with the mismatched schema) is stored in the DLT table.

## Getting Started

### Requirements

* [gcloud sdk](https://cloud.google.com/sdk/docs/install-sdk)
* Java 11
* Maven 3
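A quick, optional way to confirm the prerequisites are available locally before building:

```sh
gcloud version   # gcloud SDK installed
java -version    # should report a Java 11 runtime
mvn -version     # should report Maven 3.x
```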
### Building the Project

Build the entire project using the maven compile command.

```sh
mvn clean compile
```

### Setting GCP Resources

```bash
# Set the pipeline vars
PROJECT_ID=<gcp-project-id>
BQ_DATASET=<bigquery-dataset-name>
BUCKET=<gcs-bucket>
PIPELINE_FOLDER=gs://${BUCKET}/dataflow/pipelines/streaming-benchmark
PUBSUB_TOPIC=<pubsub-topic>
PUBSUB_SUBS_ID=<subscriptions-id>
```

### Creating an example dataset

```sh
bq --location=US mk -d ${BQ_DATASET}
```

#### Create an example table

```sh
bq mk \
--table \
${PROJECT_ID}:${BQ_DATASET}.tutorial_table \
./src/resources/person.json
```

#### Create an example PubSub topic and subscription

```sh
gcloud pubsub topics create ${PUBSUB_TOPIC}

gcloud pubsub subscriptions create ${PUBSUB_SUBS_ID} \
--topic=${PUBSUB_TOPIC}
```

### Executing the Pipeline

The instructions below will deploy the Dataflow pipeline into the GCP project.

```bash
# Set the runner
RUNNER=DataflowRunner

# Compute Engine region
REGION=us-central1

# Build the template
mvn compile exec:java \
-Dexec.mainClass=com.google.cloud.pso.pipeline.PubsubToBigQueryJSON \
-Dexec.cleanupDaemonThreads=false \
-Dexec.args=" \
--project=${PROJECT_ID} \
--dataset=${BQ_DATASET} \
--stagingLocation=${PIPELINE_FOLDER}/staging \
--tempLocation=${PIPELINE_FOLDER}/temp \
--region=${REGION} \
--maxNumWorkers=5 \
--pubsubSubscription=projects/${PROJECT_ID}/subscriptions/${PUBSUB_SUBS_ID} \
--runner=${RUNNER}"
```

### Sending sample data to pubsub

Once the Dataflow job is deployed and running, we can send a few messages to see the output being stored in the GCP project.

Below is the happy-path message:

```sh
gcloud pubsub topics publish ${PUBSUB_TOPIC} \
--message="{\"tutorial_table\":{\"person\":{\"name\":\"happyPath\",\"age\":22},\"date\": \"2022-08-03\",\"id\": \"xpze\"}}"
```

Below is the mismatched-schema message, with the "age" field removed from the payload sent to Pub/Sub:

```sh
gcloud pubsub topics publish ${PUBSUB_TOPIC} \
--message="{\"tutorial_table\":{\"person\":{\"name\":\"mismatchSchema\"},\"date\": \"2022-08-03\",\"id\": \"avghy\"}}"
```
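To check the results, one option (assuming the example `tutorial_table` created above; the dead-letter table name depends on how the pipeline is configured) is to query the target table once the messages have been processed:

```sh
# Both messages should appear in the target table; the mismatched one
# should have the missing "age" field set to NULL, per the behaviour described above.
bq query --use_legacy_sql=false \
  "SELECT * FROM \`${PROJECT_ID}.${BQ_DATASET}.tutorial_table\` LIMIT 10"
```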