Topics tagged: rancher

• default backend - 404
  Continuous Integration and Delivery (CI, CD) • ingress, rancher • juvenalb
  0 Votes • 2 Posts • 0 Views

  Apparently I forgot the annotations and the IngressClass. I first added this:

      apiVersion: networking.k8s.io/v1
      kind: IngressClass
      metadata:
        annotations:
          ingressclass.kubernetes.io/is-default-class: "true"
        generation: 1
        labels:
          app.kubernetes.io/component: controller
          app.kubernetes.io/instance: haproxy
          app.kubernetes.io/managed-by: Helm
          app.kubernetes.io/name: haproxy-ingress-controller
        name: haproxy
      spec:
        controller: haproxy.org/ingress-controller

  And in the Ingress:

      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: notesncrap
        annotations:
          kubernetes.io/ingress.class: "haproxy"
        generation: 1
      spec:
        rules:
        - host: k8scrap.selea.se
          http:
            paths:
            - path: /
              backend:
                serviceName: notesncrap
                servicePort: 80

  Please observe the annotation.
• Apache Camel K metrics in Rancher's embedded Prometheus
  Continuous Integration and Delivery (CI, CD) • kubernetes, prometheus, rancher • tahishae
  0 Votes • 2 Posts • 0 Views

  Solved by updating the Prometheus operator from 0.38.1 to 0.39.0. More details: https://github.com/apache/camel-k/issues/2794
• How do I automate deployments with Kubernetes?
  Continuous Integration and Delivery (CI, CD) • docker, kubernetes, deployment, rancher • Pearlaqua
  0 Votes • 2 Posts • 1 View

  briley: Since you are using Rancher, the easiest way would be to register a
  custom Rancher Catalog and create an item for each stack/service you want
  to deploy. A Rancher Catalog is a Git repository with a specific directory
  structure (see the sketch below). Then in Jenkins you can create a job that
  calls the Rancher REST API to deploy/update the stack/service. Rancher, in
  turn, will pull the latest version of the Docker image for that service and
  deploy it according to the Docker Compose file from the catalog.

  Pros:
  - a generic approach that can be used for almost every app
  - Jenkins itself can be deployed into a Rancher environment, and agents can
    be created in the Kubernetes cluster

  Cons:
  - the development team must follow a solid release strategy to be able to
    use generic builds
  - storage drivers are still in question in the alpha release of Rancher 2.0
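  The original post referenced the catalog layout without reproducing it; the
  tree below is a sketch of a typical Rancher 1.x catalog repository, and the
  curl call is a hypothetical Jenkins step against the Rancher v1 API
  (project and service IDs, image name, and credentials are all
  placeholders):

      templates/
        myapp/                     # one directory per catalog item
          config.yml               # name, description, version metadata
          icon.png
          0/                       # one directory per template revision
            docker-compose.yml
            rancher-compose.yml

      # Trigger an in-service upgrade of a running service from Jenkins
      curl -s -u "$RANCHER_ACCESS_KEY:$RANCHER_SECRET_KEY" \
        -X POST -H 'Content-Type: application/json' \
        -d '{"inServiceStrategy": {"launchConfig": {"imageUuid": "docker:myorg/myapp:latest"}}}' \
        "$RANCHER_URL/v1/projects/1a5/services/1s42/?action=upgrade"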
• Why do some components vanish from the catalog list if one component has been installed in Rancher?
  Continuous Integration and Delivery (CI, CD) • rancher • Pearlaqua
  0 Votes • 2 Posts • 1 View

  You haven't specified anything concrete, so I can't give you a specific
  reason. But catalog items can:
  - be mutually exclusive (e.g. you can't use ipsec and vxlan at the same time)
  - be deployed at most once in an environment
  - require the environment to be a specific orchestration type
  - require a certain range of Rancher versions
  - etc.
• Why is the K8s UI not available when Rancher is used?
  Continuous Integration and Delivery (CI, CD) • kubernetes, rancher • shizuka
  0 Votes • 2 Posts • 1 View

  I've seen this happen, especially on the newest (as of now) version. Try
  going into the infrastructure stacks and look for stopped containers. Often
  it helps to refresh the entire deployment: hit the "Up to date" button next
  to the "kubernetes" stack and hit Save (if I remember correctly) at the
  bottom to force the refresh. You could also try manually restarting the
  kubernetes-dashboard containers. Lastly, you could use the kubectl CLI to
  delete the dashboard pods and force them to be recreated. Let me know how
  it goes! I had to apply this fix a few times myself.
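  A minimal sketch of that last kubectl suggestion, assuming the dashboard
  runs in kube-system with the standard k8s-app=kubernetes-dashboard label:

      # Delete the dashboard pods; their controller recreates them
      kubectl -n kube-system delete pod -l k8s-app=kubernetes-dashboard

      # Watch the replacements come up
      kubectl -n kube-system get pods -l k8s-app=kubernetes-dashboard -w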
• "Error: forwarding ports: Upgrade request required" error in Helm on a Kubernetes cluster
  Continuous Integration and Delivery (CI, CD) • centos, kubernetes, upgrade, helm, rancher • Rossere
  0 Votes • 2 Posts • 2 Views

  briley: See my answer here: I ran into this today when trying to use
  Garden.io for a cluster running in Jelastic, and found the solution in a
  GitHub comment. First acquire a local binary of Tiller (the server
  component of Helm), either by compiling it or by downloading it from the
  releases page. Then run:

      export HELM_HOST=":44134"
      tiller -listen ${HELM_HOST} -alsologtostderr >/dev/null 2>&1 &

  This runs a local instance of the Helm server. Now try your original helm
  command again; it will delegate to this local Tiller instead and manage to
  connect.
• Kubernetes multi cloud
  Continuous Integration and Delivery (CI, CD) • kubernetes, rancher • Bogopo
  0 Votes • 2 Posts • 1 View

  briley: (Rancher employee) Kubernetes itself is really not meant for that
  use case. A cluster is generally a set of machines in the same provider
  with close proximity (low latency) to each other:
  - etcd (and therefore API/CLI/UI/scheduling/everything) performance depends
    heavily on the worst-case latency between all members.
  - Only one cloud-provider integration (storage providers, L4 load
    balancers) can be configured, and most assume the nodes are all in the
    same "region" or similar concept.
  - Commonly used network plugins assume adjacency and/or provide no
    encryption or authentication suitable for communicating across untrusted
    networks or the internet.
  - Communication between pods/services within the cluster has no notion of
    the "nearest" place to reach a pod for a service, so latency from a pod
    to another service can be large (however far apart the two worst nodes
    are) and unpredictable (sometimes close, sometimes far). (There is an
    option for NodePorts to always go to a local pod, but if there is no
    local pod the traffic is simply dropped.)

  You can make a "custom" cluster, add nodes from wherever, and do what
  you're asking in Rancher, but you're not going to have a good time (no
  matter what product is building the cluster).
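  The NodePort behavior mentioned in that last point maps to the Service's
  externalTrafficPolicy field; a minimal sketch (names are placeholders):

      apiVersion: v1
      kind: Service
      metadata:
        name: example                    # hypothetical service
      spec:
        type: NodePort
        externalTrafficPolicy: Local     # only route to pods on the receiving node
        selector:
          app: example
        ports:
        - port: 80
          targetPort: 8080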
• Kubernetes on k3s can't resolve domains from custom DNS server (fritz.box with dnsmasq)
  Continuous Integration and Delivery (CI, CD) • dns, kubernetes, configuration management, helm, rancher • Laycee
  0 Votes • 2 Posts • 1 View

  I believe this is a current bug in k3s: the upstream DNS is hardcoded to
  1.1.1.1. It should be resolved shortly; see
  https://github.com/rancher/k3s/issues/53
• Rancher, Load Balancing and own domain
  Continuous Integration and Delivery (CI, CD) • dns, amazon web services, amazon ec2, rancher • Oba22
  0 Votes • 2 Posts • 1 View

  Your first step is to set up and configure the Route53 service from the
  Rancher catalog. After that, the load balancer will automatically update
  its DNS records in Route53.
• What are the two project layouts offered by Rancher on installation?
  Continuous Integration and Delivery (CI, CD) • rancher • Raziyah00
  0 Votes • 2 Posts • 1 View

  A Project is a Rancher-specific concept; a Project can contain multiple
  Kubernetes namespaces. The System project holds all the Kubernetes system
  components, such as kube-dns and ingress; think of it like the system
  services running on a machine or laptop. The Default project is where you
  can launch your workloads to start with, similar to a scratchpad. You can
  create new projects too, for example per application (Web App, Database,
  etc.) or per user of the cluster (Project-For-Bob, Project-For-John).
• Rancher & gitlab-runner - is there a way to get the gitlab-runner to tell Rancher to start the CI test environment instead?
  Continuous Integration and Delivery (CI, CD) • docker, gitlab, rancher • Alberto
  0 Votes • 2 Posts • 1 View

  The short answer is "it depends". Rancher itself runs using system-docker,
  and the Docker instances it starts for you, such as gitlab-runner, run
  separately in regular Docker. gitlab-runner will then start runners
  depending on its configuration: there are three methods to enable the use
  of docker run during jobs, each with its own trade-offs.
  https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#runner-configuration

  My eventual solution: I'd initially installed Rancher 1.6 and just couldn't
  get the dependencies working (RancherOS running Rancher 1.6 and the Rancher
  agent running on a couple of Alpine nodes). The upgrade to Rancher v2 was a
  steep learning curve, as you need to learn the Kubernetes ecosystem tooling
  (helm, kubectl, etc.). I still couldn't get a gitlab-runner working under
  Rancher v2, but upgrading to v2.0.8 fixed it. I could then add the custom
  Helm chart for gitlab-runner, and after cribbing the settings from
  https://gitlab.com/charts/gitlab-runner/blob/master/values.yaml I got
  gitlab-runner working under Rancher v2.
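  For reference, a minimal sketch of the kind of values the gitlab-runner
  chart of that era expected (the URL and token are placeholders, and key
  names vary between chart versions):

      gitlabUrl: https://gitlab.example.com/
      runnerRegistrationToken: "REPLACE_ME"   # from GitLab's CI/CD settings
      concurrent: 4
      runners:
        image: ubuntu:18.04     # default image jobs run in
        privileged: true        # required for docker-in-docker builds
        tags: "rancher,docker"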
• Measure service unavailability during upgrade
  Continuous Integration and Delivery (CI, CD) • upgrade, high availability, microservices, rancher • rosemadder
  0 Votes • 2 Posts • 1 View

  Alberto: What you suggest might be the simplest way to collect the data,
  but you will have to do quite a bit of work to extract the availability
  over given periods. I think it's fair to say that if you want availability,
  you need a monitoring system. This means having an extra service in your
  catalogue that continuously probes the availability of your microservices
  over time. Storing the results in a time-series database would let you run
  queries to establish availability over various periods. There are many
  tools that could do this for you; a good starting place would be the CNCF
  monitoring landscape.
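  As an illustration, assuming a Prometheus-style stack (one of the CNCF
  options) where each probed service exposes an up metric, availability over
  a window reduces to a one-line query (the job name is a placeholder):

      # Fraction of scrapes over the last 30 days where the service was up
      avg_over_time(up{job="my-service"}[30d])

      # Narrow the window to measure an upgrade, e.g. the last hour
      avg_over_time(up{job="my-service"}[1h])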
• Migrate Rancher host from server to server
  Continuous Integration and Delivery (CI, CD) • docker, aws, vpc, rancher • briley
  0 Votes • 2 Posts • 1 View

  Mystic: First of all, you need to set up HA mode. You will need:
  - an external MySQL DB (e.g. AWS RDS Aurora)
  - an external load balancer (e.g. AWS ELB)
  - 2 additional Rancher Server nodes (to form a quorum of 3)

  Once the external DB is ready, Rancher will ask you to back up and restore
  its database into it. After that you'll be asked to redeploy the Rancher
  Server with additional parameters for the external DB connection. Then
  you'll be able to run the two additional Rancher Server nodes (e.g. behind
  the AWS ELB). When the previous steps are finished, you can add more
  Rancher Agent nodes in AWS directly from Rancher's Hosts tab. After that
  you can simply switch off the OVH nodes; the payload will be transferred
  to AWS automagically. Make sure to back up and restore all databases and
  data-sensitive instances on the OVH hosts. More information, such as the
  HA requirements, can be found in the documentation.
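  A sketch of the redeploy step for a 1.6-era Rancher Server pointing at an
  external DB (hostname and credentials are placeholders):

      docker run -d --restart=unless-stopped -p 8080:8080 \
        rancher/server \
        --db-host mysql.example.com --db-port 3306 \
        --db-user cattle --db-pass changeme --db-name cattle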
• Will data contained within a Persistent Volume survive cluster deletion?
  Continuous Integration and Delivery (CI, CD) • kubernetes, rancher, persistent volumes • Anderson
  0 Votes • 2 Posts • 1 View

  It depends on what your persistentVolumeReclaimPolicy is. If you manually
  created/defined the PersistentVolume, the default action is to keep the
  data (this sounds like what you're doing). If you are using something like
  EKS on Amazon, your PVs are dynamically provisioned EBS volumes; in that
  case they will be deleted by default. A little more about reclamation:
  https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming
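  A minimal sketch of a manually defined PV with the Retain policy (name,
  size, and path are placeholders); Retain keeps the volume and its data
  even after whatever references it is deleted:

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: example-pv
      spec:
        capacity:
          storage: 10Gi
        accessModes:
        - ReadWriteOnce
        persistentVolumeReclaimPolicy: Retain   # default for manual PVs
        hostPath:
          path: /data/example-pv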
• Are there arguments against using "imagization" tools in the literature and if so, what are the main ones?
  Continuous Integration and Delivery (CI, CD) • docker, images, kubernetes, culture, rancher • Mystic
  0 Votes • 2 Posts • 1 View

  You might have better luck finding answers to your questions if you use the
  more standard term "containerization". This article discusses
  containerization as well as some pros and cons you might want to consider.
• Configure Eureka or Consul in Rancher
  Continuous Integration and Delivery (CI, CD) • rancher • Saumya
  0 Votes • 2 Posts • 1 View

  jeanid: The issue was related to permission capabilities. Thanks!
• How to change a Rancher-UI-installed Prometheus server config
  Continuous Integration and Delivery (CI, CD) • kubernetes, helm, rancher • derk
  0 Votes • 2 Posts • 1 View

  If you could let us know which Helm chart you used to install the
  Prometheus operator, that would help refine the answer. But in the case of
  kube-prometheus-stack, use the additionalScrapeConfigs section in the
  chart's values.yaml to describe jobs that are external to the Kubernetes
  system, and re-deploy. Or use ServiceMonitors, as described in this
  article.
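  A minimal sketch of the additionalScrapeConfigs approach in a
  kube-prometheus-stack values.yaml (job name and target address are
  placeholders):

      prometheus:
        prometheusSpec:
          additionalScrapeConfigs:
          - job_name: external-app      # hypothetical target outside k8s
            static_configs:
            - targets:
              - 192.0.2.10:9100

  Re-deploy with helm upgrade -f values.yaml for the change to take effect.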