general
  • a

    ancient-army-24563

    03/18/2023, 7:28 AM
It would be appreciated if someone could help us with this.
  • b

    broad-bird-4347

    03/18/2023, 1:38 PM
Hi team, I would like to create a new custom cluster (RKE2) through Rancher. I am using my own CNI for this deployment, so I edited the YAML file with cni: none. Is there any way in RKE2 to specify the manifest YAML required for my CNI, like addons_include?
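    A possible route, sketched under the assumption of a Rancher v2.6+ provisioned RKE2 cluster: the provisioning cluster object has an additionalManifest field under rkeConfig, and RKE2 itself auto-applies any YAML placed in /var/lib/rancher/rke2/server/manifests on a server node. A hypothetical snippet (the cluster name is a placeholder; verify the field names against your Rancher version):
    apiVersion: provisioning.cattle.io/v1
    kind: Cluster
    metadata:
      name: my-custom-cluster      # hypothetical name
      namespace: fleet-default
    spec:
      rkeConfig:
        machineGlobalConfig:
          cni: none
        additionalManifest: |
          # your CNI's manifest YAML goes here (the role addons_include
          # played in RKE1)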
  • f

    fresh-memory-61679

    03/19/2023, 2:13 AM
    I use MAAS for bare-metal and inventory management. I am trying to provision half of my machines as Harvester nodes and the rest as Rancher RKE nodes for Kubernetes; the plan is to integrate the two together. I am able to install Rancher on the MAAS-provisioned Ubuntu OS, but I am having trouble installing Harvester in the MAAS environment, as it requires an ISO or USB option and I could not implement iPXE with MAAS without an image. Any suggestions?
  • p

    polite-piano-74233

    03/19/2023, 3:35 AM
    Aren't Harvester and MAAS technically competing products? As in, they both try to solve the same issue of bare-metal installs.
  • f

    fresh-memory-61679

    03/19/2023, 4:49 AM
    While they have some similarities, they are not the same. MAAS is focused on bare-metal OS installs, DHCP, PXE, IPMI... It has some VM management capability with KVM, but that is not its strength. Harvester's focus is more on VM management; in that respect it is closer to CloudStack or OpenStack, with a simpler interface and management. If Harvester handled a bit more of the bare-metal stuff, I would be happy to move over from MAAS.
  • l

    little-ram-17683

    03/19/2023, 8:22 AM
    Hi, I would like to deploy a dual-stack cluster using the Rancher UI. According to the docs: https://docs.rke2.io/install/network_options/#dual-stack-configuration I should use:
    cluster-cidr: "10.42.0.0/16,2001:cafe:42:0::/56"
    service-cidr: "10.43.0.0/16,2001:cafe:42:1::/112"
    But I am not sure where I should put it. Under the machineGlobalConfig section in the Rancher UI YAML file?
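    In case it helps others: for an RKE2 cluster provisioned by Rancher, these server flags generally go under spec.rkeConfig.machineGlobalConfig in the cluster YAML. A sketch (assuming Rancher v2.6+; verify the key names against your version):
    spec:
      rkeConfig:
        machineGlobalConfig:
          cluster-cidr: "10.42.0.0/16,2001:cafe:42:0::/56"
          service-cidr: "10.43.0.0/16,2001:cafe:42:1::/112"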
  • f

    fancy-cricket-731

    03/19/2023, 10:04 AM
    Hey everyone, maybe someone has an idea about this problem. We have a Rancher 2 instance that is really old: created with Rancher 2.0.x and updated all the way to the current release over time. However, we have some problems with this instance. When creating an RKE cluster (using the interface or Terraform, it does not matter), all the options that are set just disappear. When looking at the YAML in the local cluster, I can see that the cluster settings are pretty much all empty. I managed to write the configs manually by creating the YAML fully by hand, but this is not a good solution, as we normally use Terraform to set up our projects. Any idea where I can look to fix the issue?
  • q

    quaint-alarm-7893

    03/19/2023, 3:48 PM
    Does anyone know if you can add a Windows node to a Harvester-provider cluster? If so, any directions on how?
  • m

    mammoth-australia-28930

    03/20/2023, 2:24 PM
    Hi guys, can someone help me fix this? Rancher, v1.21.14
    Error: create: failed to create: the server responded with the status code 413
    but did not return more information (post secrets)
    helm.go:84: [debug] the server responded with the status code 413 but did not
    return more information (post secrets)
    create: failed to create
    helm.sh/helm/v3/pkg/storage/driver.(*Secrets).Create
        helm.sh/helm/v3/pkg/storage/driver/secrets.go:164
    helm.sh/helm/v3/pkg/storage.(*Storage).Create
        helm.sh/helm/v3/pkg/storage/storage.go:69
    helm.sh/helm/v3/pkg/action.(*Install).RunWithContext
        helm.sh/helm/v3/pkg/action/install.go:341
    main.runInstall
        helm.sh/helm/v3/cmd/helm/install.go:279
    main.newUpgradeCmd.func2
        helm.sh/helm/v3/cmd/helm/upgrade.go:123
    github.com/spf13/cobra.(*Command).execute
        github.com/spf13/cobra@v1.6.1/command.go:916
    github.com/spf13/cobra.(*Command).ExecuteC
        github.com/spf13/cobra@v1.6.1/command.go:1044
    github.com/spf13/cobra.(*Command).Execute
        github.com/spf13/cobra@v1.6.1/command.go:968
    main.main
        helm.sh/helm/v3/cmd/helm/helm.go:83
    runtime.main
        runtime/proc.go:250
    runtime.goexit
        runtime/asm_amd64.s:1571
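    For anyone hitting the same trace: a 413 on "post secrets" generally means something in front of the Kubernetes API rejected the Helm release Secret as too large. If this kubeconfig goes through Rancher's ingress (nginx-style ingresses default to a 1 MB body limit), one workaround to try, offered as an assumption to verify rather than a confirmed fix, is raising the body-size limit on that ingress:
    kubectl -n cattle-system annotate ingress rancher \
      nginx.ingress.kubernetes.io/proxy-body-size=0 --overwrite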
  • h

    hundreds-evening-84071

    03/20/2023, 2:34 PM
    Hello all, I have a kubectl question, not necessarily a Rancher question, but more in reference to Rancher. In Rancher UI Cluster Management we can click on the three vertical dots and there is an option to download YAML. Is there a way to do this (get the cluster YAML) from the command line via kubectl or another way?
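    For reference, one way to approximate that download from the CLI, assuming Rancher v2.6+ where provisioned clusters are stored as provisioning.cattle.io objects (the cluster name and namespace below are placeholders):
    kubectl get clusters.provisioning.cattle.io my-cluster \
      -n fleet-default -o yaml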
  • c

    crooked-cat-21365

    03/20/2023, 3:22 PM
    I have setup a new cluster using Rancher 2.7.1 and RKE2 v1.24.10+rke2r1. First thing Rancher shows me is a warning about deprecated security policies:
    Pod Security Policies are deprecated as of Kubernetes v1.21, and have been removed in Kubernetes v1.25. You have one or more PodSecurityPolicy resource(s) in this cluster.
    How come? Couldn't this have been avoided at install time (which was 10 minutes ago)?
  • a

    abundant-hair-58573

    03/20/2023, 5:32 PM
    We've installed rancher-monitoring in our 2.6.5 cluster, and now we're looking into Kubecost. I see Kubecost recommends using their bundled Prometheus/Grafana stack. Are there any conflicts or performance hits from running both Kubecost and rancher-monitoring?
  • q

    quiet-area-89381

    03/20/2023, 5:56 PM
    Could anybody give me a sense of the sizing of a GKE cluster to host Rancher Manager in HA mode, managing 10 small k8s clusters with about 10 human users of the UI/API, and running a Prometheus/Grafana stack for a total of about 500 containers/apps/computers and other sensors? My goal is to avoid spawning too big a cluster and to be as cheap as possible while still being HA/production grade.
  • r

    red-vegetable-45199

    03/21/2023, 6:28 AM
    When I run "rke up", I see following error:
    WARN[0010] Can't pull Docker image [<image in our private registry>] on host [<hostname>]:  Error response from daemon: Get <image in our private registry>: uknown: Authentication is required
    However, if I ssh to that <hostname>, and run following command as root:
    docker pull <image in our private registry>
    Everything is fine. Some insight?
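    If it is a credentials issue: rke pulling through the Docker daemon does not automatically reuse the root user's docker login; RKE1's cluster.yml has a private_registries section for that. A sketch with placeholder values:
    private_registries:
      - url: registry.example.com    # hypothetical registry URL
        user: ci-user                # hypothetical credentials
        password: "<password>"
        is_default: true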
  • v

    victorious-insurance-69234

    03/21/2023, 7:15 AM
    Hello, trying to just do kubectl get nodes on a cluster created by k3d. The command kubectl cluster-info hangs with no output and no clear indication as to what is wrong. I'll paste some diagnostics now. The command used to create the cluster was k3d cluster create k8s.
    $ k3d node list
    NAME               ROLE           CLUSTER   STATUS
    k3d-k8s-server-0   server         k8s       running
    k3d-k8s-serverlb   loadbalancer   k8s       running
    $ docker ps
    CONTAINER ID   IMAGE                            COMMAND                  CREATED         STATUS         PORTS                             NAMES
    8b1a899f1bcd   ghcr.io/k3d-io/k3d-proxy:5.4.9   "/bin/sh -c nginx-pr…"   2 minutes ago   Up 2 minutes   80/tcp, 0.0.0.0:39277->6443/tcp   k3d-k8s-serverlb
    0d2361470d59   rancher/k3s:v1.25.7-k3s1         "/bin/k3d-entrypoint…"   2 minutes ago   Up 2 minutes
    $ cat ~/.kube/config
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: <redacted>
        server: https://0.0.0.0:39277
      name: k3d-k8s
    contexts:
    - context:
        cluster: k3d-k8s
        user: admin@k3d-k8s
      name: k3d-k8s
    current-context: k3d-k8s
    kind: Config
    preferences: {}
    users:
    - name: admin@k3d-k8s
      user:
        client-certificate-data: <redacted>
        client-key-data: <redacted>
    $ cat /etc/os-release 
    NAME="Arch Linux"
    PRETTY_NAME="Arch Linux"
    ID=arch
    BUILD_ID=rolling
    ANSI_COLOR="38;2;23;147;209"
    HOME_URL="https://archlinux.org/"
    DOCUMENTATION_URL="https://wiki.archlinux.org/"
    SUPPORT_URL="https://bbs.archlinux.org/"
    BUG_REPORT_URL="https://bugs.archlinux.org/"
    PRIVACY_POLICY_URL="https://terms.archlinux.org/docs/privacy-policy/"
    LOGO=archlinux-logo
    $ sudo iptables -L
    Chain INPUT (policy ACCEPT)
    target     prot opt source               destination         
    
    Chain FORWARD (policy DROP)
    target     prot opt source               destination         
    DOCKER-USER  all  --  anywhere             anywhere            
    DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere            
    ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
    DOCKER     all  --  anywhere             anywhere            
    ACCEPT     all  --  anywhere             anywhere            
    ACCEPT     all  --  anywhere             anywhere            
    ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
    DOCKER     all  --  anywhere             anywhere            
    ACCEPT     all  --  anywhere             anywhere            
    ACCEPT     all  --  anywhere             anywhere            
    ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
    DOCKER     all  --  anywhere             anywhere            
    ACCEPT     all  --  anywhere             anywhere            
    ACCEPT     all  --  anywhere             anywhere            
    
    Chain OUTPUT (policy ACCEPT)
    target     prot opt source               destination         
    
    Chain DOCKER (3 references)
    target     prot opt source               destination         
    ACCEPT     tcp  --  anywhere             172.21.0.3           tcp dpt:sun-sr-https
    
    Chain DOCKER-ISOLATION-STAGE-1 (1 references)
    target     prot opt source               destination         
    DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere            
    DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere            
    DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere            
    RETURN     all  --  anywhere             anywhere            
    
    Chain DOCKER-ISOLATION-STAGE-2 (3 references)
    target     prot opt source               destination         
    DROP       all  --  anywhere             anywhere            
    DROP       all  --  anywhere             anywhere            
    DROP       all  --  anywhere             anywhere            
    RETURN     all  --  anywhere             anywhere            
    
    Chain DOCKER-USER (1 references)
    target     prot opt source               destination         
    RETURN     all  --  anywhere             anywhere
    $ docker inspect ghcr.io/k3d-io/k3d-proxy:5.4.9
    [
        {
            "Id": "sha256:55aa42e8234edfe155d3fe46003f03988a3ec0171bccc3ae2854cad1e2d79c07",
            "RepoTags": [
                "<http://ghcr.io/k3d-io/k3d-proxy:5.4.9|ghcr.io/k3d-io/k3d-proxy:5.4.9>"
            ],
            "RepoDigests": [
                "<http://ghcr.io/k3d-io/k3d-proxy@sha256:538f5f0223ef455031ad311565b0ab6bee28961d9a4eac249f6aa930c7640bf5|ghcr.io/k3d-io/k3d-proxy@sha256:538f5f0223ef455031ad311565b0ab6bee28961d9a4eac249f6aa930c7640bf5>"
            ],
            "Parent": "",
            "Comment": "buildkit.dockerfile.v0",
            "Created": "2023-03-17T05:35:47.663228988Z",
            "Container": "",
            "ContainerConfig": {
                "Hostname": "",
                "Domainname": "",
                "User": "",
                "AttachStdin": false,
                "AttachStdout": false,
                "AttachStderr": false,
                "Tty": false,
                "OpenStdin": false,
                "StdinOnce": false,
                "Env": null,
                "Cmd": null,
                "Image": "",
                "Volumes": null,
                "WorkingDir": "",
                "Entrypoint": null,
                "OnBuild": null,
                "Labels": null
            },
            "DockerVersion": "",
            "Author": "",
            "Config": {
                "Hostname": "",
                "Domainname": "",
                "User": "",
                "AttachStdin": false,
                "AttachStdout": false,
                "AttachStderr": false,
                "ExposedPorts": {
                    "80/tcp": {}
                },
                "Tty": false,
                "OpenStdin": false,
                "StdinOnce": false,
                "Env": [
                    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                    "NGINX_VERSION=1.19.10",
                    "NJS_VERSION=0.5.3",
                    "PKG_RELEASE=1",
                    "OS=",
                    "ARCH="
                ],
                "Cmd": null,
                "Image": "",
                "Volumes": null,
                "WorkingDir": "",
                "Entrypoint": [
                    "/bin/sh",
                    "-c",
                    "nginx-proxy"
                ],
                "OnBuild": null,
                "Labels": {
                    "maintainer": "NGINX Docker Maintainers <docker-maint@nginx.com>",
                    "org.opencontainers.image.created": "2023-03-17T05:35:06.435Z",
                    "org.opencontainers.image.description": "Little helper to run CNCF's k3s in Docker",
                    "org.opencontainers.image.licenses": "MIT",
                    "org.opencontainers.image.revision": "18967282633144120abcf75a3dacc110543cc00c",
                    "org.opencontainers.image.source": "<https://github.com/k3d-io/k3d>",
                    "org.opencontainers.image.title": "k3d",
                    "org.opencontainers.image.url": "<https://github.com/k3d-io/k3d>",
                    "org.opencontainers.image.version": "5"
                },
                "StopSignal": "SIGQUIT"
            },
            "Architecture": "amd64",
            "Os": "linux",
            "Size": 42393610,
            "VirtualSize": 42393610,
            "GraphDriver": {
                "Data": {
                    "LowerDir": "/var/lib/docker/overlay2/5403b1da8c213266850221b39a5b7694787a6e2fb4e09a8060429d0cefeca1af/diff:/var/lib/docker/overlay2/c18d416723b59a3a56abed94f5472706204ac9bfb5ef095f7fda89dc1288027c/diff:/var/lib/docker/overlay2/debf956e07b8c023dfea14d03f2a37e2c99862f722cc2482d4084873e1249a80/diff:/var/lib/docker/overlay2/048f1a158f4fc8fab2863d6c6360063410e7a1e5a880fc0821d263df2c72a170/diff:/var/lib/docker/overlay2/abf0683460b29ffea0fbfaec50417fab7d0c21c2733ca54398311182f328717c/diff:/var/lib/docker/overlay2/1ab7bf768eb4a5e1d253a1c2d4c445d8fb2d481763285a22b15bd667650fa405/diff:/var/lib/docker/overlay2/9b6fc7c3365d31fd2058d7602bcb21874b2015f31c017ae91c1a993398dea62f/diff:/var/lib/docker/overlay2/f94023589ffa7c12cceb331dcfb354b7ea102ae6a8bd1a77ad9915be60bda038/diff:/var/lib/docker/overlay2/895db9d4cbcc8a2bbfd2d6b9642ae6b4579817c12a43a338f9d3320e090aa4a8/diff:/var/lib/docker/overlay2/cbc3025ed5cc787342dea6a99b45544a846d7e5991a04039c8eadec79dc3139e/diff",
                    "MergedDir": "/var/lib/docker/overlay2/0e2bdc9e22a9e51d7e1cf50c8db83c49576294055a943c76acd6c777e7d5810a/merged",
                    "UpperDir": "/var/lib/docker/overlay2/0e2bdc9e22a9e51d7e1cf50c8db83c49576294055a943c76acd6c777e7d5810a/diff",
                    "WorkDir": "/var/lib/docker/overlay2/0e2bdc9e22a9e51d7e1cf50c8db83c49576294055a943c76acd6c777e7d5810a/work"
                },
                "Name": "overlay2"
            },
            "RootFS": {
                "Type": "layers",
                "Layers": [
                    "sha256:b2d5eeeaba3a22b9b8aa97261957974a6bd65274ebd43e1d81d0a7b8b752b116",
                    "sha256:ed3fe3f2b59f88def4ef31f4020793e2dc613571bebd4e8ee1241185fd2e6945",
                    "sha256:4531e200ac8d238b1f09c9b6f6aa2283532ec011de255b28cce9cf4aabf758f0",
                    "sha256:3c369314e0038454cd2849c5efac02812cea5a3b575f4e74cf6a9d1790360b24",
                    "sha256:3480549413ea041ba4a469d540b4bd3fe579029098f0a02038c0d1f4f0c25bdf",
                    "sha256:4689e8eca613fc415093e337de5d9194fbe1f767cc474f07b73ff08d2efcdac0",
                    "sha256:431f7c4eab0e9d18fd5c897ea975792eb1855fea6b9d7e88ca279c1babefdb9c",
                    "sha256:676e91d8739fb0ba44bfcb78ffda8029c6de70fd9d7502e606b33bee08cdce6a",
                    "sha256:4665183fb7306ec6497057862674343e5626bd47c83758dd93169de529b24394",
                    "sha256:263972c7e626a3bc7e9a3c9b10b3ff33cc52f44b84894e98cedff3be76e36824",
                    "sha256:5b390e0d0fdd660e6edd31b9f6e3b912b42da4d2e3dafe5da90a35fb48f23ce3"
                ]
            },
            "Metadata": {
                "LastTagTime": "0001-01-01T00:00:00Z"
            }
        }
    ]
  • v

    victorious-insurance-69234

    03/21/2023, 7:19 AM
    $ docker inspect ghcr.io/k3d-io/k3d-tools:5.4.9
    [
        {
            "Id": "sha256:a8c27040a7215ed60e81a628ee859746b9ba56a49d6bd034821a32a909033657",
            "RepoTags": [
                "<http://ghcr.io/k3d-io/k3d-tools:5.4.9|ghcr.io/k3d-io/k3d-tools:5.4.9>"
            ],
            "RepoDigests": [
                "<http://ghcr.io/k3d-io/k3d-tools@sha256:0814db158c5027e1e19b17423ea9473d51cb9d044707aaea926d8c4f61d7843a|ghcr.io/k3d-io/k3d-tools@sha256:0814db158c5027e1e19b17423ea9473d51cb9d044707aaea926d8c4f61d7843a>"
            ],
            "Parent": "",
            "Comment": "buildkit.dockerfile.v0",
            "Created": "2023-03-17T05:38:20.016590142Z",
            "Container": "",
            "ContainerConfig": {
                "Hostname": "",
                "Domainname": "",
                "User": "",
                "AttachStdin": false,
                "AttachStdout": false,
                "AttachStderr": false,
                "Tty": false,
                "OpenStdin": false,
                "StdinOnce": false,
                "Env": null,
                "Cmd": null,
                "Image": "",
                "Volumes": null,
                "WorkingDir": "",
                "Entrypoint": null,
                "OnBuild": null,
                "Labels": null
            },
            "DockerVersion": "",
            "Author": "",
            "Config": {
                "Hostname": "",
                "Domainname": "",
                "User": "",
                "AttachStdin": false,
                "AttachStdout": false,
                "AttachStderr": false,
                "Tty": false,
                "OpenStdin": false,
                "StdinOnce": false,
                "Env": [
                    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
                ],
                "Cmd": null,
                "Image": "",
                "Volumes": null,
                "WorkingDir": "/app",
                "Entrypoint": [
                    "/app/k3d-tools"
                ],
                "OnBuild": null,
                "Labels": {
                    "org.opencontainers.image.created": "2023-03-17T05:35:06.931Z",
                    "org.opencontainers.image.description": "Little helper to run CNCF's k3s in Docker",
                    "org.opencontainers.image.licenses": "MIT",
                    "org.opencontainers.image.revision": "18967282633144120abcf75a3dacc110543cc00c",
                    "org.opencontainers.image.source": "<https://github.com/k3d-io/k3d>",
                    "org.opencontainers.image.title": "k3d",
                    "org.opencontainers.image.url": "<https://github.com/k3d-io/k3d>",
                    "org.opencontainers.image.version": "5"
                }
            },
            "Architecture": "amd64",
            "Os": "linux",
            "Size": 19671204,
            "VirtualSize": 19671204,
            "GraphDriver": {
                "Data": {
                    "LowerDir": "/var/lib/docker/overlay2/669645796ca9daf5a8ed3d4682ccbed3da34c022f3467aa61fad614a5079f1c4/diff:/var/lib/docker/overlay2/e3fde4f6583f1447ee0c2a5676b3e7b2402ddc69cdadd696e7958c96fc190a2c/diff:/var/lib/docker/overlay2/e12e7552de51115a0a26ecaf3a21a0b5bfad4a25734de603e304273cb140e027/diff",
                    "MergedDir": "/var/lib/docker/overlay2/e92c280f35dfc0bcf78d06b96bbebe0fb1d4c85dbe6f2bfacb8b5096d72913ab/merged",
                    "UpperDir": "/var/lib/docker/overlay2/e92c280f35dfc0bcf78d06b96bbebe0fb1d4c85dbe6f2bfacb8b5096d72913ab/diff",
                    "WorkDir": "/var/lib/docker/overlay2/e92c280f35dfc0bcf78d06b96bbebe0fb1d4c85dbe6f2bfacb8b5096d72913ab/work"
                },
                "Name": "overlay2"
            },
            "RootFS": {
                "Type": "layers",
                "Layers": [
                    "sha256:7cd52847ad775a5ddc4b58326cf884beee34544296402c6292ed76474c686d39",
                    "sha256:d71d7baaafe0115e02911d2daf8b1ae87ee99965b01ed3547854ae3edce0d479",
                    "sha256:20066afd5f4d9b611fcf22207abd70940a838a7f2b73436934daab6384c458e8",
                    "sha256:9d10632f2a10bf03aac076d9a95bef1cc3d1d5896279f6091e611799180bb8b5"
                ]
            },
            "Metadata": {
                "LastTagTime": "0001-01-01T00:00:00Z"
            }
        }
    ]
  • v

    victorious-insurance-69234

    03/21/2023, 8:23 AM
    Solved, please disregard. I can delete it if it's too much clutter.
  • b

    boundless-wolf-10738

    03/21/2023, 10:29 AM
    Hi everyone, I'm raising the question again. We have deployed Rancher monitoring on our clusters, set up a dedicated ingress for Grafana, and configured Azure AD authentication. We have disabled anonymous login, so the Rancher integration is obviously no longer working. I'm trying to find a way to keep the Rancher integration, as it is really convenient. Has anyone run into the same issue and is able to help? Many thanks in advance.
  • b

    bored-analyst-33695

    03/21/2023, 10:48 AM
    Hey guys, I’m having trouble starting Rancher Desktop. I have done:
    rdctl factory-reset
    I removed every trace (that I could find) of RD from my system and did a fresh install, but I'm still getting the same issue when starting: _“Fixing binfmt_misc qemu”_. What is causing this and how can I resolve it? It just keeps going....
  • l

    limited-jelly-33018

    03/21/2023, 12:37 PM
    Hi team, I'm using Rancher Desktop on Windows. While running a docker build command in my cmd I'm getting the following error. Could you please let me know how to solve this issue?
    Err:1 https://deb.debian.org/debian bullseye InRelease
      Certificate verification failed: The certificate is NOT trusted. The certificate issuer is unknown. Could not handshake: Error in the certificate verification. [IP: 146.75.30.132 443]
    Err:2 https://deb.debian.org/debian-security bullseye-security InRelease
      Certificate verification failed: The certificate is NOT trusted. The certificate issuer is unknown. Could not handshake: Error in the certificate verification. [IP: 146.75.30.132 443]
    Err:3 https://deb.debian.org/debian bullseye-updates InRelease
      Certificate verification failed: The certificate is NOT trusted. The certificate issuer is unknown. Could not handshake: Error in the certificate verification. [IP: 146.75.30.132 443]
    Reading package lists...
    W: Failed to fetch https://deb.debian.org/debian/dists/bullseye/InRelease  Certificate verification failed: The certificate is NOT trusted. The certificate issuer is unknown. Could not handshake: Error in the certificate verification. [IP: 146.75.30.132 443]
    W: Failed to fetch https://deb.debian.org/debian-security/dists/bullseye-security/InRelease  Certificate verification failed: The certificate is NOT trusted. The certificate issuer is unknown. Could not handshake: Error in the certificate verification. [IP: 146.75.30.132 443]
    W: Failed to fetch https://deb.debian.org/debian/dists/bullseye-updates/InRelease  Certificate verification failed: The certificate is NOT trusted. The certificate issuer is unknown. Could not handshake: Error in the certificate verification. [IP: 146.75.30.132 443]
    W: Some index files failed to download. They have been ignored, or old ones used instead.
    + ln -s /lib /lib64
    + apt-get install -y tini
    Reading package lists...
    Building dependency tree...
    Reading state information...
    The following NEW packages will be installed:
      tini
    0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
    Need to get 255 kB of archives.
    After this operation, 776 kB of additional disk space will be used.
    Err:1 https://deb.debian.org/debian bullseye/main amd64 tini amd64 0.19.0-1
      Certificate verification failed: The certificate is NOT trusted. The certificate issuer is unknown. Could not handshake: Error in the certificate verification. [IP: 146.75.30.132 443]
    E: Failed to fetch https://deb.debian.org/debian/pool/main/t/tini/tini_0.19.0-1_amd64.deb  Certificate verification failed: The certificate is NOT trusted. The certificate issuer is unknown. Could not handshake: Error in the certificate verification. [IP: 146.75.30.132 443]
    E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
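    That failure pattern (unknown issuer for deb.debian.org) usually points at a TLS-intercepting corporate proxy rather than Rancher Desktop itself. A hedged sketch of one workaround, assuming you can export the proxy's root CA as corporate-root-ca.crt (a hypothetical file) and the base image already ships ca-certificates:
    FROM debian:bullseye
    # trust the intercepting proxy's root CA before using the https mirrors
    COPY corporate-root-ca.crt /usr/local/share/ca-certificates/
    RUN update-ca-certificates \
     && apt-get update \
     && apt-get install -y tini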
  • s

    swift-library-66826

    03/21/2023, 12:59 PM
    Hello, in the release notes for Rancher version 2.7 I see "KDM" - what does this mean?
  • w

    worried-electrician-89379

    03/21/2023, 1:47 PM
    Hello here, I have a question that may sound basic but for which I can't find the answer: where does Rancher persist information about the calls made to the Rancher API when the Rancher server is installed on a pre-existing Kubernetes cluster (e.g. via the official Helm chart documented at https://ranchermanager.docs.rancher.com/v2.5/reference-guides/installation-references/helm-chart-options)? There are things which are backed by a Kubernetes custom resource, so for those the answer is obvious... but what about Rancher API resources that are not backed by a k8s CRD?
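    One way to start poking at this: most Rancher state in a 2.x install lives in the local cluster as custom resources under the *.cattle.io API groups, so listing them shows how much is CRD-backed (plain kubectl against the local cluster, no assumptions beyond that):
    kubectl get crds | grep cattle.io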
  • p

    polite-piano-74233

    03/21/2023, 2:04 PM
    If I have a cluster.yaml (an "add custom cluster" YAML file), how can I apply it via the CLI in/to Rancher?
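    If the file is a provisioning.cattle.io/v1 Cluster object (what the UI's "Edit as YAML" shows for a custom cluster), a plain kubectl apply against the Rancher management ("local") cluster should work; sketched here assuming the usual fleet-default namespace:
    kubectl apply -f cluster.yaml -n fleet-default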
  • p

    prehistoric-solstice-99854

    03/21/2023, 8:03 PM
    I have a question about Rancher 1.6 and the Elasticsearch 2 catalog install. It might be too old for anyone to help, but I thought I'd ask here since I haven't found any help searching online.
  • p

    polite-piano-74233

    03/21/2023, 8:39 PM
    When I upgrade my Rancher version, does that also upgrade the hyperkube version?
  • w

    wooden-spoon-95626

    03/22/2023, 2:00 AM
    Hi team, I would like to allocate more CPU and memory to the VM run by Rancher Desktop. How can I do that? Thanks!
  • b

    bored-farmer-36655

    03/22/2023, 2:12 AM
    @wooden-spoon-95626 in the Preferences (you should ask questions in #rancher-desktop)
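    For a scriptable alternative to the Preferences dialog, recent Rancher Desktop versions also expose these settings via rdctl; the flag names below are from memory, so confirm them with rdctl set --help:
    rdctl set --virtual-machine.memory-in-gb 8 --virtual-machine.number-cpus 4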
  • c

    clever-butcher-21731

    03/22/2023, 6:29 AM
    Hello, a cluster on v1.21.5+k3s2 is installed; after restarting the k3s service, the default kernel parameter (net.netfilter.nf_conntrack_max) gets applied again. Ubuntu 18.04.5 LTS:
    root@drm-set1-master01:~# sysctl -p
    net.core.somaxconn = 65535
    net.ipv4.ip_local_port_range = 1024 65535
    net.nf_conntrack_max = 4194304
    net.netfilter.nf_conntrack_max = 4194304
    fs.file-max = 2097152
    net.ipv4.tcp_syncookies = 1
    net.ipv4.tcp_max_syn_backlog = 65535
    net.ipv4.ip_forward = 1
    net.ipv4.ip_local_reserved_ports = 30000-32767
    net.bridge.bridge-nf-call-iptables = 1
    net.bridge.bridge-nf-call-arptables = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    root@drm-set1-master01:~# systemctl restart k3s
    root@drm-set1-master01:~# sysctl -a | grep net.netfilter.nf_conntrack_max
    sysctl: reading key "net.ipv6.conf.all.stable_secret"
    sysctl: reading key "net.ipv6.conf.cni0.stable_secret"
    sysctl: reading key "net.ipv6.conf.default.stable_secret"
    sysctl: reading key "net.ipv6.conf.ens160.stable_secret"
    sysctl: reading key "net.ipv6.conf.flannel/1.stable_secret"
    sysctl: reading key "net.ipv6.conf.kube-ipvs0.stable_secret"
    sysctl: reading key "net.ipv6.conf.lo.stable_secret"
    sysctl: reading key "net.ipv6.conf.veth000ece3e.stable_secret"
    sysctl: reading key "net.ipv6.conf.veth1412c3ee.stable_secret"
    sysctl: reading key "net.ipv6.conf.veth3d3b54df.stable_secret"
    net.netfilter.nf_conntrack_max = 131072
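    If this is kube-proxy recalculating conntrack sizing on startup (its conntrack-max-per-core default would yield exactly 131072 on a 4-core node), one commonly cited workaround, offered as an assumption to verify for your k3s version, is telling kube-proxy to leave the sysctl alone via /etc/rancher/k3s/config.yaml:
    kube-proxy-arg:
      - "conntrack-max-per-core=0"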
  • r

    rapid-scientist-25800

    03/22/2023, 9:41 AM
    Hello! I am starting to look into setting up Rancher Management in HA mode. What is the best way to set up the management cluster if starting from scratch? RKE2, as it's the newest, or something else?
  • m

    microscopic-holiday-21640

    03/22/2023, 11:29 AM
    I've just installed Rancher Desktop on my PC (I use WSL with Linux) and I get this alert because I already have some cluster config in .kube/config. How can I solve it?
r

rapid-scientist-25800

03/22/2023, 11:36 AM
Delete the file it's complaining about and let Rancher create it for you.
m

microscopic-holiday-21640

03/22/2023, 11:56 AM
It works, thanks mate!