Argo CD can't delete an app if it cannot generate manifests. You need to either:

- Reinstate/fix your repo.
- Delete the app using `--cascade=false` and then manually delete the resources.
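For example, a non-cascading delete for a hypothetical application named `my-app` would look like this (the app name is an assumption; this requires a working `argocd` login session):

```shell
# Delete only the Argo CD Application object; the deployed Kubernetes
# resources are left in place and must be cleaned up manually afterwards.
argocd app delete my-app --cascade=false
```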
See the Diffing documentation for reasons resources can be OutOfSync, and ways to configure Argo CD to ignore fields when differences are expected.
Argo CD provides health checks for several standard Kubernetes types. The `Ingress` and `StatefulSet` types have known issues which might cause the health check to return a `Progressing` state instead of `Healthy`:

- `Ingress` is considered healthy if the `status.loadBalancer.ingress` list is non-empty, with at least one value for `hostname` or `IP`. Some ingress controllers (Contour, Traefik) don't update the `status.loadBalancer.ingress` field, which causes the `Ingress` to be stuck in the `Progressing` state forever.
- `StatefulSet` is considered healthy if the value of the `status.updatedReplicas` field matches the `spec.replicas` field. Due to Kubernetes bug kubernetes/kubernetes#68573, `status.updatedReplicas` is not populated, so unless you run a Kubernetes version which includes the fix kubernetes/kubernetes#67570, the `StatefulSet` might stay in the `Progressing` state.
- Your `StatefulSet` or `DaemonSet` is using the `OnDelete` strategy instead of `RollingUpdate`. See #1881.

As a workaround, Argo CD allows providing a health check customization which overrides the default behavior.
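As an illustration, a minimal customization in the `argocd-cm` ConfigMap that always reports `Ingress` resources as healthy (a workaround sketch for controllers that never populate `status.loadBalancer.ingress`; the Lua body here is an assumption, not the full production-ready check):

```yaml
data:
  resource.customizations: |
    networking.k8s.io/Ingress:
      health.lua: |
        hs = {}
        hs.status = "Healthy"
        return hs
```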
For Argo CD v1.8 and earlier, the initial password is set to the name of the server pod, as per the getting started guide. For Argo CD v1.9 and later, the initial password is available from a secret named `argocd-initial-admin-password`.
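On v1.9 and later, you can read the initial password like so (this sketch assumes the default `argocd` namespace):

```shell
# Read and base64-decode the auto-generated admin password.
kubectl -n argocd get secret argocd-initial-admin-password \
  -o jsonpath="{.data.password}" | base64 -d
```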
To change the password, edit the `argocd-secret` secret and update the `admin.password` field with a new bcrypt hash. You can use a site like https://www.browserling.com/tools/bcrypt to generate a new hash. For example:
```shell
# bcrypt(password)=$2a$10$rRyBsGSHK6.uc8fntPwVIuLVHgsAhAX7TcdrqW/RADU0uh7CaChLa
kubectl -n argocd patch secret argocd-secret \
  -p '{"stringData": {
    "admin.password": "$2a$10$rRyBsGSHK6.uc8fntPwVIuLVHgsAhAX7TcdrqW/RADU0uh7CaChLa",
    "admin.passwordMtime": "'$(date +%FT%T%Z)'"
  }}'
```
Another option is to delete both the `admin.password` and `admin.passwordMtime` keys and restart `argocd-server`. This will generate a new password as per the getting started guide: either the name of the pod (Argo CD 1.8 and earlier) or a randomly generated password stored in a secret (Argo CD 1.9 and later).
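A sketch of that option, assuming the default `argocd` namespace and deployment name (setting a key to `null` in a merge patch removes it from the secret):

```shell
# Remove the password keys from the secret.
kubectl -n argocd patch secret argocd-secret \
  -p '{"data": {"admin.password": null, "admin.passwordMtime": null}}'
# Restart the server so a new password is generated.
kubectl -n argocd rollout restart deployment argocd-server
```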
Add `admin.enabled: "false"` to the `argocd-cm` ConfigMap (see user management).
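One way to apply that setting, assuming the default `argocd` namespace:

```shell
# Disable the built-in admin user by patching the argocd-cm ConfigMap.
kubectl -n argocd patch configmap argocd-cm \
  -p '{"data": {"admin.enabled": "false"}}'
```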
Argo CD might fail to generate Helm chart manifests if the chart has dependencies located in external repositories. To solve the problem, make sure that `requirements.yaml` uses only internally available Helm repositories. Even if the chart uses only dependencies from internal repos, Helm might decide to refresh the `stable` repo. As a workaround, override the `stable` repo URL in the `argocd-cm` config map:
```yaml
data:
  # v1.2 or earlier use `helm.repositories`
  helm.repositories: |
    - url: http://<internal-helm-repo-host>:8080
      name: stable
  # v1.3 or later use `repositories` with `type: helm`
  repositories: |
    - type: helm
      url: http://<internal-helm-repo-host>:8080
      name: stable
```
I've configured cluster secret but it does not show up in CLI/UI, how do I fix it?
Check if the cluster secret has the `argocd.argoproj.io/secret-type: cluster` label. If the secret has the label but the cluster is still not visible, it might be a permission issue. Try to list the clusters using the `admin` user (e.g. `argocd login --username admin && argocd cluster list`).
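To check and fix the label, something like the following can be used (`my-cluster-secret` is a hypothetical secret name; substitute your own):

```shell
# Inspect the labels on the cluster secret.
kubectl -n argocd get secret my-cluster-secret --show-labels
# Add the label if it is missing.
kubectl -n argocd label secret my-cluster-secret \
  argocd.argoproj.io/secret-type=cluster
```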
Use the following steps to reconstruct the configured cluster config and connect to your cluster manually using kubectl:

```shell
kubectl exec -it <argocd-pod-name> bash # open a shell in any argocd server pod
argocd-util kubeconfig https://<cluster-url> /tmp/config --namespace argocd # generate your cluster config
KUBECONFIG=/tmp/config kubectl get pods # test the connection manually
```

Now you can manually verify that the cluster is accessible from the Argo CD pod.
To terminate the sync, click "Synchronisation", then "Terminate":
In some cases, the tool you use may conflict with Argo CD by adding the `app.kubernetes.io/instance` label, e.g. when using the Kustomize common labels feature.

Argo CD automatically sets the `app.kubernetes.io/instance` label and uses it to determine which resources form the app. If the tool does this too, this causes confusion. You can change this label by setting the `application.instanceLabelKey` value in `argocd-cm`. We recommend that you use `argocd.argoproj.io/instance`.
!!! note
    When you make this change, your applications will become out of sync and will need re-syncing.
See #1482.
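For example, the `argocd-cm` entry might look like this (namespace and metadata shown for context):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  application.instanceLabelKey: argocd.argoproj.io/instance
```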
Kubernetes normalizes your resource limits when they are applied, and Argo CD then compares the version in your generated manifests to the normalized one in Kubernetes - they won't match. For example:

- `'1000m'` is normalized to `'1'`
- `'0.1'` is normalized to `'100m'`
- `'3072Mi'` is normalized to `'3Gi'`
- `3072` is normalized to `'3072'` (quotes added)

To fix this, use the diffing customizations settings.
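As one illustration, an Application-level diffing customization that ignores the first container's resources on a Deployment could look like this sketch (the JSON pointer path is an example; adjust it to the field that is being normalized in your manifests):

```yaml
spec:
  ignoreDifferences:
  - group: apps
    kind: Deployment
    jsonPointers:
    - /spec/template/spec/containers/0/resources
```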
Argo CD uses a JWT as the auth token. You are likely part of many groups and have gone over the 4KB limit which is set for cookies. You can get the list of groups by opening "developer tools -> network":

- Click log in.
- Find the call to `<argocd_instance>/auth/callback?code=<random_string>`.
- Decode the token at https://jwt.io/. That will provide the list of teams that you can remove yourself from.
See #2165.
Maybe you're behind a proxy that does not support HTTP/2? Try the `--grpc-web` flag:

```shell
argocd ... --grpc-web
```
You're not running your server with correct certs.

If you're not running in a production system (e.g. you're testing Argo CD out), try the `--insecure` flag:

```shell
argocd ... --insecure
```

!!! warning "Do not use `--insecure` in production"
Most likely you forgot to also set the `url` in `argocd-cm` to point to your Argo CD. See also the docs.
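For instance, the `argocd-cm` entry might look like this (the hostname is a placeholder; use your externally reachable Argo CD URL):

```yaml
data:
  url: https://argocd.example.com
```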