
neaten svc yaml could not be applied in a different svcCIDR environment #86

Closed
JasperDiShu opened this issue Nov 25, 2022 · 3 comments

@JasperDiShu

Description:
When I use kubectl-neat 2.0.3 to clean up YAML files, I found that after neatening the YAML of a Service, the output still contains the clusterIP and clusterIPs fields. I generated the YAML in a Kubernetes environment whose service CIDR is 172.21.0.0/16, but when I apply these Service YAML files in a new environment whose service CIDR is 192.168.0.0/16, I get the error 'failed to allocated ip:172.21.132.103 with error:provided IP is not in the valid range. The range of valid IPs is 192.168.0.0/16'.

What I expected to happen:
I think the clusterIP and clusterIPs fields are not mandatory, so perhaps they could be removed by kubectl-neat. Thanks for taking the time to look at this issue.

Some additional output I can provide:
Original svc yaml content:

apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: minimum
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2022-09-07T02:19:41Z"
  labels:
    app: im
    app.kubernetes.io/managed-by: Helm
    grp: xxxx
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:meta.helm.sh/release-name: {}
          f:meta.helm.sh/release-namespace: {}
        f:labels:
          .: {}
          f:app: {}
          f:app.kubernetes.io/managed-by: {}
          f:grp: {}
      f:spec:
        f:ports:
          .: {}
          k:{"port":xxx,"protocol":"TCP"}:
            .: {}
            f:name: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
          k:{"port":xxx,"protocol":"TCP"}:
            .: {}
            f:name: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
        f:selector:
          .: {}
          f:app: {}
        f:sessionAffinity: {}
        f:type: {}
    manager: Go-http-client
    operation: Update
    time: "2022-09-07T02:19:41Z"
  name: im
  namespace: default
  resourceVersion: "4366696"
  uid: 8cf3fb18-6703-4e98-91f9-379f1abb0b50
spec:
  clusterIP: 172.21.132.103
  clusterIPs:
  - 172.21.132.103
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: xx
    port: xxxx
    ...............

Neat svc yaml content:

apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: minimum
    meta.helm.sh/release-namespace: default
  labels:
    app: im
    app.kubernetes.io/managed-by: Helm
    grp: xxx
  name: im
  namespace: default
spec:
  clusterIP: 172.21.132.103
  clusterIPs:
  - 172.21.132.103
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: xxx
    port: xxxx

The error when I applied the neatened YAML in a new environment where the service CIDR changed:

The Service "xxx" is invalid: spec.clusterIPs: Invalid value: []string{"172.21.132.103"}: failed to allocated ip:172.21.132.103 with error:provided IP is not in the valid range. The range of valid IPs is 192.168.0.0/16

Environment:

OS:
CentOS Linux release 7.9.2009 (Core)

Kernel:
Linux iZuf638qwylkt54jqjznyuZ 3.10.0-1160.66.1.el7.x86_64 #1 SMP Wed May 18 16:02:34 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

@mattjoubert

ditto ..

perl -i -ne 'if (/^ *clusterIP:/) { <>; <>; next; } print' filename.yaml

got rid of it for me.
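(For reference, that one-liner works because in the neatened output the clusterIP: line is immediately followed by clusterIPs: and a single list entry: each <> reads and discards one of those two lines, and next skips printing the matched line itself. A Service with more than one entry under clusterIPs would need the pattern adjusted.)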

@isaacnboyd commented Dec 24, 2023

I'll take a shot at this

EDIT: actually, there was already a feature request and PR for this, with some helpful discussion:
#44 (comment)

@itaysk (Owner) commented Jul 12, 2024

duplicate of #44

@itaysk closed this as completed Jul 12, 2024