Creating a Simple Policy
Kyverno has two kinds of policy resources: ClusterPolicy, which applies to cluster-wide resources, and Policy, which applies to resources in a single namespace. To get familiar with Kyverno policies, we'll start our lab with a label requirement on Deployments.
Below is a sample ClusterPolicy which will block any Deployment whose pod template doesn't have the label CostCenter:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-labels
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-team
      match:
        any:
        - resources:
            kinds:
            - Deployment
      validate:
        allowExistingViolations: false
        message: "Label 'CostCenter' is required on the Deployment pod template"
        pattern:
          spec:
            template:
              metadata:
                labels:
                  CostCenter: "?*"
- spec.validationFailureAction tells Kyverno whether the resource being validated should be allowed but reported (Audit) or blocked (Enforce). The default is Audit, but in our example it is set to Enforce
- The rules section contains one or more rules to be evaluated
- The match statement sets the scope of what will be checked. In this case, it's any Deployment resource
- The validate statement checks the resource against the defined pattern. If the pattern matches the requested resource, it's allowed; if not, it's blocked
- allowExistingViolations: false ensures that updates to already-violating Deployments are also blocked. By default, Kyverno allows updates to pre-existing non-compliant resources to avoid disrupting workloads that existed before the policy was applied; setting this to false closes that gap and enforces the policy strictly on all admission requests
- The message is what gets displayed to a user if this rule fails validation
- The pattern object defines what will be checked in the resource. In this case, it's looking for spec.template.metadata.labels with a CostCenter key (the pod template labels inside the Deployment spec), where "?*" requires any non-empty value
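To make the wildcard semantics concrete, here is a minimal Python sketch (not Kyverno's actual implementation) of how the validate pattern above behaves: `?` matches exactly one character and `*` matches any run of characters, so `"?*"` means "any non-empty value". The function name and use of `fnmatch` are illustrative assumptions.

```python
# Sketch of the require-labels check: the label must exist on the pod
# template and its value must match the "?*" wildcard (non-empty string).
# Not Kyverno code; fnmatch is used here because it shares ?/* semantics.
from fnmatch import fnmatchcase

def check_required_label(deployment: dict, key: str = "CostCenter",
                         pattern: str = "?*") -> bool:
    labels = (deployment.get("spec", {})
                        .get("template", {})
                        .get("metadata", {})
                        .get("labels", {}))
    value = labels.get(key)
    return value is not None and fnmatchcase(value, pattern)

compliant = {"spec": {"template": {"metadata": {"labels": {"CostCenter": "IT"}}}}}
missing = {"spec": {"template": {"metadata": {"labels": {"app": "ui"}}}}}

print(check_required_label(compliant))  # True
print(check_required_label(missing))   # False
```

Note that an empty string also fails the check, since `"?*"` requires at least one character.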
Create the policy with kubectl apply. The output confirms it was created:
clusterpolicy.kyverno.io/require-labels created
Next, take a look at the ui Deployment and notice its pod template labels:
{
  "app.kubernetes.io/component": "service",
  "app.kubernetes.io/created-by": "eks-workshop",
  "app.kubernetes.io/instance": "ui",
  "app.kubernetes.io/name": "ui"
}
The pod template is missing the required CostCenter label. Now force a rollout of the ui Deployment with kubectl rollout restart:
error: failed to patch: admission webhook "validate.kyverno.svc-fail" denied the request:

resource Deployment/ui/ui was blocked due to the following policies

require-labels:
  check-team: 'validation error: Label ''CostCenter'' is required on the Deployment
    pod template. rule check-team failed at path /spec/template/metadata/labels/CostCenter/'
The rollout failed with the admission webhook denying the request due to the require-labels Kyverno Policy.
Now add the required label CostCenter to the ui Deployment, using the Kustomization patch below:
- Kustomize Patch
- Deployment/ui
- Diff
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ui
spec:
  template:
    metadata:
      labels:
        CostCenter: IT
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/created-by: eks-workshop
    app.kubernetes.io/type: app
  name: ui
  namespace: ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/component: service
      app.kubernetes.io/instance: ui
      app.kubernetes.io/name: ui
  template:
    metadata:
      annotations:
        prometheus.io/path: /actuator/prometheus
        prometheus.io/port: "8080"
        prometheus.io/scrape: "true"
      labels:
        CostCenter: IT
        app.kubernetes.io/component: service
        app.kubernetes.io/created-by: eks-workshop
        app.kubernetes.io/instance: ui
        app.kubernetes.io/name: ui
    spec:
      containers:
        - env:
            - name: JAVA_OPTS
              value: -XX:MaxRAMPercentage=75.0 -Djava.security.egd=file:/dev/urandom
            - name: METADATA_KUBERNETES_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: METADATA_KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: METADATA_KUBERNETES_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          envFrom:
            - configMapRef:
                name: ui
          image: public.ecr.aws/aws-containers/retail-store-sample-ui:1.2.1
          imagePullPolicy: IfNotPresent
          livenessProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 8080
            initialDelaySeconds: 45
            periodSeconds: 20
          name: ui
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
          resources:
            limits:
              memory: 1.5Gi
            requests:
              cpu: 250m
              memory: 1.5Gi
          securityContext:
            capabilities:
              add:
                - NET_BIND_SERVICE
              drop:
                - ALL
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1000
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
      securityContext:
        fsGroup: 1000
      serviceAccountName: ui
      volumes:
        - emptyDir:
            medium: Memory
          name: tmp-volume
        prometheus.io/path: /actuator/prometheus
        prometheus.io/port: "8080"
        prometheus.io/scrape: "true"
      labels:
+       CostCenter: IT
        app.kubernetes.io/component: service
        app.kubernetes.io/created-by: eks-workshop
        app.kubernetes.io/instance: ui
        app.kubernetes.io/name: ui
namespace/ui unchanged
serviceaccount/ui unchanged
configmap/ui unchanged
service/ui unchanged
deployment.apps/ui configured
deployment "ui" successfully rolled out
{
  "CostCenter": "IT",
  "app.kubernetes.io/component": "service",
  "app.kubernetes.io/created-by": "eks-workshop",
  "app.kubernetes.io/instance": "ui",
  "app.kubernetes.io/name": "ui"
}
The policy was satisfied and the rollout succeeded.
Mutating Rules
In the examples above, you saw how validation policies behave according to validationFailureAction. Kyverno can also define mutation rules in a policy, which modify API requests so that resources satisfy the specified requirements. Mutation occurs before validation, so validation rules will not contradict changes made by the mutation section.
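Before looking at a concrete policy, that ordering guarantee can be sketched in a few lines of Python (illustrative only; the function names are made up and this is not Kyverno's code): the admission pipeline applies mutation first, so validation always sees the already-patched object.

```python
# Sketch of admission ordering: mutate runs before validate, so a
# Deployment created without CostCenter is patched first and then
# passes the validation rule. Helper names are illustrative.
def mutate(obj):
    # add-labels-style rule: inject CostCenter: IT into pod template labels
    labels = obj["spec"]["template"]["metadata"].setdefault("labels", {})
    labels.setdefault("CostCenter", "IT")
    return obj

def validate(obj):
    # require-labels-style rule: CostCenter must be a non-empty string
    labels = obj["spec"]["template"]["metadata"].get("labels", {})
    return bool(labels.get("CostCenter"))

request = {"spec": {"template": {"metadata": {}}}}
print(validate(mutate(request)))  # True: mutation ran first
```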
Below is a sample Policy with a mutation rule defined:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-labels
spec:
  rules:
    - name: add-labels
      match:
        any:
        - resources:
            kinds:
            - Deployment
      mutate:
        patchStrategicMerge:
          spec:
            template:
              metadata:
                labels:
                  CostCenter: IT
- The match statement targets all Deployment resources cluster-wide
- The mutate statement modifies resources during admission (whereas validate allows or blocks them). patchStrategicMerge automatically adds CostCenter: IT to the pod template labels of every Deployment
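The merge semantics can be sketched in Python (a simplified illustration, not Kyverno's code): maps are merged key by key, patch values win on conflict, and keys absent from the patch are left untouched. Real strategic merge patches also handle lists via merge keys, which this sketch omits.

```python
# Simplified sketch of patchStrategicMerge for map-valued fields:
# recursively merge the patch into the original, leaving other keys intact.
def strategic_merge(original, patch):
    if isinstance(original, dict) and isinstance(patch, dict):
        merged = dict(original)  # shallow copy so the original is not mutated
        for key, value in patch.items():
            merged[key] = (strategic_merge(original[key], value)
                           if key in original else value)
        return merged
    return patch  # scalars and mismatched types: patch value wins

deployment = {"spec": {"template": {"metadata":
              {"labels": {"app.kubernetes.io/name": "carts"}}}}}
patch = {"spec": {"template": {"metadata":
         {"labels": {"CostCenter": "IT"}}}}}

mutated = strategic_merge(deployment, patch)
print(mutated["spec"]["template"]["metadata"]["labels"])
# {'app.kubernetes.io/name': 'carts', 'CostCenter': 'IT'}
```

The existing labels survive and CostCenter is added, which is exactly the behavior you'll observe on the carts Deployment below.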
Go ahead and create the above Policy with kubectl apply:
clusterpolicy.kyverno.io/add-labels created
To see the mutation webhook in action, roll out the carts Deployment with kubectl rollout restart, without explicitly adding a label:
deployment.apps/carts restarted
deployment "carts" successfully rolled out
Validate that the label CostCenter=IT was automatically added to the carts Deployment pod template to meet the policy requirements:
{
  "CostCenter": "IT",
  "app.kubernetes.io/component": "service",
  "app.kubernetes.io/created-by": "eks-workshop",
  "app.kubernetes.io/instance": "carts",
  "app.kubernetes.io/name": "carts"
}
The label was automatically injected into the pod template of the carts Deployment. It's also possible to mutate existing resources in your Amazon EKS clusters with Kyverno policies using the patchStrategicMerge and patchesJson6902 parameters.
This was just a simple example of validating and mutating Deployments with Kyverno. In the upcoming labs, you will explore more advanced use-cases such as enforcing Pod Security Standards and restricting container image registries.