Glossary

#Falco - Host Intrusion Detection tool (only logs/alerts on detections, it does not block)

https://falco.org/

Detects threats at runtime by observing the behaviour of your applications and containers. It is installed on every node and works between the containers and the kernel by inspecting system calls.

Falco uses system calls to secure and monitor a system, by:

  • Parsing the Linux system calls from the kernel at runtime

  • Asserting the stream against a powerful rules engine

  • Alerting when a rule is violated

Kubernetes Audit Rules

Rules for Kubernetes audit events are shipped with the default k8saudit plugin rules. When Falco is installed as a daemon, this rules file is placed in /etc/falco/, so the rules are available for use.
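Custom rules can be added to /etc/falco/falco_rules.local.yaml. A minimal sketch (the rule name, condition and output below are hypothetical):

- rule: shell_in_container
  desc: alert when a shell is started inside a container
  condition: container.id != host and proc.name = bash
  output: "shell started in container (user=%user.name container=%container.id image=%container.image.repository)"
  priority: WARNING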

#seccomp - restrict system calls within a container

Valid options for type include RuntimeDefault, Unconfined, and Localhost. localhostProfile must only be set if type: Localhost. It indicates the path of the pre-configured profile on the node, relative to the kubelet's configured Seccomp profile location (configured with the --root-dir flag).

...
securityContext:
  seccompProfile:
    type: RuntimeDefault

Here is an example that sets the Seccomp profile to a pre-configured file at <kubelet-root-dir>/seccomp/my-profiles/profile-allow.json:

...
securityContext:
  seccompProfile:
    type: Localhost
    localhostProfile: my-profiles/profile-allow.json
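As a sketch, a profile-allow.json like the one referenced above could look as follows (the syscall list is an assumption; a real allow-list must cover everything the workload needs):

{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "open", "close", "exit", "exit_group", "futex", "nanosleep"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}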

#AppArmor - a layer between userspace and kernel syscalls

AppArmor is a Linux kernel security module that allows the system administrator to restrict programs' capabilities with per-program profiles.

Profiles:

  • AppArmor must be installed on every node

  • AppArmor profiles need to be available on every node.

  • AppArmor profiles are specified per container, using annotations.

ProfileModes:

  • Unconfined (no profiles will be loaded - disables AppArmor)

  • Complain (profile violations are permitted but logged)

  • Enforce (profile violations are not permitted)

apiVersion: v1
kind: Pod
metadata:
  name: hello-apparmor
  annotations:
    container.apparmor.security.beta.kubernetes.io/<container_name>: <profile_ref>

The profile_ref can be one of:

  • runtime/default to apply the runtime's default profile

  • localhost/<profile_name> to apply the profile loaded on the host with the name <profile_name>

  • unconfined to indicate that no profiles will be loaded

apiVersion: v1
kind: Pod
metadata:
  name: hello-apparmor
  annotations:
    container.apparmor.security.beta.kubernetes.io/<container_name>: localhost/<profile_name>
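Before a pod can reference localhost/<profile_name>, the profile has to be loaded into the kernel on the node. A sketch, assuming the profile file has been copied to the node:

apparmor_parser -q /etc/apparmor.d/<profile_name>   # load the profile into the kernel
aa-status                                           # list loaded profiles and their modes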

#Network Policy

  • Ingress - controls incoming traffic to pods

  • Egress - controls outgoing traffic from pods
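A sketch of a default-deny ingress policy plus a policy that allows traffic from selected pods (names and labels are assumptions):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}        # selects all pods in the namespace
  policyTypes:
    - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080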

#Ingress

  • rules are matched per host

  • the nginx rewrite-target annotation rewrites the request path

  • don't forget to specify the ingress class:

    spec:
      ingressClassName: nginx
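A sketch combining the points above (host, service name and path are assumptions):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: example.com
      http:
        paths:
          - path: /app
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 80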

#CIS - Center for Internet Security (OS hardening guidelines)

#CIS Kubernetes Benchmark (guidelines for securing Kubernetes)

Tools:

  • kube-bench

curl -L https://github.com/aquasecurity/kube-bench/releases/download/v0.6.12/kube-bench_0.6.12_linux_amd64.deb -o kube-bench_0.6.12_linux_amd64.deb

sudo apt install ./kube-bench_0.6.12_linux_amd64.deb -f
kube-bench -c 1.3.2   # run a single check
kube-bench -g 1.3     # run a whole group of checks
  • docker-bench (a similar tool for auditing Docker hosts against the CIS Docker Benchmark)

#ETCD - Encrypting Secret Data at Rest

Alternative: HashiCorp Vault (external secret management)

kube-apiserver flags:

--encryption-provider-config=/etc/kubernetes/etcd/encrypt_config.yaml
--encryption-provider-config-automatic-reload=true

Generate a random 32-byte key, base64-encoded, for use in the config:

head -c 32 /dev/urandom | base64

/etc/kubernetes/etcd/encrypt_config.yaml:

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
      - configmaps
      - pandas.awesome.bears.example
    providers:
      # the FIRST provider in the list is used to encrypt new writes;
      # with identity first, new data is stored UNENCRYPTED
      - identity: {}
      - aesgcm:
          keys:
            - name: key1
              secret: c2VjcmV0IGlzIHNlY3VyZQ==
            - name: key2
              secret: dGhpcyBpcyBwYXNzd29yZA==
      - aescbc:
          keys:
            - name: key1
              secret: c2VjcmV0IGlzIHNlY3VyZQ==
      - secretbox:
          keys:
            - name: key1
              secret: YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY=

Re-encrypt all existing Secrets with the current configuration:

kubectl get secret -A -o yaml | kubectl replace -f -

| Name | Encryption | Strength | Speed | Key Length | Other Considerations |
| --- | --- | --- | --- | --- | --- |
| identity | None | N/A | N/A | N/A | No encryption. |
| secretbox | XSalsa20 and Poly1305 | Strong | Faster | 32-byte | A newer standard; may not be considered acceptable in environments that require high levels of review. |
| aesgcm | AES-GCM with random nonce | Must be rotated every 200k writes | Fastest | 16, 24, or 32-byte | Not recommended unless an automated key rotation scheme is implemented. |
| aescbc | AES-CBC with PKCS#7 padding | Weak | Fast | 32-byte | Not recommended. |
| kms | Envelope encryption: data is encrypted by data encryption keys (DEKs) using AES-GCM (AES-CBC with PKCS#7 padding prior to v1.25); DEKs are encrypted by key encryption keys (KEKs) managed in an external Key Management Service (KMS) | Strongest | Fast | 32-byte | The recommended choice when a third-party key management tool is available. Simplifies key rotation: a new DEK is generated for each encryption, and KEK rotation is controlled by the user. Requires configuring the KMS provider. |

#Container Runtime Sandboxes

  • need more resources

  • better suited to smaller containers

  • not a good fit for syscall-heavy workloads

  • no direct access to hardware

Motivation: containers share the host kernel, so kernel exploits such as Dirty COW can be used to escape a container; a runtime sandbox adds an extra isolation layer between the container and the kernel.

  • gVisor

Installation

sudo apt-get update && \
sudo apt-get install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg
curl -fsSL https://gvisor.dev/archive.key | sudo gpg --dearmor -o /usr/share/keyrings/gvisor-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/gvisor-archive-keyring.gpg] https://storage.googleapis.com/gvisor/releases release main" | sudo tee /etc/apt/sources.list.d/gvisor.list > /dev/null
sudo apt-get update && sudo apt-get install -y runsc
Enable the runsc runtime in containerd:

/etc/containerd/config.toml

version = 2
[plugins."io.containerd.runtime.v1.linux"]
  shim_debug = true
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
  runtime_type = "io.containerd.runsc.v1"

or generate a complete default config and add the runsc runtime to it:

containerd config default > /etc/containerd/config.toml
systemctl restart containerd.service

Define RuntimeClass

# RuntimeClass is defined in the node.k8s.io API group
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  # The name the RuntimeClass will be referenced by.
  # RuntimeClass is a non-namespaced resource.
  name: gvisor 
# The name of the corresponding CRI configuration
handler: runsc

Use in a pod

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  runtimeClassName: gvisor   # must match the RuntimeClass metadata.name
  # ...

  • Kata Containers (runs containers inside lightweight VMs, QEMU-based)

#Security Contexts

# Pod-level
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
    fsGroupChangePolicy: "OnRootMismatch"

# Container-level (runAsNonRoot can also be set at the pod level)
    securityContext:
      runAsNonRoot: true
      # privileged: true  # container-level only; a privileged container always allows privilege escalation
      allowPrivilegeEscalation: false
      capabilities:
        add: ["NET_ADMIN", "SYS_TIME"]

#Open Policy Agent and Gatekeeper (OPA)

Install
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/deploy/gatekeeper.yaml

Create the ConstraintTemplate

Create the Constraint
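A compact sketch of the two objects, loosely based on the required-labels example from the Gatekeeper docs (names and the required label are assumptions):

apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels: %v", [missing])
        }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-owner
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["owner"]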

https://github.com/killer-sh/cks-course-environment/tree/master/course-content/opa

https://play.openpolicyagent.org

https://medium.com/axons/admission-control-on-kubernetes-9a1667b7e322

https://github.com/BouweCeunen/gatekeeper-policies

#Image footprint (security)

  • reduce image footprint with multi-stage builds (see the sketch after this list)

  • secure and harden images

    • use specific package versions

    • don't run as root

    • make the filesystem read-only (e.g. readOnlyRootFilesystem in the container securityContext)

    • remove shell access

    • follow the Docker best practices
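A sketch of a multi-stage Dockerfile that ends in a minimal, shell-less, non-root image (the Go app, base images and tags are assumptions):

# build stage: full toolchain, discarded afterwards
FROM golang:1.21-alpine AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# runtime stage: distroless, no shell, runs as non-root
FROM gcr.io/distroless/static:nonroot
COPY --from=build /app /app
USER nonroot
ENTRYPOINT ["/app"]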

#Static Analysis

  • manual approach - visual check

  • kubesec (https://kubesec.io/)

    • kubesec scan pod.yaml

  • Conftest - tests configuration files against OPA/Rego policies

LATEST_VERSION=$(wget -O - "https://api.github.com/repos/open-policy-agent/conftest/releases/latest" | grep '"tag_name":' | sed -E 's/.*"([^"]+)".*/\1/' | cut -c 2-)
wget "https://github.com/open-policy-agent/conftest/releases/download/v${LATEST_VERSION}/conftest_${LATEST_VERSION}_Linux_x86_64.tar.gz"
tar xzf conftest_${LATEST_VERSION}_Linux_x86_64.tar.gz
sudo mv conftest /usr/local/bin
conftest help
conftest pull https://raw.githubusercontent.com/open-policy-agent/conftest/master/examples/compose/policy/deny.rego
conftest test deployment.yaml
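By default conftest evaluates Rego policies from a local policy/ directory against the main package. A hypothetical policy/deployment.rego sketch:

package main

deny[msg] {
  input.kind == "Deployment"
  not input.spec.template.spec.securityContext.runAsNonRoot
  msg := "Deployments must set securityContext.runAsNonRoot: true"
}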

kubesec can run as: a binary, a Docker container, a kubectl plugin, or an admission controller (kubesec-webhook).

#Image Vulnerability Scanning

crictl pull python:3.6.12-alpine3.11
trivy image python:3.10.0a4-alpine --output /root/python_alpine.txt 
trivy image --input ruby-2.3.0.tar  --output /root/python_alpine.txt -f json
  • trivy-operator

https://github.com/aquasecurity/trivy-operator/blob/main/docs/index.md

kubectl apply -f https://raw.githubusercontent.com/aquasecurity/trivy-operator/main/deploy/static/trivy-operator.yaml
kubectl get vulnerabilityreports -o wide
kubectl get configauditreports -o wide

#Secure supply chain

Kubernetes - private registry

kubectl create secret docker-registry my-private-registry \
--docker-server my-private-server \
--docker-username my-username \
--docker-password secretPass \
--docker-email [email protected]

kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "my-private-registry"}]}'
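Instead of patching the ServiceAccount, the pull secret can also be referenced directly in a Pod spec (hypothetical names):

apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod
spec:
  imagePullSecrets:
    - name: my-private-registry
  containers:
    - name: app
      image: my-private-server/app:1.0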

Whitelist Registries with OPA Gatekeeper

apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8strustedimages
spec:
  crd:
    spec:
      names:
        kind: K8sTrustedImages
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8strustedimages

        violation[{"msg": msg}] {
          image := input.review.object.spec.containers[_].image
          not startswith(image, "docker.io/")
          not startswith(image, "k8s.gcr.io/")
          msg := "not trusted image!"
        }

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sTrustedImages
metadata:
  name: pod-trusted-images
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]

Note: images from the default registry (e.g. a plain nginx) are not rewritten to docker.io/nginx in the admission request, so the rule above rejects them; either require fully qualified image names or add a condition that also allows images without a registry prefix.

ImagePolicyWebhook

  • /etc/kubernetes/admission/admission_configuration.yaml

    apiVersion: apiserver.config.k8s.io/v1
    kind: AdmissionConfiguration
    plugins:
      - name: ImagePolicyWebhook
        configuration:
          imagePolicy:
            kubeConfigFile: /etc/kubernetes/admission/kubeconf
            allowTTL: 50
            denyTTL: 50
            retryBackoff: 500
            defaultAllow: true # should be false in production;
            # if true and the external webhook service is unreachable, pods are admitted anyway
  • /etc/kubernetes/admission/kubeconf

    apiVersion: v1
    kind: Config
    
    # clusters refers to the remote service.
    clusters:
    - cluster:
        certificate-authority: /etc/kubernetes/admission/external-cert.pem  # CA for verifying the remote service.
        server: https://external-service:1234/check-image                   # URL of remote service to query. Must use 'https'.
      name: image-checker
    
    contexts:
    - context:
        cluster: image-checker
        user: api-server
      name: image-checker
    current-context: image-checker
    preferences: {}
    
    # users refers to the API server's webhook configuration.
    users:
    - name: api-server
      user:
        client-certificate: /etc/kubernetes/admission/apiserver-client-cert.pem     # cert for the webhook admission controller to use
        client-key:  /etc/kubernetes/admission/apiserver-client-key.pem             # key matching the cert
  • /etc/kubernetes/manifests/kube-apiserver.yaml - add the flags and mount the admission directory:

--admission-control-config-file=/etc/kubernetes/admission/admission_configuration.yaml
--enable-admission-plugins=NodeRestriction,ImagePolicyWebhook

    volumeMounts:
    - mountPath: /etc/kubernetes/admission
      name: k8s-admission
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/admission
      type: DirectoryOrCreate
    name: k8s-admission

TODO: find an example of an external ImagePolicyWebhook backend service.
