How to Secure Your Kubernetes Cluster with OpenID Connect and RBAC

A Kubernetes (k8s) cluster comprises worker machines called nodes and a control plane consisting of the API server, scheduler, etcd, the controller manager, and, in the case of a PaaS (platform as a service), the cloud controller manager. The containers deployed to the cluster run in pods on the worker nodes, while the control plane takes care of scheduling, responding to requests, and managing the cluster.
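For example, on a cluster where the control plane components run as pods in the kube-system namespace (as on a kubeadm-provisioned cluster), you can list them directly; the output below is illustrative:

kubectl get pods -n kube-system
# Typical output includes the control plane components named above:
#   etcd-control-plane-0
#   kube-apiserver-control-plane-0
#   kube-controller-manager-control-plane-0
#   kube-scheduler-control-plane-0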


This is a companion discussion topic for the original entry at https://developer.okta.com/blog/2021/11/08/k8s-api-server-oidc

Hi,

I have followed your instructions carefully. The last step, "kubectl --user=oidc get nodes", failed with this error: "error: You must be logged in to the server (Unauthorized)".

I can successfully run "kubectl oidc-login setup --oidc-issuer-url=https://dev-54891300.okta.com/oauth2/aus8znbrivtIvvUbe5d7 --oidc-client-id=<client_id>" and receive a claim as shown below. The necessary RBAC role and clusterrolebindings have been created. I'm stumped, so any help is appreciated. This fails on both Linux and Windows machines.
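For reference, the clusterrolebinding maps the k8s-admins group from the token to the built-in cluster-admin role, roughly like this sketch (the binding name is illustrative):

kubectl create clusterrolebinding oidc-cluster-admin \
  --clusterrole=cluster-admin \
  --group=k8s-admins
# The --group value must match an entry in the ID token's "groups" claim.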

Sample claim:
{
  "sub": "00u8znd93j05UO03W5d7",
  "ver": 1,
  "iss": "https://dev-54891300.okta.com/oauth2/aus8znbrivtIvvUbe5d7",
  "aud": "0oa8znhaqq3Lel6Wc5d7",
  "iat": 1680739122,
  "exp": 1680742722,
  "jti": "ID.a_e5RgJZ3wKOg8GDMn2qfQZMNhMigu_vnGNL4E4zvCc",
  "amr": [
    "pwd"
  ],
  "idp": "00o86ydr2iCI2XyxZ5d7",
  "nonce": "4pu2RGFN-cyPEmC9kwZLnc_G4WGfAt9R8QC1IBGF69M",
  "auth_time": 1680738431,
  "at_hash": "zOalIT4_F5pa3IsGPlkitA",
  "groups": [
    "k8s-admins"
  ]
}

Never mind, I figured it out. I had to add the following lines to the client Kubernetes config file; they are missing when "kubectl oidc-login setup --oidc-issuer-url=<issuer_url> --oidc-client-id=<client_id>" is executed and prints out the "kubectl config set-credentials" command:

- --oidc-extra-scope=email
- --oidc-extra-scope=offline_access
- --oidc-extra-scope=profile
- --oidc-extra-scope=openid
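For anyone else hitting this, the complete command ends up looking roughly like the sketch below (issuer URL and client ID are placeholders, and the exec API version may differ by kubelogin version):

kubectl config set-credentials oidc \
  --exec-api-version=client.authentication.k8s.io/v1beta1 \
  --exec-command=kubectl \
  --exec-arg=oidc-login \
  --exec-arg=get-token \
  --exec-arg=--oidc-issuer-url=<issuer_url> \
  --exec-arg=--oidc-client-id=<client_id> \
  --exec-arg=--oidc-extra-scope=email \
  --exec-arg=--oidc-extra-scope=offline_access \
  --exec-arg=--oidc-extra-scope=profile \
  --exec-arg=--oidc-extra-scope=openid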

Thanks for letting us know you figured it out!

@deepu105

In the article, you mentioned "For this, we are leaving the values at default, which is 1 hour lifetime and unlimited refresh interval, but you can change them as per your needs," but based on my experiments, setting the timeouts has no effect and the user is sent to the login page after 2 hours. My settings are shown below. I suspect that kubelogin uses its own 2-hour timeout and ignores everything else:

resource "okta_auth_server_policy_rule" "auth_policy_rule" {
  name           = "AuthCode + PKCE"
  auth_server_id = okta_auth_server.oidc_auth_server.id
  policy_id      = okta_auth_server_policy.auth_policy.id
  priority       = 1
  grant_type_whitelist = [
    "authorization_code"
  ]
  scope_whitelist = ["*"]
  group_whitelist = ["EVERYONE"]

  access_token_lifetime_minutes  = 6
  refresh_token_lifetime_minutes = 10
  refresh_token_window_minutes   = 7
}
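One thing worth ruling out is kubelogin's on-disk token cache: tokens issued under the old policy can mask a lifetime change until they expire. A quick check, assuming the default cache location and the issuer from earlier in the thread (the client ID is a placeholder):

# Clear kubelogin's cached tokens (default --token-cache-dir):
rm -rf ~/.kube/cache/oidc-login

# Force a fresh login and print the new credential:
kubectl oidc-login get-token \
  --oidc-issuer-url=https://dev-54891300.okta.com/oauth2/aus8znbrivtIvvUbe5d7 \
  --oidc-client-id=<client_id> \
  --oidc-extra-scope=email --oidc-extra-scope=offline_access \
  --oidc-extra-scope=profile --oidc-extra-scope=openid

Decoding the exp claim of the freshly issued token then shows whether the Okta policy rule is actually being applied to this client.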