Cloud Console

Pradeep Tammali
7 min read · Nov 30, 2020

Prerequisites:

  • Docker
  • Kubernetes cluster
  • Python 3

Overview:

Sometimes, when we use a cloud or connect to servers, tools, or applications, we need certain prerequisites installed on our system. If you are the admin of a platform, you might need all your users to install some plugins on their systems to connect to it. You might also have developed custom binaries which need to be installed on client machines in order to connect to your platform or system. This can raise security concerns, or users may see it as unwanted software on their machines. To avoid these kinds of situations, you can build a common console for all your users, with all the prerequisites, plugins, and configuration installed on one system, and make it available to everyone. Usually we use seed nodes to achieve this.

When you are a Kubernetes cluster admin, you might put restrictions on the way your users access the cluster. You may have defined different RBAC rules for each group or user. In such cases, you may have to provide a different configuration for each user, which creates a lot of work for the admin. When OIDC is enabled on your cluster, users might have to use a tool such as kubelogin to generate the configuration, and they also have to install plugins such as kubectl and helm in their host environment to access and operate on the cluster. To avoid all of this, you can install these plugins in a pod, generate a config for each user, keep it in their home directory, and expose the pod publicly in the browser for users to access. You can do this with Cloud Console.

Cloud Console:

I have a Kubernetes cluster for which I have configured OIDC with Keycloak. Keycloak is integrated with AD with specific groups and users. Until now, users created the config file manually to access the Kubernetes cluster: they installed kubelogin on their host machines to generate the config, and kubectl to access the cluster. The API server is exposed publicly. When users generate a config using kubelogin, the tokens are valid for 5 hours; after that, they have to generate a new token with kubelogin again. To automate all of this and provide a common shell for all users, I have built a pod with all the required plugins installed. When a user logs into this pod, they are prompted for a username and password. These credentials are validated against the configured Keycloak, and a kubeconfig is generated for the user and stored in the ~/.kube/config file.

Configure Cloud Console:

As I explained before, I don’t want my users to install any plugins or tools on their machines to access the Kubernetes cluster. Even users without their own machine should be able to access the console from a browser. To achieve this, I first need to build a pod that asks for a username and password on login and validates them against Keycloak. Along with the validation, we also fetch the ID token and refresh token, which are used to generate the kubeconfig for the user.

Let’s build the console.

git clone https://github.com/PradeepTammali/Cloud-Console.git
cd Cloud-Console/Console/

Here you see some files. The script setup.sh is executed whenever a user logs into the pod and forces the user to authenticate. If the credentials the user provides are invalid, they are logged out of the shell immediately. Control characters such as CTRL-C or CTRL-Z, or any other abort signal sent while entering the credentials, will also terminate the shell. To validate the credentials, we need to configure the following Keycloak details in the script.

# Keycloak Details
KEYCLOAK="Keycloak URL with port <cloudconsole.keycloak.com:8080>"
REALM="Keycloak REALM"
CLIENT="Keycloak Client"
CLIENT_SECRET="Keycloak Client Secret"
ROOTCA_PEM="Root Certificate for the Keycloak to authenticate with k8s"
K8S_APISERVER="Kubernetes API server URL"
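For reference, here is a minimal sketch of how setup.sh could validate the credentials against Keycloak's token endpoint with curl, using the variables above and the Resource Owner Password Credentials grant (the file name rootCA.pem and the exact request are assumptions; Direct Access Grants must be enabled for the client):

# Validate the user's credentials against Keycloak and fetch tokens.
RESPONSE=$(curl -s --cacert rootCA.pem \
  -d "grant_type=password" \
  -d "scope=openid" \
  -d "client_id=$CLIENT" \
  -d "client_secret=$CLIENT_SECRET" \
  -d "username=$USERNAME" \
  -d "password=$PASSWORD" \
  "https://$KEYCLOAK/auth/realms/$REALM/protocol/openid-connect/token")
# On success, the JSON response contains the id_token and refresh_token.

If Keycloak rejects the credentials, the endpoint returns an error instead of tokens, and the script can log the user out.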

The rootCA file is the certificate you use when configuring the API server's OIDC with Keycloak; OIDC requires Keycloak to run on HTTPS. The same certificate that is passed to the --oidc-ca-file parameter in the OIDC configuration should be used here, base64 encoded.
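For example, on Linux you could produce the encoded value like this (assuming the certificate is in a file named rootCA.pem):

base64 -w 0 rootCA.pem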

The script creates the user in the pod if they do not already exist and adds them to a default group. The system umask is set to 077 to keep each user's data private from the other users.
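A minimal sketch of this part of setup.sh, assuming it uses standard shell tools (the group name and prompts are placeholders; the script in the repo may differ in detail):

# Terminate the shell on CTRL-C, CTRL-Z or other abort signals.
trap 'exit 1' INT TSTP TERM

read -p "Username: " USERNAME
read -r -s -p "Password: " PASSWORD

# Create the user in the pod if it does not exist yet.
if ! id "$USERNAME" >/dev/null 2>&1; then
    useradd -m -g users "$USERNAME"
fi

# Keep each user's files readable only by that user.
umask 077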

The file banner contains the text that is displayed whenever you log into the pod. This can be a welcome message or information about the shell and how to use it. The content of the banner looks like this.

###############################################################
#                  Welcome to Cloud Console                   #
#  This is the client shell initialized with required         #
#  tools and configuration. Please start using the shell      #
#  by logging in with your credentials                        #
###############################################################

You can customize the message according to your requirement.

The file kubeconfig is a template configuration file whose values are replaced dynamically when a user logs into the pod, and the generated kube configuration is placed in the ~/.kube/config file. All of the dynamically replaced details are configured in setup.sh except the ID token and refresh token, which are fetched from Keycloak when the user logs in.
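For illustration, the user entry of such a template could be based on kubectl's standard OIDC auth provider, with placeholders (the placeholder names below are hypothetical) replaced at login:

users:
- name: oidc-user
  user:
    auth-provider:
      name: oidc
      config:
        idp-issuer-url: https://<KEYCLOAK>/auth/realms/<REALM>
        client-id: <CLIENT>
        client-secret: <CLIENT_SECRET>
        id-token: <ID_TOKEN>
        refresh-token: <REFRESH_TOKEN>
        idp-certificate-authority-data: <ROOTCA_PEM>

With this auth provider, kubectl can also renew the ID token using the refresh token, as long as the refresh token itself is still valid.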

Build Cloud Console:

The Dockerfile installs the required plugins and dependencies in the image. The plugins kubectl, kubelogin and helm are installed in this image.
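As an illustrative sketch of such a Dockerfile (the base image, versions and download URLs are assumptions; check the actual Dockerfile in the repo):

FROM ubuntu:20.04
RUN apt-get update && apt-get install -y curl unzip && rm -rf /var/lib/apt/lists/*

# kubectl
RUN curl -Lo /usr/local/bin/kubectl \
      https://storage.googleapis.com/kubernetes-release/release/v1.19.0/bin/linux/amd64/kubectl && \
    chmod +x /usr/local/bin/kubectl

# helm
RUN curl -L https://get.helm.sh/helm-v3.4.1-linux-amd64.tar.gz | tar xz && \
    mv linux-amd64/helm /usr/local/bin/helm && rm -rf linux-amd64

# kubelogin
RUN curl -Lo /tmp/kubelogin.zip \
      https://github.com/int128/kubelogin/releases/download/v1.19.3/kubelogin_linux_amd64.zip && \
    unzip /tmp/kubelogin.zip -d /tmp && \
    mv /tmp/kubelogin /usr/local/bin/kubelogin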

If you have a private docker registry running on cloudconsole.registry.com:5000, you can build and push the image.

sudo docker build -t cloudconsole.registry.com:5000/cloud-console:v1 .
sudo docker push cloudconsole.registry.com:5000/cloud-console:v1

Deploy Cloud Console:

The file cloud-console.yaml contains a deployment of Cloud Console with one replica. If you have a Kubernetes cluster available, you can deploy it simply as follows; you can use kind if you do not have a Kubernetes cluster available.

kubectl apply -f cloud-console.yaml

The file rbac.yaml contains the Role and RoleBinding that allow only specific groups and users to access the Cloud Console. You can adjust the RoleBinding to make it accessible to everyone.
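As a sketch, a Role and RoleBinding that allow only members of a console-users group (the names and namespace below are placeholders) to exec into the console pods could look like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cloud-console-user
  namespace: cloud-console
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec"]
  verbs: ["get", "list", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cloud-console-user
  namespace: cloud-console
subjects:
- kind: Group
  name: console-users
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: cloud-console-user
  apiGroup: rbac.authorization.k8s.io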

kubectl apply -f rbac.yaml

Once the pod is in the running state, you can log into it, and it prompts for a username and password along with the banner.
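For example (the pod name and namespace depend on your deployment):

kubectl exec -it <cloud-console-pod> -n <namespace> -- /bin/bash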

Now the console is ready to be used by all the users of the platform. Each user has their own protected data and Kubernetes configuration, along with all the plugins and commands they need, which avoids any installation or configuration on their own machines.

Cloud Console Server:

Still, the console is not yet accessible to users: it is running as a pod inside the same Kubernetes cluster they need access to. To make the console available to all users in the browser, we deploy the Cloud Console Server, which creates a connection to the console pod using the Python Kubernetes client and opens a shell in the browser. This web server is exposed publicly, and all users can use it to log into the console. The Cloud Console Server is built with Python, to connect to the console pod in Kubernetes, and JavaScript, to render the terminal in the browser with xterm.
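At its core, the server uses the exec API of the official Python Kubernetes client to attach to the console pod. A minimal sketch of that connection, independent of the actual implementation in the repo (the pod and namespace names are placeholders):

from kubernetes import client, config
from kubernetes.stream import stream

# Load credentials: from a kubeconfig here; in the cluster this could
# instead come from a service account or a user-supplied token.
config.load_kube_config()
v1 = client.CoreV1Api()

# Open an interactive shell in the console pod. Reads and writes on this
# stream are relayed to the xterm terminal running in the browser.
shell = stream(
    v1.connect_get_namespaced_pod_exec,
    name="cloud-console-pod",   # placeholder pod name
    namespace="cloud-console",  # placeholder namespace
    command=["/bin/bash"],
    stderr=True, stdin=True, stdout=True, tty=True,
    _preload_content=False,
)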

Build Cloud Console Server:

Let’s build the Cloud Console Server now.

git clone https://github.com/PradeepTammali/Cloud-Console.git
cd Cloud-Console/ConsoleServer/

If you are running this server locally, you need to export the environment variable API_SERVER with the Kubernetes API server URL as its value and then run the server.

export API_SERVER="https://cloud-console.apiserver.com"
python src/app.py

If you have a private docker registry running on cloud-console-server.registry.com:5000, you can build and push the docker image.

sudo docker build -t cloud-console-server.registry.com:5000/cloud-console-server:v1 .
sudo docker push cloud-console-server.registry.com:5000/cloud-console-server:v1

Deploy Cloud Console Server:

Update the API server URL in the YAML file cloud-console-server.yaml and change the exposed port as needed. The server will be running over plain HTTP.

env:
- name: API_SERVER
  value: "https://cloud-console.apiserver.com"

Deploy the Cloud Console Server in the Kubernetes cluster as follows.

kubectl apply -f cloud-console-server.yaml

Now, if you are running the server locally, you can reach it at http://localhost:5000; if you have deployed it in the K8s cluster, you can reach it at http://<external or public IP>:30002. This will open a page that looks like this.

Fill in the fields: the namespace where Cloud Console is deployed, the Cloud Console pod name, the container name, and an ID token to connect to the Cloud Console pod. The Cloud Console Server itself does not have privileges to connect to the Cloud Console pod; that is why I explicitly take the ID token and use it to connect to the pod. If the server is deployed in the same namespace as the console, or if it has permission to access the console pod, then you can use the load_incluster_config method to get the Kubernetes configuration, as explained here, and avoid passing the ID token from the browser.
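A sketch of how the server could build a client from the supplied ID token (the form handling is omitted; API_SERVER comes from the environment variable set above):

import os
from kubernetes import client

# The ID token submitted in the browser form (placeholder value).
id_token = "<ID token from the login form>"

# Build a client configuration that authenticates with the user's token.
configuration = client.Configuration()
configuration.host = os.environ["API_SERVER"]
configuration.api_key["authorization"] = id_token
configuration.api_key_prefix["authorization"] = "Bearer"
v1 = client.CoreV1Api(client.ApiClient(configuration))

With load_incluster_config, the client would instead authenticate with the pod's own service account, and the token field in the form would no longer be needed.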

That’s it. Your platform users can now use this common shell.

Thank you for reading.

GitHub Repo:

https://github.com/PradeepTammali/Cloud-Console.git
