
An often-asked question I get during discussions with team members and mentees is: what does it take to progress to the next level in my career? While this question is simple on its face, there is no prescriptive answer to it. The context, the available opportunities and, most importantly, the journey to the next career level are unique to each individual.

However, there are still a few general patterns that, if well understood, build a solid foundation for career growth.

In this article, let me take you through some of these aspects that have worked for me and my mentees over…

Red Hat Enterprise Linux CoreOS (RHCOS) is the default operating system for OpenShift. Recently, OpenShift added support for RHCOS extensions, which allow extending RHCOS in a controlled fashion.

In this article, let's see how to enable a new RHCOS extension. This is only required if you are an OpenShift engineer working on enhancements; as an OpenShift user you won't need this.

The RHCOS contents are part of the OpenShift release image itself and are referred to as machine-os-content. Consequently, adding a new RHCOS extension requires working with machine-os-content.

Let’s take an enhancement scenario and work through the steps.
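As a starting point, you can locate the machine-os-content image referenced by a release payload with `oc adm release info`. A minimal sketch, assuming access to a cluster's `oc` CLI; the release pullspec below is a placeholder for whichever release you are working against:

```shell
# Print the pullspec of the machine-os-content image inside a release payload.
# Replace the release pullspec with the one you are enhancing.
oc adm release info --image-for=machine-os-content \
    quay.io/openshift-release-dev/ocp-release:4.6.0-x86_64
```

The printed pullspec is the image you would rebuild with your extension and override in the release.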



This is a quick how-to guide for deploying a very simple hello-world web application on OpenShift and exposing it over HTTPS. You can follow the same approach to expose any application securely over HTTPS.

The Pod and Service YAML for the hello-world application is given below.

kind: Pod
apiVersion: v1
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  containers:
  - name: hello-world
    # Assumed image: the "-text" argument and port 5678 match hashicorp/http-echo
    image: hashicorp/http-echo
    args:
    - "-text=hello-world"
  imagePullSecrets:
  - name: pull-secret
---
kind: Service
apiVersion: v1
metadata:
  name: hello-world-service
  labels:
    app: hello-world
spec:
  selector:
    app: hello-world
  ports:
  - port: 5678

Once this is deployed you'll have a "hello-world-service":

$ oc get svc
NAME…
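One way to expose the service over HTTPS (a sketch, not necessarily the article's exact steps) is an edge-terminated route, where the OpenShift router terminates TLS using its default certificate:

```shell
# Create an edge-terminated route in front of the service;
# the router terminates TLS and forwards plain HTTP to port 5678.
oc create route edge hello-world \
    --service=hello-world-service \
    --port=5678
```

The route's hostname then serves the application over https://, with no TLS configuration inside the pod itself.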

Automated OpenShift UPI Deployments on Power

A few months back we had the first release of Red Hat OpenShift 4 on the IBM Power platform. This was a joint engineering effort between the Red Hat and IBM teams, and it continues as we work towards making the latest OpenShift releases available for our enterprise Power customers.

In addition, it's been our constant endeavour to provide tools and assets that make it easier to spin up OpenShift 4.x clusters.

Our current focus is to simplify the deployment of the services required for OpenShift 4.x UPI (User Provisioned Infrastructure) based deployments on Power.

This article aims to provide a summary of the available options.

OpenShift 4.x UPI Deployment Options


Recently, a few of my colleagues were hit with OOMKilled issues for their application pods when migrating from an OpenShift cluster with smaller worker nodes (in terms of CPUs and memory) to another cluster with bigger worker nodes (more CPUs and memory).

The application was working absolutely fine on the smaller cluster; however, many of the application pods failed to start and threw OOMKilled errors when migrated to the bigger cluster. This is when some of us got pulled in to help analyse the issue at the OpenShift and container-runtime layers.
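When debugging such failures, a useful first step (independent of the root cause) is to confirm the kill reason and the memory limit the runtime enforces. A sketch, assuming a hypothetical pod name `my-app-pod`:

```shell
# Show why the container was last terminated (an OOM kill reports "OOMKilled").
oc get pod my-app-pod \
    -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'

# Show the memory limit configured for the first container in the pod;
# the kernel OOM-kills the container when it exceeds this cgroup limit.
oc get pod my-app-pod \
    -o jsonpath='{.spec.containers[0].resources.limits.memory}'
```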

This article is a summary of the work with the hope…

OpenShift provides a built-in container image registry for working with container images. The registry is configured and managed by the Image Registry Operator and runs in the openshift-image-registry namespace.

In case you want to see detailed information about the image registry operator, run the following command:

oc describe clusteroperator image-registry

In case you want to see the pods created by the image registry operator, run the following command:

oc get pods -n openshift-image-registry

Follow these steps to use NFS storage for the image registry.

Create NFS Persistent Volume Claim

The following is an example YAML to create the NFS PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registrypvc
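Once the PVC is bound, the registry can be pointed at it by patching the Image Registry Operator's cluster config. A sketch, assuming the claim name `registrypvc` from the PVC above:

```shell
# Tell the Image Registry Operator to store images on the PVC.
oc patch configs.imageregistry.operator.openshift.io cluster \
    --type merge \
    --patch '{"spec":{"storage":{"pvc":{"claim":"registrypvc"}}}}'
```

The operator then redeploys the registry pod with the PVC mounted as its storage backend.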

This article lists the steps needed to run Intel Containers on Power (ppc64le). You can use this approach for a quick dev/test scenario.

The Linux system needs to be set up to use QEMU user-mode emulation. For this, we register the appropriate QEMU binary as the interpreter/handler for any Intel binaries. Ensure that you are using the latest OS release with a 4+ series kernel.

Perform the following steps on your Power (ppc64le) system:

  • Download the QEMU static binary

# wget <qemu-user-static rpm URL>/qemu-user-static-5.2.0-5.fc34.1.ppc64le.rpm
# rpm2cpio qemu-user-static-5.2.0-5.fc34.1.ppc64le.rpm | cpio -idmv
# cp usr/bin/qemu-x86_64-static /usr/bin/qemu-x86_64-static
# chmod +x /usr/bin/qemu-x86_64-static
  • Register with ‘F’…
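The registration step the bullet above refers to can be sketched as follows, using the standard x86_64 ELF magic and mask from QEMU's binfmt configuration scripts. The trailing 'F' (fix-binary) flag makes the kernel open the interpreter at registration time, so emulation also works inside containers:

```shell
# Mount binfmt_misc if it is not already mounted (requires root).
mount -t binfmt_misc binfmt_misc /proc/sys/fs/binfmt_misc 2>/dev/null

# Register qemu-x86_64-static as the handler for x86_64 ELF binaries.
# Magic/mask match QEMU's qemu-binfmt-conf.sh; 'F' preloads the interpreter.
echo ':qemu-x86_64:M::\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x3e\x00:\xff\xff\xff\xff\xff\xfe\xfe\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-x86_64-static:F' \
    > /proc/sys/fs/binfmt_misc/register
```

After this, running an x86_64 container image on the ppc64le host transparently invokes the QEMU interpreter.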

Trusting Kubernetes Cluster in the Cloud

Cowritten by Harshal

Currently, we have two common Kubernetes cluster deployment scenarios in the cloud:

  1. Customer Managed — Kubernetes control plane and nodes are owned by the customer
  2. Provider Managed — Kubernetes control plane is owned by provider and nodes are owned by the customer

So, as a user, how can I be sure of the following?

  1. The application runs exactly the way the specification (e.g. Pod or Deployment YAML) is written: image, start program, arguments, input data, output, etc.
  2. No unauthorised modifications are made to the application specification (Pod or Deployment YAML).
  3. Kubernetes secrets are not read by unauthorised entities…

Secure containers on Kubernetes cluster

Raksh — Secure Containers on Kubernetes

Securing data and code has been an area of focus across CPU architectures. For example:

  • Intel provides SGX and Total Memory Encryption (TME/MKTME)
  • AMD provides Secure Memory Encryption and Secure Encrypted Virtualization (SEV)
  • IBM Power provides Secure Virtual Machine (SVM) and the Protected Execution Facility (PEF)

Recently, our focus has been on securing containerised workloads by leveraging VM-based Trusted Execution Environments, with the aim of protecting in-use data and code without changing application code. Our team has a long history of working towards isolating and securing container workloads for our customers. The picture below gives a quick timeline overview.

Cowritten by @rahulchenny

For an effective digital transformation, IT organizations are increasingly adopting a microservices-based architecture. Container-based infrastructure provides the basis for this new architecture, which enables deployment in minutes, workload portability across multiple clouds, and several other benefits.

You might be familiar with the Twelve-Factor App methodology that provides a well-defined guideline for developing microservices. If not, see “The Twelve-Factor App” for an introduction. While the Twelve-Factor App is a commonly used pattern to run, scale, and deploy cloud-native applications, there are seven additional factors that are essential for a production environment. These factors are: Observable, Schedulable, Upgradable, Least…
