An often-asked question I get during discussions with team members and mentees is: what does it take to progress to the next level in one's career? While the question is simple on its face, there is no prescriptive answer to it. The context, the available opportunities, and most importantly the journey to the next career level are unique to each individual.
However, there are still a few general patterns that, if well understood, build a solid foundation for career growth.
In this article, let me take you through some of these aspects that have worked for me and my mentees over…
In this article, let's see how to enable a new RHCOS extension. This is only required if you are an OpenShift engineer working on enhancements; as an OpenShift user, you won't need this.
The RHCOS content is part of the OpenShift release image itself and is referred to as machine-os-content. Consequently, adding a new RHCOS extension will require working with…
Let’s take an enhancement scenario and work through the steps.
This is a quick how-to guide for deploying a very simple hello-world web application on OpenShift and exposing it over HTTPS. You can follow the same approach to expose any application securely over HTTPS.
The Pod and Service YAML for the hello-world application is given below.
- name: hello-world
- name: pull-secret
---
kind: Service
- port: 5678
Once this is deployed, you'll have a “hello-world-service”:
$ oc get svc
NAME…
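The inline YAML above survives only in fragments, so here is a minimal sketch of what a complete Pod-and-Service pair along these lines might look like. The image (hashicorp/http-echo, which happens to listen on 5678 by default) and the app label are my assumptions, not taken from the original article:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
  labels:
    app: hello-world                       # assumed label so the Service can select this Pod
spec:
  containers:
  - name: hello-world
    image: docker.io/hashicorp/http-echo   # assumed image; serves on port 5678 by default
    args: ["-text=hello world"]
  imagePullSecrets:
  - name: pull-secret                      # matches the pull-secret fragment above
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-service
spec:
  selector:
    app: hello-world
  ports:
  - port: 5678
    targetPort: 5678
```

From here, one common way to get HTTPS is an edge-terminated route, e.g. oc create route edge --service=hello-world-service, though the original article may use a different approach.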
A few months back we had the first release of Red Hat OpenShift 4 on the IBM Power platform. This was a joint engineering effort between the Red Hat and IBM teams, which continues as we work towards making the latest OpenShift releases available for our enterprise Power customers.
In addition, it has been our constant endeavour to provide tools and assets that make it easier to spin up OpenShift 4.x clusters.
Our current focus is to simplify the deployment of the services required for OpenShift 4.x UPI (User-Provisioned Infrastructure) based deployments on Power.
This article aims to provide a summary of the available options.
Recently, a few of my colleagues were hit by OOMKilled issues for their application pods when migrating from an OpenShift cluster with smaller worker nodes (in terms of CPUs and memory) to another cluster with bigger worker nodes (more CPUs and memory).
The application was working absolutely fine on the smaller cluster; however, many of the application pods failed to start and threw OOMKilled errors when migrated to the bigger cluster. This is when some of us got pulled in to help analyse the issue with respect to OpenShift and the container runtime.
This article is a summary of the work with the hope…
OpenShift provides a built-in Container Image Registry for working with container images. The Registry is configured and managed by the Image Registry Operator. It runs in the openshift-image-registry namespace.
To see detailed information about the image registry operator, run the following command:
oc describe configs.imageregistry.operator.openshift.io
To see the pods created by the image registry operator, run the following command:
oc get pods -n openshift-image-registry
Follow these steps to use NFS storage for the Image Registry.
Following is an example YAML to create an NFS PVC:
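The example YAML itself appears to have been truncated from this excerpt, so here is a minimal sketch of what an NFS-backed PVC could look like. The claim name, the size, and the assumption that a matching NFS PersistentVolume (or an NFS-capable storage class) already exists are mine, not the article's:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-storage            # assumed name
  namespace: openshift-image-registry
spec:
  accessModes:
  - ReadWriteMany                   # NFS supports shared access, useful if the registry is scaled out
  resources:
    requests:
      storage: 100Gi                # assumed size
```

Once the PVC is bound, the registry can be pointed at it with a patch along these lines: oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"pvc":{"claim":"registry-storage"}}}}'.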
This article lists the steps needed to run Intel (x86_64) containers on Power (ppc64le). You can use this approach for a quick dev/test scenario.
The Linux system needs to be set up to use QEMU user-mode emulation. For this, we need to register the appropriate QEMU binary as the interpreter/handler for any Intel binaries. Ensure that you are using a recent OS release with a 4.x or later kernel.
Perform the following steps on your Power (ppc64le) system:
# wget http://www.rpmfind.info/linux/fedora-secondary/releases/34/Everything/ppc64le/os/Packages/q/qemu-user-static-5.2.0-5.fc34.1.ppc64le.rpm
# rpm2cpio qemu-user-static-5.2.0-5.fc34.1.ppc64le.rpm | cpio -idmv
# cp usr/bin/qemu-x86_64-static /usr/bin/qemu-x86_64-static
# chmod +x /usr/bin/qemu-x86_64-static
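The commands above install the static QEMU binary but do not themselves register it as the handler for Intel binaries. A hedged sketch of that registration via binfmt_misc follows (run as root on the Power host; the magic/mask string is the standard x86_64 ELF pattern used by QEMU's qemu-binfmt-conf.sh, and the final step assumes podman is installed):

```shell
# Mount the binfmt_misc filesystem if it is not already mounted
mount -t binfmt_misc binfmt_misc /proc/sys/fs/binfmt_misc 2>/dev/null || true

# Register qemu-x86_64-static as the interpreter for 64-bit little-endian x86_64 ELF binaries
echo ':qemu-x86_64:M::\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x3e\x00:\xff\xff\xff\xff\xff\xfe\xfe\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-x86_64-static:F' \
  > /proc/sys/fs/binfmt_misc/register

# Verify the registration took effect
cat /proc/sys/fs/binfmt_misc/qemu-x86_64

# Optionally confirm emulation end to end by running an Intel container image
podman run --rm --arch amd64 docker.io/library/alpine uname -m
```

The F (fix-binary) flag makes the kernel load the interpreter at registration time, which is what lets statically linked QEMU work inside container mount namespaces.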
Co-written by Harshal
Currently, we have two common Kubernetes cluster deployment scenarios in the cloud:
So, as a user, how can I be sure of the following?
Securing data and code has been an area of focus across CPU architectures. For example:
Recently our focus has been on securing containerised workloads by leveraging VM-based Trusted Execution Environments, with the aim of protecting in-use data and code without changing application code. Our team has a long history of working towards isolating and securing container workloads for our customers. The picture below gives a quick timeline overview.
Co-written by @rahulchenny
For an effective digital transformation, IT organizations are increasingly adopting a microservices-based architecture. Container-based infrastructure provides the basis for this new architecture, which enables deployment in minutes, workload portability across multiple clouds, and several other benefits.
You might be familiar with the Twelve-Factor App methodology that provides a well-defined guideline for developing microservices. If not, see “The Twelve-Factor App” for an introduction. While the Twelve-Factor App is a commonly used pattern to run, scale, and deploy cloud-native applications, there are seven additional factors that are essential for a production environment. These factors are: Observable, Schedulable, Upgradable, Least…