Ah, the holidays are coming fast, and New Zealand observes a shutdown period from December until mid-January, so brownouts will be quite prevalent from Q4 until early Q1 next year. If you’re like me, a technologist who loves trying new things and actively seeking innovative ways to solve real-world problems during a stand-down period, then this window of opportunity is worth considering: a chance to improve our skills (individually or as a team), generate value from the technology we’ve invested in, and unravel the solutions to our day-to-day problems.
My main focus was OpenShift during my time at Red Hat, including all the operators that transformed OpenShift into a truly amazing product that I believe is influencing how the world of technology operates.
OpenShift and its associated products have matured tremendously. I get extremely excited about helping individuals and organizations gain a better understanding of its power and how they can rapidly harness its full capabilities, attaining full adoption of the product and achieving an Open Hybrid Cloud (OHC) strategy.
But what does open hybrid cloud mean, and what does it look like from a technical perspective?
“ … recommended strategy for architecting, developing, and operating a hybrid mix of applications, delivering a truly flexible cloud experience with the speed, stability and scale required for digital business transformation.”
Frankly, there are many technologies involved in reaching this goal, and Red Hat has many partners who can help you achieve open hybrid cloud too. But should you wish to pursue this strategy using the Red Hat suite, then you are in luck. Red Hat has invested a tremendous amount in resources and content that help audiences acquire the skill set and the mindset, and gain an immersive experience of its core value proposition by deep-diving into the practical know-how of developing, running, and operating OpenShift.
An analogy: it’s like turning a casual driver into a rally driver.
I’ll be honest, I made a crucial mistake way back: I jumped straight to the administration courses, thinking along the lines of “Hey, I’ve done Linux for many, many years, right?”. Stop right there, because OpenShift/Kubernetes is a totally different beast. My advice to my customers was to park, for now, everything we knew to allow the flow of new information to be absorbed. Otherwise, like me, you’ll get extremely confused at the very beginning because of assumptions carried over from the past.
So, here are my thoughts and experiences from the RHCA III journey. I hope this gives you clarity on why the courses are ordered this way, what the finish line looks like, and what you can expect to gain afterwards. In this article, I will try to incorporate the personas of developer, security, and operations.
In case you didn’t know, one of the many great things about becoming a Red Hat partner is free access to all of the technical materials, once you gain the two accreditations that meet the minimum eligibility criteria for a Ready Business Partner.
DO180(crawl) -> DO280(walk) -> DO288(run) -> DO380(sprint) -> DO480(marathon)
Red Hat OpenShift I: Containers & Kubernetes | DO180
If you have not done any containerization at all, only have a rough idea of what it is, or are struggling to imagine what it looks like, then this is for you. Platform owners will gain a better understanding of the developer experience, because it’s an immersive training program on how a developer builds an application from the most basic unit. Or perhaps you are contemplating containerizing applications you already know top to bottom and are ready to convert.
If you’re a developer who is eager to create very quickly using containers and wants to apply those skills in an OpenShift environment, then this is for you too. If you are an administrator or on the security team and think this is irrelevant to you, I believe this course is essential for you as well, because the skills and knowledge you acquire here are exactly what you will need once you start using operators!
The key benefit that stands out for me, and the one I have observed most in the past, is that the communication barriers between devs, sec, and admins can be reduced if these key players all enroll, because they gain a deeper, more meaningful understanding of the development life cycle and the common terminology used in this paradigm.
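To make that concrete, DO180 starts from the most basic unit: a container image built from a file like the sketch below. This is only an illustrative example (the app, file names, and port are hypothetical), but it is the kind of artifact the course has you build and run with Podman.

```dockerfile
# Hypothetical Containerfile for a small Python web app,
# starting from a Red Hat Universal Base Image.
FROM registry.access.redhat.com/ubi9/python-311

# Install dependencies in their own layer so rebuilds stay fast
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application source and define how the container starts
COPY app.py .
EXPOSE 8080
CMD ["python", "app.py"]
```

Built and run locally with something like `podman build -t myapp .` followed by `podman run -p 8080:8080 myapp`, and from there pushed into an OpenShift project.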
Red Hat OpenShift Administration II: Operating a Production Kubernetes Cluster | DO280
Whether you are a Software Architect, System Administrator, Site Reliability Engineer, or even a security person (IMHO), this one’s for you if you want to understand the architecture and how to operate an OpenShift platform. Let’s say you are a development team convinced that OpenShift is the right path for your organization; I have found there will be challenges in persuading other parts of the business to follow suit, since there are requirements you need to be able to fulfill beforehand. Or maybe you are an enthusiastic Infrastructure Architect seeking to future-proof the infrastructure, where speed, simplicity, and scalability are conducive to your requirements at maximal velocity. Or a security person who wants to understand the development and administration aspects in order to know what to protect and harden.
For example, imagine all three sitting together in a room discussing the aforementioned requirements. In the Kubernetes/OpenShift world, you can rapidly deploy and scale an application with minimal intervention while, in parallel, the infrastructure scales easily with minimal effort. You can maintain robust network policies that platform, dev, and security can all agree upon, with tools that make it easy to understand what they are trying to protect, and apply them with confidence regardless of where OpenShift resides: cloud, on-prem, virtual, etc. The best part is that the same YAMLs can be reused, forked, extended, updated, and distributed.
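A network policy like the one DO280 teaches is a small, portable YAML that all three teams can review in the same room. The sketch below is purely illustrative (the namespace, labels, and port are hypothetical), but it shows how the intent — “only frontend pods may reach the API” — reads directly off the file:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api   # hypothetical policy name
  namespace: shop               # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: api                  # this policy protects the API pods
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend     # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080
```

The same file applies unchanged whether the cluster runs in the cloud, on-prem, or on a laptop, which is exactly why the YAML can be reused and distributed.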
Red Hat OpenShift Developer II: Building Kubernetes Applications | DO288
IMHO, this is the RHCE of the containerization world and the true test of the depth and breadth of your OpenShift knowledge, especially when things get out of hand, such as:
- What should you look out for before it breaks, and what are the most common problems you can encounter when managing your application or the operators you’ve applied?
- Why does your quota strategy matter more today than tomorrow?
- How can I best communicate to end users the rationale for the default policies applied within their projects?
- What are the best practices when creating deployment files?
Or let’s say you’ve discovered a new Operator that will increase your visibility, but the documentation is too esoteric to follow. This training will ensure you can figure it out far more quickly than waiting for someone to write you a blog.
The list goes on and on, and it will surely benefit everyone. Frankly, if you’re up for the challenge, go for the exam. The finish line will feel like you’ve reached the highest of mountains, head held high and full of confidence. I can guarantee that any new enhancement in OpenShift will become a breeze, and any operator that turns OpenShift into an even more amazing product will be a cakewalk for you.
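On the deployment-file question above, the habits the course drills in can be sketched as a Deployment that declares resource requests/limits (so quotas work) and a readiness probe (so traffic waits for the app). All names, the image, and the endpoint below are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api                       # hypothetical app name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: quay.io/example/api:1.0   # placeholder image reference
          resources:
            requests:             # what the scheduler reserves
              cpu: 100m
              memory: 128Mi
            limits:               # what project quotas count against
              cpu: 500m
              memory: 256Mi
          readinessProbe:         # keep traffic away until the app is ready
            httpGet:
              path: /healthz     # hypothetical health endpoint
              port: 8080
```

Leaving out the resources block is precisely why a quota strategy bites later rather than sooner: pods without requests are the first thing a newly enforced ResourceQuota rejects.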
Red Hat OpenShift Administration III: Scaling Kubernetes Deployments in the Enterprise | DO380
This is the start of a more automated way of operating a large and complex OpenShift deployment. However, it is important to get past DO288 before you pursue this one. There will be instances in real-world environments when you have to customize OpenShift or use operators to satisfy your needs, so it is prudent to have a solid foundation from the beginning to avoid frustration.
Try imagining this: today I have 2 clusters, but tomorrow I have 10 more OpenShift clusters across different hypervisors or hyperscalers.
- Do I still have to operate them by hand, or should we start to develop our own GitOps process?
- What if security compliance changes tomorrow? How do we respond to those changes?
- If we scale more and more, what happens to everyone’s experience? How does it affect them and me? What pitfalls might occur that we, as a collective, must be aware of?
- How do we maintain ease of use and the benefits while preserving simplicity, stability, scalability, and security?
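As a taste of what a GitOps answer to the first question can look like, here is a minimal Argo CD Application sketch (OpenShift GitOps is built on Argo CD). The repository URL, path, and names are placeholders, not a prescribed layout:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-config             # hypothetical application name
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://example.com/org/cluster-config.git   # placeholder repo
    targetRevision: main
    path: overlays/production                             # hypothetical path
  destination:
    server: https://kubernetes.default.svc                # the local cluster
    namespace: openshift-config
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift on the cluster
```

With `selfHeal` on, a compliance change becomes a Git commit that every cluster converges to, instead of ten hand-applied edits.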
Multicluster Management with Red Hat OpenShift Platform Plus | DO480
IMHO, this is a preview of the OHC and where you want to end up, because it includes Red Hat ACM, Red Hat ACS, and Quay, which are the basic building blocks for achieving OHC in a Red Hat product-focused way. This course will attest that the once-complicated installation is a thing of the past, and what you’ll focus on more is the overall end-to-end architecture of your OpenShift estate. However, the course doesn’t show you Submariner (one of my favorites) for multi-cluster networking, which lets you create a bridge between clusters and move applications almost seamlessly by simply applying labels. The course doesn’t deep-dive into GitOps either; I highly recommend the free courses from Codefresh.io for GitOps.
Frankly, you can jump right into DO480 to enhance your OpenShift environment from a governance, security policy, and application deployment point of view, or if you want an idea of how these tools improve the OpenShift experience at scale. However, you will miss a lot of little details, crucial to end users’ interests, that were heavily discussed in the previous courses.
Let’s say you’re a security person who decides to push runtime policies across the entire fleet using RHACS. Have you thought about the effects on stability or speed? Would there be an impact on inter-departmental relationships if those policies were too strict? Then, as a developer, what are my options when faced with this challenge? Or perhaps you’re a platform owner who keeps spinning up new clusters without really tracking where they are. How do you ensure that application owners are using the right cluster, and are they prepared to use the labeling mechanism?
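That labeling mechanism is worth a sketch. In RHACM, clusters carry labels and workloads are steered to them by a placement rule; the sketch below is illustrative only, with hypothetical label keys and names:

```yaml
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: prod-apac-clusters     # hypothetical rule name
  namespace: shop-app          # hypothetical application namespace
spec:
  clusterSelector:
    matchLabels:
      environment: production  # hypothetical cluster labels; workloads land
      region: apac             # only on clusters carrying both labels
```

Application owners then never pick a cluster by hand; they target labels, and the platform owner keeps the labels honest as new clusters appear.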
Frankly, IMHO, this course is one of the best starting points for building DevSecOps and GitOps use cases. Referring back to the questions raised in DO380, this is the spot that provides some of the answers.
So, I’ve done everything. What now? What does the future hold?
By incorporating OpenShift into your environment now or in the future, you are no longer constrained by where an application should reside, and the skills required remain the same as long as OpenShift exists on any supported infrastructure: on-prem, cloud, bare metal, edge, etc. If you can picture it, your organization will grow organically and will likely utilize OpenShift rapidly once the value is realized. So it is prudent to build the correct foundation now for all the key players of OpenShift, so they can collaborate and communicate more effectively, armed with the solid foundation and immersive experience of the Red Hat enablement programs.
It is true that it requires a lot of prior knowledge, but Red Hat has already curated content that is conducive to your, or your organization’s, successful full adoption of OpenShift.