Kubernetes


Welcome to the Kubernetes community, home of the CNCF graduated project.

founded 4 years ago
1
 
 

Do you know about Kubernetes debug containers? They're really useful for troubleshooting well-built, locked-down images running in your cluster. I was thinking it would be nice if k9s had this feature, and lo and behold, it has a plugin! I just had to add that snippet to my ${HOME}/.config/k9s/plugins.yaml, run k9s, find the pod, press Enter to list the pod's containers, select a container, and press Shift-D. The debug-container plugin uses the nicolaka/netshoot image, which comes with a bunch of useful networking and debugging tools. Easy debugging in k9s!
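The post doesn't include the snippet itself, but as an illustration, a k9s plugin entry along these lines would wire Shift-D up to kubectl debug (this is a hypothetical sketch based on the k9s plugin mechanism, not the poster's actual configuration; the key binding and flags are assumptions):

```yaml
# ${HOME}/.config/k9s/plugins.yaml -- hypothetical sketch, not the poster's snippet
plugins:
  debug:
    shortCut: Shift-D
    description: Attach an ephemeral debug container
    scopes:
      - containers
    command: bash
    background: false
    confirm: true
    args:
      - -c
      - kubectl debug -it -n $NAMESPACE --context $CONTEXT $POD --target=$NAME --image=nicolaka/netshoot -- bash
```

k9s substitutes variables like $NAMESPACE, $POD, and $NAME with the selected resource, so pressing Shift-D on a container drops you into a netshoot shell sharing that container's process namespace.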

2
 
 

I got a Kubernetes survey from The Linux Foundation today and it had a question that asked how Kubernetes impacted my professional life. Good question. Here's my answer:

Kubernetes renewed my love for infrastructure and DevOps! As cloud platforms grew more popular, my job function changed; instead of managing services, I was managing "managed" services. I was losing the ability to actually build infrastructure with free, open source software (FOSS), and I was forced to become an expert in proprietary solutions. I switched to FOSS years ago because the idea of writing software for anyone in the world to use and modify - for free - was really inspiring! When I discovered Kubernetes, I was able to work with FOSS infrastructure again! Once I have compute, storage, and networking from the cloud platform, I can install Kubernetes, and actually manage my own services again!

3
 
 

I've been working with Kubernetes since 2015. I've wrangled handcrafted manifests (including nearly duplicate manifests for staging/production environments), played around with tools like CUE, built lots of glue (mostly shell scripts) to automate manifest handling and generation, and I also enjoy parts of Kustomize. When Helm first appeared it seemed like a terrible hack, especially since it came with the Tiller dependency for handling Helm-managed state changes inside the cluster. And while Tiller was (thankfully) dropped, I still haven't made my peace with Helm.

Go templating is awful to read, a lot of Helm charts don't really work out of the box, charts can be fed values that aren't shown via helm show values ./chart, debugging a "HelmChart $namespace/$release-$chartname is not ready" error involves going over multiple logs spread across different parts of the cluster, and I could go on and on. And yet, almost every project that goes beyond offering a Dockerfile plus docker-compose.yaml just releases a Helm chart for its app.

Am I the only one who is annoyed by Helm? Have I been using it wrongly? Is there something I've been missing?

In case you're a Helm maintainer: Please don't take it personally, my issue is not with the people behind Helm!

4
 
 

The KBOM project provides an initial specification in JSON and has been built for extensibility across various cloud service providers (CSPs) as well as DIY Kubernetes.

6
 
 

Hello world!

I want to release my custom immutable, rolling-release, extremely simple Linux distribution for Kubernetes deployments to the internet.

I've been using this distribution in production environments for about the last 6 years (it's currently used by a few startups and two countries' public services). I really think it could be stable enough to be published before 2024.

I'm asking for advice before the public release, on topics such as licensing, community building, etc.

A few specs about the distribution:

  • Rolling release. Just one file (currently less than ~40 MB) that can boot from BIOS or UEFI (+ Secure Boot) environments. You can replace this file with the next release or use the included toolkit to upgrade the distribution (reboot/kexec it). Distribution releases are mostly automated, triggered by each upstream release (Linux, systemd, containerd, kubeadm, etc.).

  • HTTP setup. The initial setup can be configured with a YAML file placed anywhere on a FAT32 partition, or through a local web installer. You can install the distribution or configure kubeadm (control plane & worker) from the terminal or the local website.

  • Simple, KISS. Everything must be simple for the user; this is the most important aspect of the distribution. Just upstream software to run a production-ready Kubernetes cluster.

  • Not money-driven. This distribution must be public, and anyone must be able to fork it at any time.
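Since the distribution just ships upstream kubeadm, the YAML-driven setup above can be pictured with a stock kubeadm configuration file. This uses upstream kubeadm's own v1beta3 config format as an illustration; the distribution's actual YAML schema is not shown in the post, and the version and subnet values here are placeholders:

```yaml
# Illustration only: upstream kubeadm config (kubeadm.k8s.io/v1beta3),
# not the distribution's own setup-file schema.
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.2    # placeholder version
networking:
  podSubnet: 10.244.0.0/16    # placeholder pod CIDR
```

A control-plane node would then be initialized with "kubeadm init --config <file>", and workers joined with "kubeadm join".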

A bit of background:

I was using CoreOS before Red Hat bought them. I liked the immutable-distro and A/B release aspects of CoreOS. After the Red Hat acquisition, the CoreOS distribution became bloated, so I switched to my own distribution, built with Buildroot. A few years later, I set up the most basic framework to create a Kubernetes cluster without any headache. It's mostly automated (bots check for new upstream releases of Linux, systemd, containerd, kubeadm, etc., then build, test & sign each release). I already knew that maintaining a distribution is too expensive, so I programmed a few bots to do this job for me. Nowadays, I only improve the toolkits and approve the Git requests from those bots.

Thank you for your time!

7
 
 

Looking for the best way to learn Kubernetes, given that I have plenty of years of engineering experience (Java, Python) and solid experience with AWS.

Any format works - paid/free courses, working through articles, getting-started guides, etc.

8
 
 

For the benefit of anyone who needs to go back to the basics. It's certainly a need I sense in the Kubernetes community around me.

9
 
 

Tried it out over the past couple of days to manage k8s volumes and backups on S3, and it works surprisingly well out of the box. Context: k3s running on multiple Raspberry Pis.

10
 
 

CNCF has posted the playlist of all the talks from the 2022 KubeCon conference in Detroit.

11
 
 

Kubernetes Ingress Controllers Compared. (Warning: the link takes you to Google Docs.)