
#gke


Kubernetes Storage Without the Pain: Simplyblock in 15 Minutes

Whether you're building a high-performance cloud-native app or running data-heavy workloads in your own infrastructure, persistent storage is necessary. In Kubernetes, this means having storage that survives pod restarts, failures, and rescheduling events, and that's precisely what simplyblock brings to the table: blazing-fast, scalable, software-defined storage with cloud economics. A hyper-converged storage solution like simplyblock enables Kubernetes storage par excellence. In […]

simplyblock.io/blog/install-si
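For orientation, here is a minimal sketch of what consuming such storage usually looks like on the Kubernetes side: a PersistentVolumeClaim bound to a CSI-backed StorageClass, mounted by a pod. The class name `simplyblock-csi` is a placeholder of mine, not taken from the post; check simplyblock's docs for the real provisioner and class names.

```yaml
# Sketch: a PVC against a CSI-backed StorageClass. The class name
# "simplyblock-csi" is a placeholder, not confirmed from the post.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: simplyblock-csi   # hypothetical name
  resources:
    requests:
      storage: 20Gi
---
# A pod referencing the claim; the data survives pod restarts and
# rescheduling because the volume lives outside the pod's lifecycle.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /var/lib/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
```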

Continued thread

Finally, here are some practical suggestions. If you use #GKE, bring up a small cluster first just to pull containers, which are then cached across clusters. If you are using AWS, #SOCI might help if you don't need to load large amounts of data at runtime. On any cloud, an SSD is always going to help.
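On the GKE tip above: one common way to warm image caches before real workloads land is a pre-puller DaemonSet. This is a generic sketch with placeholder image names; note it warms caches per node within one cluster, while the cross-cluster caching mentioned above is behavior on GKE's side.

```yaml
# Sketch of an image pre-puller: every node pulls the listed image
# once, so later pods start without a cold pull. Image names are
# placeholders for your own workload images.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: image-prepuller
spec:
  selector:
    matchLabels:
      app: image-prepuller
  template:
    metadata:
      labels:
        app: image-prepuller
    spec:
      initContainers:
        - name: pull-app
          image: registry.example.com/my-app:latest  # placeholder
          command: ["true"]  # exit immediately; the pull is the point
      containers:
        - name: pause
          image: registry.k8s.io/pause:3.9
```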

This week’s post is the third and final in my series about running tests on #Kubernetes for each pull request. In the first post, I described the app and how to test locally using #Testcontainers and in a #GitHub workflow. The second post focused on setting up #GKE and running end-to-end tests on Kubernetes.

In this post, I’ll show how to get the best of both worlds with #vCluster: a single cluster where each PR’s tests run in complete isolation from the others.

blog.frankel.ch/pr-testing-kub
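As a rough sketch of the idea (not the post's actual pipeline): a GitHub workflow job that spins up one virtual cluster per PR might look like this. The CLI flags are from the vcluster docs as I understand them, and the manifest path is a placeholder.

```yaml
# Sketch: one vCluster per pull request, so test runs can't collide.
# Assumes the vcluster CLI is installed and the host cluster's
# kubeconfig is already configured by an earlier auth step.
name: pr-e2e
on: pull_request
jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Create an isolated virtual cluster for this PR
        run: |
          vcluster create "pr-${{ github.event.number }}" \
            --namespace "pr-${{ github.event.number }}" \
            --connect=false
      - name: Run end-to-end tests inside the virtual cluster
        run: |
          vcluster connect "pr-${{ github.event.number }}" -- \
            kubectl apply -k e2e/overlays/pr   # placeholder path
```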

I’m continuing my series on running the test suite for each PR on #Kubernetes. In the previous post, I laid the groundwork for our learning journey.

This week, I will raise the ante:

* Create and configure a #GoogleKubernetesEngine instance
* Create a Kubernetes manifest for the app, with #Kustomize for customization (see the sketch after this post)
* Allow the #GitHub workflow to use the #GKE instance
* Build the Docker image and store it in the GitHub Docker repo
* Finally, run the end-to-end test

blog.frankel.ch/pr-testing-kub
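A rough illustration of the Kustomize step from the list above; all names, paths, and tags here are my own placeholders, not taken from the post.

```yaml
# kustomization.yaml sketch for a PR overlay: reuse the base
# manifests and swap in the image just pushed to the GitHub
# registry. Names and tags are placeholders.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
images:
  - name: my-app                   # image name used in the base
    newName: ghcr.io/owner/my-app  # placeholder GitHub registry path
    newTag: pr-123                 # e.g. set per PR by the workflow
```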

Continued thread

HOLY MOLEY. I think I've done it. Given the amount of docs on this, I wouldn't be surprised to hear I'm basically the only person on the planet who's got this working!

The crucial bit I'd missed is that even though the GCE ingress terminates SSL, it will - for HTTP/2 only - re-encrypt the connection to your backend. Your backend service therefore needs to be able to talk SSL. Any old cert will do - I generated some self-signed ones.

The thing that took me a day to figure out is that a bad SSL handshake just looks like a network connection failure to the load balancer, so I spent ages doing network debugging.

What. A. Palaver.

#grpc #gke #gcp
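For anyone hitting the same wall: on GKE the backend protocol is declared on the Service via the `cloud.google.com/app-protocols` annotation, and with HTTP2 the load balancer re-encrypts, so the backend pod must terminate TLS itself, self-signed cert included. A minimal sketch with placeholder names:

```yaml
# Sketch: tell the GCE ingress to speak HTTP/2 (hence TLS) to the
# backend. The port name in the annotation must match the Service
# port name. Names and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: grpc-backend
  annotations:
    cloud.google.com/app-protocols: '{"grpc": "HTTP2"}'
spec:
  type: NodePort        # GCE ingress needs NodePort or NEGs
  selector:
    app: grpc-backend
  ports:
    - name: grpc        # must match the key in app-protocols
      port: 443
      targetPort: 8443  # container must terminate TLS here,
                        # even with a self-signed cert
```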

⚠️ Heads up #velero users!

If you're relying on velero with #prometheus metric-based alerts, make sure to not only alert on
`velero_backup_failure_total` but also on
`velero_backup_partial_failure_total` 🧐

After running velero reliably for years on #gke our backups suddenly started failing *partially*.

Turns out #GCP must have changed something in AuthN, requiring an additional role to perform disk snapshots.
As this resulted in partial failures only we almost missed it.
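A sketch of what alerting on both metrics might look like as a PrometheusRule; the thresholds, windows, and severities are placeholders to adapt, while the metric names are the ones named above.

```yaml
# Sketch: alert on both full and partial velero backup failures.
# Thresholds, durations, and labels are placeholders.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: velero-backup-alerts
spec:
  groups:
    - name: velero
      rules:
        - alert: VeleroBackupFailure
          expr: increase(velero_backup_failure_total[1h]) > 0
          labels:
            severity: critical
        - alert: VeleroBackupPartialFailure
          expr: increase(velero_backup_partial_failure_total[1h]) > 0
          labels:
            severity: warning  # partial failures are easy to miss
```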

Continued thread

So I couldn't find any way to reduce the ephemeral storage requested by either the mysql service or my own image :(

Any help with #Gitlab Runners' services config on #k8s (#gke autopilot, to be specific) would help.

tl;dr: if I include a mysql service in one of my jobs, the job doesn't run because the cluster refuses to create a pod with ephemeral storage bigger than 10Gi (which I definitely don't need)
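Not a definitive fix, but these look like the relevant knobs: the runner's Kubernetes executor exposes ephemeral-storage requests and limits for the build, helper, and service containers. A sketch via the Helm chart's values follows; the key names are as I read the runner docs, so verify them, and the sizes are placeholders.

```yaml
# Sketch of GitLab Runner Helm values: pin ephemeral-storage
# requests/limits so Autopilot doesn't reject the pod. Sizes are
# placeholders; check key names against the Kubernetes executor docs.
runners:
  config: |
    [[runners]]
      [runners.kubernetes]
        ephemeral_storage_request = "1Gi"
        ephemeral_storage_limit = "4Gi"
        service_ephemeral_storage_request = "1Gi"
        service_ephemeral_storage_limit = "4Gi"
        helper_ephemeral_storage_request = "500Mi"
        helper_ephemeral_storage_limit = "1Gi"
```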

Is anybody around here using #gitlab runners deployed on #GKE? We have an autopilot cluster and we've been using gitlab's helm chart for a long time, and everything was great. For the last few weeks, there's been one specific job in our pipelines that keeps failing with

"Total ephemeral-storage requested by containers for workload 'xxxx-yyyy-zzzzz' is higher than the Autopilot maximum of '10Gi'."

But I can't find anywhere that we request a specific amount of ephemeral-storage.

We just published a blog post on how insecure default settings in Google Kubernetes Engine (GKE) can be exploited to gain control over cloud environments. Learn how chaining multiple vulnerabilities can lead to significant risks and discover practical tips for securing your GKE clusters. Don't miss out on our detailed attack chain analysis and essential recommendations for robust GKE security.
Read the full post here: assured.se/posts/exploiting-in

www.assured.se · Exploiting insecure GKE (Google Kubernetes Engine) defaults - Assured Security Consultants: This blog post will guide you through an attack chain exploiting insecure defaults in GKE, and explain how to harden a Kubernetes cluster to reduce the risk of compromise.