Once upon a time in a simpler world, there were but system admins. All these middleware, DNS, SMTP, firewall, network and security admins hadn’t yet splintered off: the sysadmin was responsible for it all, gluing teams together, making a system out of discrete parts built by engineers and DBAs. Rare is the organization which has reached the scale to benefit from this specialization. More commonly, we poor moderns create systems too complex to wrap our heads around or escape (shackled to the costliest compute known to man).
tl;dr: Kubernetes isn’t for everyone. It:
- has a questionable cost-benefit ratio at small scale (e.g. the stereotypical resume-driven developer running a pair of containers on it…)
- moves quickly, with breaking changes, a clear turn-off
- peer pressure encourages overengineering. Gatekeepers encourage service meshes, ingress, etc. (which also leads to pushback)
- demands a specific development lifecycle (single images, microservices…) to allow easy EKS management, while most software is legacy, lest the asset become a liability.
But I am a fair man and can admit to some benefits at scale. EKS:
- scales across regions, which ECS cannot
- runs up to 737 pods/instance, where ECS “only” allows 120 tasks/instance
- provides extended health checks (readiness, startup and liveness probes)
- offers useful metrics (pod restarts, OOM events, etc.), while ECS is limited to basic CPU, memory and task counts at the cluster/task level.
- supports AWS/k8s plugins, pre- and post-deploy jobs, Helm charts and cron schedulers, helping with data migrations
- delivers higher security with RBAC
- locks you in, atrophying your in-house system admin and operations skills. (EKS can further atrophy your Kubernetes skills, locking you in further!) Remember, it’s a complex platform geared towards complex problem spaces. Other cloud native products exist to run containers, with easier setups, management stories, learning curves…
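To make the extended health checks above concrete, here is a minimal sketch of a pod spec wiring up all three probe types. The app name, image, port and endpoint paths are hypothetical; ECS, by contrast, offers only a single container health check.

```yaml
# Hypothetical pod spec illustrating the three Kubernetes probe types.
apiVersion: v1
kind: Pod
metadata:
  name: example-app            # hypothetical name
spec:
  containers:
    - name: app
      image: example/app:1.0   # hypothetical image
      ports:
        - containerPort: 8080
      startupProbe:            # gates the other probes while a slow app boots
        httpGet:
          path: /healthz       # hypothetical endpoint
          port: 8080
        failureThreshold: 30
        periodSeconds: 10
      readinessProbe:          # controls whether the pod receives traffic
        httpGet:
          path: /ready         # hypothetical endpoint
          port: 8080
        periodSeconds: 5
      livenessProbe:           # restarts the container if it hangs
        httpGet:
          path: /healthz
          port: 8080
        periodSeconds: 10
```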
Kubernetes' chief benefit is enabling cloud provider agnosticism/multicloud, admittedly quite useful in this cost environment.
Alas, even in the appropriate (massive) contexts… DevOps itself has succumbed to enshittification, with many jobs focused on K8s alone, without talking to engineering. How precious few jobs actually involve the engineering, culture etc. (or even just writing code with actual tests) needed to deliver its promises? And aren’t just wrangling DSLs/yaml? Matt Rickard’s covered this further: https://matt-rickard.com/advanced-configuration-languages-are-wrong
Yeah, perhaps I’m blaming Kubernetes unfairly, but this is propaganda!