A real-world production migration from DigitalOcean to Hetzner dedicated, handling 248 GB of MySQL data across 30 databases, 34 Nginx sites, GitLab EE, Neo4j, and live mobile app traffic — with zero downtime.
Not a sysadmin but just a hobbyist: is it ok to have such a large install on bare metal and not containerized?
For example the issue of MySQL 5 being unavailable would be a non-issue with a container
One thing is that I don't know for sure if it is containerized or not. The topic was migration, and that facet isn't relevant to the core. When I'm doing a write-up of things like this, I tend to omit details like that unless they're core to the subject at hand. That includes replacing a funky ingress situation with a more universally recognizable nginx example; users of a container setup would understand how to translate it to their scenario.
For another, I'll say that I've probably seen more people get screwed up because they didn't understand how to use containers and used them anyway. Most notably, they make their networking needlessly convoluted and then can't understand it. Also, when they mindlessly split a flow into "microservices", they get lost debugging it.
They are useful, but I think people might do a lot better if they:
More carefully considered how they split things up
Went ahead and used host networking; it's pretty good
Remembered that unix domain sockets can be your friend instead of binding to TCP for everything. I much prefer reverse proxying to a unix domain socket over juggling IPs/ports, which is what container networks buy most people, but that flow is too gnarly (see the sketch after this comment)
Were wary of random Docker Hub "appliances"; they tend to be poorly maintained
If you are writing in Rust or Go, containers might not really buy you much other than a headache, so long as you use distinct users for security isolation. For something like Python, it might be a more thorough approach than virtualenv, though I wouldn't want to keep a Python stack maintained given how fickle the ecosystem is. Node is pretty much always "virtualenv"-like, but even worse for fickle dependencies.
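To make the unix domain socket point concrete, here is a minimal sketch (not from the thread; the socket path and handler are made up): a toy service that listens on a socket path instead of a TCP port. A reverse proxy such as nginx can forward requests to that path, so nothing has to keep track of IPs and ports.

```python
# Minimal sketch (illustrative): a service bound to a unix domain socket
# instead of a TCP port. A reverse proxy forwards traffic to the socket path,
# so there is no IP/port bookkeeping. Path and handler are hypothetical.
import os
import socketserver

SOCKET_PATH = "/tmp/demo-app.sock"  # hypothetical path; the proxy points here

class LineEchoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # A real app would speak HTTP here; this just echoes one line back.
        line = self.rfile.readline()
        self.wfile.write(b"echo: " + line)

if os.path.exists(SOCKET_PATH):
    os.unlink(SOCKET_PATH)  # remove a stale socket left by a previous run

with socketserver.UnixStreamServer(SOCKET_PATH, LineEchoHandler) as server:
    os.chmod(SOCKET_PATH, 0o660)  # restrict who can connect, e.g. the proxy's group
    server.serve_forever()
```

The same idea works for app servers like gunicorn or php-fpm, which can bind to a unix socket natively.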
They wrote:
"Being stuck on CentOS 7 meant we were also stuck on MySQL 5.7"
If they used some kind of containerization, the native packages available on the hosts would not affect the specific version of MySQL they want to use.
I suppose they might have used a shared MySQL instance alongside containerized infrastructure, or a crufty base image for their container(s)… But you do raise a pretty good indicator that at least one key piece is not running in a container.
But I'm not going to judge too hard on container vs. no container. The vintage of the platform is broadly problematic either way. I've seen, particularly in enterprise IT, some shockingly old container base images, with teams unwilling to refresh them because "they work".
In fact, teams that once would have been forced to rebase their crufty dependencies every so often, because those were bundled with an unacceptable OS, now gleefully push their ancient 12-year-old stack because containers let it keep running no matter what kernel is underneath.
Totally fine. Containerization comes at a cost too. It’s a matter of system design, knowing your risks and complexities, and handling them accordingly.
At that size, before reaching for containerization, I'm wondering whether these services aren't independent enough to be split onto multiple servers.
Having everything together reduces system complexity in some ways, but not in others.
Bare metal is definitely making a comeback. But you should have some way of orchestrating the deployment as code regardless.
Yes it’s ok, in general. It’s not the most modern or efficient way of managing infrastructure but it’s worked for decades now. It all depends on what you’re hosting, for who, and for how many people.
If you're hosting internal company infrastructure for a relatively static number of users in a single region or a set few regions, bare metal monolithic stuff is absolutely fine. It's when you're an app or service company, your infrastructure is the back end for a public service that needs to scale dynamically, you're worried about high 24/7 uptime, and latency to end users is a global concern that things like microservice architecture, containerization, and IaC start becoming important.
The whole containerization craze is important for microservices architecture, where you split your app into different pieces. This lets you scale different parts of your app as needed, prevents the entire app from failing just because one part of it failed, allows for lifecycle management like blue/green deployments with no downtime, and lets developers work on different parts of the app and ship at a faster cadence than one big release of the whole thing every time you update one small piece, things like that.
So people careless enough to “just container it” for old, possibly security-compromised software - you call that a “non-issue”? How about upgrading and configuring for compatibility?
They're the ones running a 10-year-old database on an 11-year-old OS on a public-facing server "because it just works", not me.
If it was a container, they could have just tagged a new version when the database went EOL 5 years ago, without being locked to whatever the package manager was offering.
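A minimal sketch of that idea (not from the thread; it assumes the Docker SDK for Python, and the container name, volume path, and password are placeholders): the MySQL version becomes an image tag rather than whatever the host's package manager ships. The data migration between major versions still has to be planned separately.

```python
# Hypothetical sketch: pinning the database version as an image tag with the
# Docker SDK for Python. Bumping "mysql:5.7" to "mysql:8.0" is independent of
# what the host OS packages offer. Names, paths and the password are made up.
import docker

client = docker.from_env()

client.containers.run(
    "mysql:8.0",                       # was "mysql:5.7"; host repos don't matter
    name="app-db",
    detach=True,
    restart_policy={"Name": "unless-stopped"},
    environment={"MYSQL_ROOT_PASSWORD": "change-me"},
    volumes={"/srv/mysql-data": {"bind": "/var/lib/mysql", "mode": "rw"}},
    ports={"3306/tcp": ("127.0.0.1", 3306)},  # keep it off the public interface
)
```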
Because they used MySQL 5 on CentOS 7 from the package manager and couldn’t easily upgrade
My point was that they upgraded to a newer database (also old, but newer), which is arguably better than containerization.
With a deployment this small you're just moving your issue to the containerisation layer, unless you use some SaaS Kubernetes or other managed solution.
Don’t quote me, but as far as I know containers can’t fix the issue if the host kernel is too old.
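That's correct as far as the kernel goes: a container shares the host's kernel, so an old host kernel shows through no matter how new the image is. A quick sketch to see it, assuming the Docker SDK for Python (the image name is arbitrary):

```python
# Quick check: a container shares the host kernel, so `uname -r` inside any
# image reports the host's kernel version. Image name is arbitrary.
import platform
import docker

client = docker.from_env()
inside = client.containers.run("alpine:3.19", "uname -r", remove=True)

print("host kernel:     ", platform.release())
print("container kernel:", inside.decode().strip())  # same value as above
```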
you can just set up containers on your bare metal server. in fact if you're going to install insecure services you definitely want to containerize them, though tbh you need to run really far away from whatever it is you're doing that requires MySQL 5, or at least don't let it be reachable on the internet; it should be network-isolated, which really limits its utility.
While this is true, if you're running a platform where containers run as root by default (looking at you, Docker), you're not shielding yourself as much as you might think you are.
If you're running an insecure app as root, you'd better hope attackers don't also have an exploit to get out of the container once the app is popped, otherwise you're fucked.
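For what it's worth, none of the mitigations are on by default; you have to opt in. A minimal sketch (not from the thread, using the Docker SDK for Python; the image name is a placeholder) of running the thing as a non-root user with reduced privileges:

```python
# Hypothetical hardening sketch: the Docker default is root inside the
# container, so an app exploit plus a container-escape bug is game over.
# Opting in to a non-root user and reduced privileges narrows the blast radius.
import docker

client = docker.from_env()

client.containers.run(
    "legacy-app:latest",          # placeholder image name
    detach=True,
    user="10001:10001",           # non-root UID:GID instead of the default root
    cap_drop=["ALL"],             # drop Linux capabilities the app doesn't need
    security_opt=["no-new-privileges:true"],
    read_only=True,               # root filesystem mounted read-only
    network="none",               # no network at all if it doesn't need one
)
```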
Wha?
You do realize there are plenty of bare metal infrastructure deployments out in the world, yeah? Being in a container solves no problems in this scenario at all.
They may not. Hence hobbyist. Relax.