• jj4211@lemmy.world
    6 hours ago

    One thing is that I don’t know for sure if it is containerized or not. The topic was migration, and that facet would not be relevant to the core. When I’m doing a write-up of things like this, I tend to omit details like that unless they are core to the subject at hand, including replacing a funky ingress situation with a more universally recognizable nginx example. The users of a container setup would understand how to translate it to their scenario.

    For another, I’ll say that I’ve probably seen more people get screwed up because they didn’t understand how to use containers but used them anyway. Most notably, they make their networking needlessly convoluted and then can’t understand it. Also, when they mindlessly divide a flow into “microservices”, they get lost in debugging.

    They are useful, but I think people might do a lot better if they:

    • More carefully considered how they split things up
    • Went ahead and used host networking; it’s pretty good
    • Used unix domain sockets instead of binding to tcp for everything. I much favor reverse proxying to a unix domain socket instead of handling IP/ports, which is what container networks buy most people, but the flow is too gnarly
    • Were wary of random dockerhub “appliances”; they tend to be poorly maintained
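
    To make the unix domain socket point concrete, here’s a minimal sketch of an nginx reverse proxy talking to a socket instead of an IP/port. The upstream name, socket path, and server_name are all hypothetical; adjust to your own app:

    ```nginx
    upstream myapp {
        # Backend listens on a unix socket, not a TCP port,
        # so there's no container network or port mapping to reason about.
        server unix:/run/myapp/myapp.sock;
    }

    server {
        listen 80;
        server_name app.example.com;

        location / {
            proxy_pass http://myapp;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
    ```

    The only moving parts are filesystem permissions on the socket, which are usually easier to debug than a bridge network.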

    If you are writing in Rust or Go, containers might not really buy you much other than a headache, so long as you use distinct users for security isolation. For something like Python, it might be a more thorough approach than virtualenv, though I wouldn’t like to keep a Python stack maintained given how fickle the ecosystem is. Node is pretty much always “virtualenv”-like, but even worse for fickle dependencies.
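
    As a sketch of the “distinct users” approach for a static Rust/Go binary, assuming systemd; the unit name, user, and binary path are hypothetical:

    ```ini
    # /etc/systemd/system/myapp.service (illustrative)
    [Unit]
    Description=myapp (static binary, no container)
    After=network.target

    [Service]
    # Run as a dedicated unprivileged user for isolation
    User=myapp
    Group=myapp
    ExecStart=/usr/local/bin/myapp
    # Extra sandboxing systemd gives you without a container runtime
    NoNewPrivileges=yes
    ProtectSystem=strict
    ProtectHome=yes

    [Install]
    WantedBy=multi-user.target
    ```

    You get much of the practical isolation people reach for containers for, with a flat, greppable process and network view.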

    • Wispy2891@lemmy.world
      8 hours ago

      One thing is that I don’t know for sure if it is containerized or not

      They wrote:

      Being stuck on CentOS 7 meant we were also stuck on MySQL 5.7

      If they used some kind of containerization, the native packages available on the host would not affect the specific version of MySQL they want to use.
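
      For example, a hedged sketch of what that would look like; the container name, password, and data path are placeholders, and 5.7 is the version from the post:

      ```shell
      # The host can stay on CentOS 7 (or anything else);
      # the container image ships its own MySQL 5.7.
      docker run -d --name db \
        -e MYSQL_ROOT_PASSWORD=changeme \
        -v /srv/mysql-data:/var/lib/mysql \
        mysql:5.7
      ```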

      • jj4211@lemmy.world
        6 hours ago

        I suppose they might have used the common mysql instance for containerized infrastructure, or a crufty base image for their container(s)… But you do raise a pretty good indicator that at least one key thing is not running in a container.

        But I’m not going to judge too hard on container/no container. The vintage of the platform is broadly problematic either way. I’ve seen, particularly in enterprise IT, some shockingly old container base images, with teams unwilling to refresh them because ‘they work’.

        In fact, teams that would once have been forced to rebase their crufty dependencies every so often, because those were bundled with an OS that aged out of support, now gleefully push their ancient 12-year-old stack because containers let it keep running no matter what kernel is underneath.