Ok, have some (anecdotal) data!
It can be easily bypassed by telling the Docker daemon, in its JSON configuration file, not to modify iptables rules. I have done so on two deployments where I wanted to try the paranoid way (there were a bunch of rules for dropping packets that didn’t come from Cloudflare). I think it’s useful to be able to do it, but it isn’t my go-to option. Nowadays I much prefer to define specific networks that I flag as internal and configure routing between them on a per-container basis.
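For reference, this is the daemon setting I mean (the file usually lives at `/etc/docker/daemon.json`, and the daemon needs a restart afterwards):

```json
{
  "iptables": false
}
```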
Simple example that I’ve been using for several months:
a proxied network behind an nginx reverse proxy: it’s internal, so nothing there can communicate with the outside; the nginx container also has a leg in a normal network so it can receive traffic from the outside.
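In compose terms the pattern looks roughly like this (a sketch, all names are illustrative):

```yaml
version: "2"
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
      - "443:443"
    networks:            # nginx has a leg in both networks
      - web
      - proxied
  app:
    image: some/app      # hypothetical backend service
    networks:
      - proxied          # internal only: no route to the outside
networks:
  web: {}                # normal bridge network, outside access
  proxied:
    internal: true       # Docker blocks external routing for this network
```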
Have a look at jwilder’s nginx-proxy: it’s great and supports running nginx and docker-gen as separate containers, so the one listening on the Docker socket isn’t exposed to the internet. I’ve been using it along with docker-gen and the nginx-proxy letsencrypt companion to easily deploy HTTPS and Tor hidden services (yup, it’s that flexible). Usually it’s the first container I run on servers that I think will run anything over HTTP/HTTPS. Using external networks as described before allows me to basically do plug and play with containers.
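A rough sketch of the split setup, from memory — the image names are real, but check the docker-gen command line and template path against the nginx-proxy README before using it:

```yaml
version: "2"
services:
  nginx:                 # internet-facing, never sees the Docker socket
    image: nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - conf:/etc/nginx/conf.d
  dockergen:             # watches the socket, regenerates the nginx config
    image: jwilder/docker-gen
    command: -notify-sighup nginx -watch /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - conf:/etc/nginx/conf.d
      - ./nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro
volumes:
  conf: {}
```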
Personal experience: I used it to deploy a WordPress instance, then a personal Redmine, then some test website for a friend; it all went up automagically and everything got a certificate from Let’s Encrypt without any fuss or headache.
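With that stack running, plugging in a new service is mostly a matter of attaching it to the proxy’s network and setting a couple of environment variables. A hypothetical compose file for the WordPress case (host names and network name are illustrative, database service omitted):

```yaml
version: "2"
services:
  wordpress:
    image: wordpress
    environment:
      - VIRTUAL_HOST=blog.example.com        # picked up by docker-gen
      - LETSENCRYPT_HOST=blog.example.com    # picked up by the companion
      - LETSENCRYPT_EMAIL=admin@example.com
    networks:
      - proxied
networks:
  proxied:
    external: true   # created beforehand, shared with the proxy stack
```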
I’ve had bad experiences with storing data on the host with bind mounts, so I don’t use them if I can get away with it. My advice: use Docker volumes. They are easy to set up and easy to back up (I wrote two quite short Ansible roles whose only goal is to back up Docker volumes, either to tar.bz2 or to a Borg repository; both work well and I use them extensively when migrating from one server to another). So my advice here: allow the user to specify a number of Docker volumes to be backed up/restored and don’t go any deeper, because you don’t want to have to mess with ownership and rights for many containers that might run as different users.
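The volume backups themselves don’t need anything fancy; the usual trick is a throwaway container that mounts the volume read-only and tars it up. A sketch (the volume name is hypothetical, and I use gzip here since stock Alpine tar doesn’t compress to bz2 out of the box):

```sh
# Back up the named volume "mydata" to ./mydata.tar.gz
docker run --rm \
  -v mydata:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/mydata.tar.gz -C /data .

# Restore it into a (possibly freshly created) volume
docker run --rm \
  -v mydata:/data \
  -v "$(pwd)":/backup \
  alpine tar xzf /backup/mydata.tar.gz -C /data
```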
My 2 cents; keep in mind that the deployments I talk about are only for services where the main goal was to have them running quickly, and backed up and restored easily in case something went wrong. If you want to set up NethServer as a node in a cluster (Swarm, Kubernetes or anything else) then your end setup would look different, and I’d advocate using other facilities for backups, monitoring and such. Also, those deployments took place on servers where EVERYTHING was in a Docker container. No service running directly on the host.