I’m not really sure how to ask this because my knowledge is pretty limited. Any basic answers or links will be much appreciated.
I have a number of self-hosted services on my home PC. I’d like to be able to access them safely over the public Internet. There are a couple of reasons for this. There is an online calendar scheduling service that I would like to give access to my CalDAV/CardDAV setup. I’d also like to set up Nextcloud, which seems to more or less require HTTPS. At the moment I am using HTTP connections secured through Tailscale.
I own a domain through an old Squarespace account that I would like to use. I currently have zero knowledge or understanding of how to route my self-hosted services through the domain I own, or even whether that’s the correct way to set it up. Is there a step-by-step guide for beginners on how to access my home setup through my own domain? Should I move the domain from Squarespace to another provider that is better equipped for this type of setup?
Is this a bad idea for someone without much experience in networking in general?
Love docker. Updating has never been easier.
I actually wanted to ask about that… Is it considered best practice to run a bunch of different compose files, and update them all separately? Or do you just throw all of them into a single compose file, and refresh the entire stack when updating?
The latter definitely seems like it would be more streamlined in terms of updating, but could potentially run into issues as images change. It also feels like it would result in a bunch of excess pulls. Maybe only two images out of a dozen need to be updated, but you just pulled your entire stack. Maybe you want to stay on a specific version of one container, while updating all the others. Sure you could go edit the version number in the compose, but that means actually remembering to edit the compose before you update.
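To make that last point concrete, I mean the difference between something like this in the compose file (service and image names made up for illustration):

```yaml
services:
  app1:
    image: someimage:1.2.3    # pinned: stays on 1.2.3 until you edit this tag
  app2:
    image: otherimage:latest  # floats: grabs whatever "latest" is on every pull
```

(I know you can also pull a single service with `docker compose pull app2`, but that still means keeping track by hand of which ones to skip.)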
tl;dr I do one compose file per application/folder because I found that to suit me best.
I knew about docker and what it was for a long time, but only recently started to use it (past year or so), so I’m no expert. Before docker, I had one VM for each application I wanted, and if I messed something up (installed something and it broke, or similar), I just removed the entire VM and made a new one. That also came with the problem that every VM needed to be stopped before the host could be shut down, and startup took more work to ensure everything came up correctly.
Here is a sample of my layout:
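Something like this (paths and app names are placeholders, matching the update script below):

```
/path/to/
├── app1/
│   └── docker-compose.yml   # one compose file per application
├── app2/
│   └── docker-compose.yml
└── app3/
    └── docker-compose.yml
```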
I considered using one compose file and putting everything in it, but opted instead to use one file for each project. Using one compose file for everything would make it difficult to stop just one application. And by having it split into separate folders, I can just remove everything in a folder if I mess up and start a new container, as shown below.
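With the split layout, operating on a single app is just a matter of which folder you are in (illustrative paths again):

```bash
# stop just one application without touching the others
cd /path/to/app2
docker compose down

# or nuke a broken app and start fresh (-v also removes its named volumes)
docker compose down -v
docker compose up -d
```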
As for updating, I made a script that pulls everything:
```bash
#!/bin/bash

# stop, pull, and restart the compose project in the given folder
function docker_update {
    cd "$1" || return
    docker compose down && docker compose pull && docker compose up -d
}

docker_update "/path/to/app1"
docker_update "/path/to/app2"
docker_update "/path/to/app3"
```
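(Strictly speaking, the `down` isn’t required — `docker compose pull && docker compose up -d` will recreate only the containers whose images actually changed — but I like everything coming up from a clean stop.)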
Here is a small sample from my n8n compose file (not the complete file):
```yaml
services:
  db:
    container_name: n8n-db
    image: postgres
    ...
    networks:
      - n8n-network

  adminer:
    container_name: n8n-db-adminer
    image: adminer
    restart: unless-stopped
    ports:
      - 8372:8080
    networks:
      - shared-network
      - n8n-network

  n8n:
    container_name: n8n
    networks:
      - n8n-network
      - shared-network
    depends_on:
      db:
        condition: service_healthy

volumes:
  db_data:

networks:
  n8n-network:
  shared-network:
    external: true
```
`shared-network` is shared between Caddy and any container I need to access externally (through the reverse proxy), and then each application gets its own network (like `n8n-network`) that is shared between that application’s containers.
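On the Caddy side it then just proxies to the container name over `shared-network`; a minimal sketch (the hostname is made up, 5678 is n8n’s default port):

```
# Caddyfile — Caddy resolves "n8n" via Docker's DNS on shared-network
n8n.example.com {
    reverse_proxy n8n:5678
}
```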