
  • I’m not using a private CA for my internal services, just plain self-signed certs. But if I had to, I would keep it as simple as possible at first: generate the CA cert with ansible, and use ansible to automate signing all my certs with that CA. The openssl_* modules make this easy enough. It’s not very different from my current self-signed setup; the benefit is that I’d only have to trust a single CA certificate/bypass a single certificate warning, instead of getting a warning for every single certificate/domain.
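
    Roughly, those tasks boil down to the equivalent of the following openssl commands (file names, subjects and lifetimes here are just placeholders):

      # one-time: create the CA key and self-signed CA certificate
      openssl genpkey -algorithm RSA -out ca.key
      openssl req -x509 -new -key ca.key -sha256 -days 3650 -subj "/CN=Internal CA" -out ca.crt

      # per service: key + CSR, then sign the CSR with the CA
      openssl genpkey -algorithm RSA -out myapp.key
      openssl req -new -key myapp.key -subj "/CN=myapp.internal.example.org" -out myapp.csr
      openssl x509 -req -in myapp.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -sha256 -out myapp.crt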

    If I wanted to rotate certificates frequently, I’d look into setting up an ACME server like [1], and point mod_md or certbot to it instead of the default Let’s Encrypt endpoint.
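
    For example, mod_md can be pointed at a different ACME directory with MDCertificateAuthority, and certbot has a --server flag for the same purpose (the URL below is a placeholder for whatever ACME server you run):

      # Apache/mod_md:
      MDCertificateAuthority https://acme.internal.example.org/directory

      # certbot:
      certbot certonly --standalone --server https://acme.internal.example.org/directory -d myapp.internal.example.org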

    This still does not solve the problem of how to get your clients to trust your private CA. There are dozens of different mechanisms to get a certificate into a trust store. On Linux machines this is easy enough (add the CA cert to /usr/local/share/ca-certificates/*.crt, run update-ca-certificates), but other operating systems use different methods (ever tried adding a custom CA cert on Android? It’s painful. Do other OSes even allow it?). Then some apps (web browsers, for example) use their own CA cert store, separate from the OS one… And what about clients you don’t have admin access to? etc.
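
    On Debian/Ubuntu, for instance, that boils down to this (the file must have a .crt extension for update-ca-certificates to pick it up):

      sudo cp ca.crt /usr/local/share/ca-certificates/internal-ca.crt
      sudo update-ca-certificates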

    So for simplicity’s sake, if I really wanted valid certs for my internal services, I’d use subdomains of an actual, purchased (more like renting…) domain name (e.g. service-name.internal.example.org), and get the certs from Let’s Encrypt (using the DNS challenge, or the HTTP challenge on a public-facing server and then syncing the certificates to the servers that actually need them). It’s not ideal, but still better than the certificate racket we had before Let’s Encrypt.
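
    As a rough sketch with certbot’s manual DNS challenge and rsync to push the result to the internal host (domain and host names are placeholders):

      # prove control of the domain via a DNS TXT record (certbot prompts for it)
      certbot certonly --manual --preferred-challenges dns -d myapp.internal.example.org

      # copy the resulting certificate and key to the server that actually uses them
      # (-L dereferences the symlinks in the live/ directory)
      rsync -avL /etc/letsencrypt/live/myapp.internal.example.org/ myapp-host:/etc/ssl/myapp/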



  • get the certificates from Let’s Encrypt manually

    https://httpd.apache.org/docs/2.4/mod/mod_md.html — just add MDomain myapp.example.org to your config and it will obtain Let’s Encrypt certs automatically.

    it’s kind of a pain in the ass every time I add something new.

    You will have to do some reverse proxy configuration every time you add a new app, regardless of the method (RP management GUIs are just frontends on top of the config files, and “auto-discovery” solutions like traefik/caddy require you to add your RP config as docker labels). The way I deal with it is to keep a basic RP config template for new applications [1]. Most of the time ProxyPass/ProxyPassReverse is enough, unless the app documentation says otherwise.
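
    For illustration, a minimal template combining mod_md (for the certificate) and mod_proxy (for the app) could look like this — hostname, backend port and contact email are placeholders, and it assumes mod_md, mod_ssl, mod_proxy and mod_proxy_http are loaded:

      MDomain myapp.example.org
      MDCertificateAgreement accepted
      MDContactEmail admin@example.org

      # port 80 must also reach this server for the ACME http-01 challenge
      <VirtualHost *:443>
          ServerName myapp.example.org
          SSLEngine on
          ProxyPreserveHost On
          ProxyPass        / http://127.0.0.1:8080/
          ProxyPassReverse / http://127.0.0.1:8080/
      </VirtualHost>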








  • Graylog and Elasticsearch might fit on that, depending on how much is already used and whether you set the heap sizes to their bare minimum… but it will perform badly, and it’s overkill anyway if you just need this simple stat.

    I would look into writing a custom log parser for goaccess (https://goaccess.io/man#custom-log) and let it parse your bridge logs. In the HTML report, the geolocation section then shows hits grouped by continent, and each continent can be expanded to reveal the stats by country.

    I update the report every hour via cron, as I don’t need real-time stats (but goaccess can do that).
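
    For reference, the hourly job is roughly the following — the --log-format string is only a placeholder and has to be adapted to the actual bridge log lines (see the man page link above), and it assumes goaccess was built with GeoIP support:

      #!/bin/sh
      # /usr/local/bin/bridge-report.sh - run hourly from cron, e.g. "0 * * * * root /usr/local/bin/bridge-report.sh"
      goaccess /var/log/mybridge/bridge.log \
        --log-format='%d %t %h "%r"' \
        --date-format='%Y-%m-%d' --time-format='%H:%M:%S' \
        --geoip-database=/usr/share/GeoIP/GeoLite2-City.mmdb \
        -o /var/www/html/bridge-report.html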




  • how networks work

    http://tcpipguide.com/free/index.htm — and look up terms/protocols on Wikipedia as you go.

    But as others said, I think you would learn faster if you pick a specific project and try to implement it from scratch. A Matrix server is a nice project, but it will have you dig into Matrix-specific configuration, which is not particularly relevant if you’re just trying to learn system administration and networking.

    I would start with a more “basic” project to make sure you get the fundamentals right, and document or automate (shell scripts, ansible…) every step:

    • install a virtualization platform (hypervisor)
    • create a VM and install Debian inside it, using LVM for disk management, and a static IP address
    • practice with creating/restoring snapshots, add/remove hardware and resources (vCPUs, RAM, disk storage) from the VM
    • set up an SSH server and client using SSH keys
    • set up a firewall with some basic rules (e.g. only accept SSH connections from a specific IP address and DROP all other SSH connections, forward all HTTPS connections to another IP address… see the sketch after this list)
    • set up monitoring with a few basic rules (alert if the SSH server is down, alert if disk space or free memory is low…)
    • automate security updates
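
    For the firewall item, a minimal nftables sketch could look like this (192.0.2.10 stands for the admin machine, 192.0.2.20 for the HTTPS backend; adapt it to whichever firewall frontend you end up choosing):

      # allow SSH only from the admin machine, drop other SSH attempts
      nft add table inet filter
      nft add chain inet filter input '{ type filter hook input priority 0; policy accept; }'
      nft add rule inet filter input tcp dport 22 ip saddr 192.0.2.10 accept
      nft add rule inet filter input tcp dport 22 drop

      # forward incoming HTTPS to another host (also needs net.ipv4.ip_forward=1,
      # and return traffic must route back through this box or be masqueraded)
      nft add table ip nat
      nft add chain ip nat prerouting '{ type nat hook prerouting priority -100; }'
      nft add rule ip nat prerouting tcp dport 443 dnat to 192.0.2.20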

    Then you can work your way up to more complex services, and look up security hardening measures for your existing setup (as always, document or automate all steps). To give you some ideas, you can find ansible roles I wrote for these tasks here. The common role implements most of what I listed above, and the monitoring role implements the monitoring part. There are a few other roles for middleware/infrastructure services (web server/reverse proxy, DNS server, database services, VPN…) and a few more for applications (matrix+element, gitea, jellyfin, mumble…).

    Start at tasks/main.yml for each role, follow the import_tasks statements from there, and read at least the name: of each task to get a good overview of what needs to be done, then implement it yourself from a shell the first time. If you break your setup, restore the initial VM snapshot and start again (at this point you’ve automated or documented everything, so it should not take more than a few minutes, right?).
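
    For example (a made-up skeleton, not the actual roles), a tasks/main.yml usually just pulls in one task file per area, and the name: lines read like a checklist:

      # roles/common/tasks/main.yml (illustrative skeleton only)
      - import_tasks: ssh.yml        # SSH server configuration, key-only authentication
      - import_tasks: firewall.yml   # basic firewall rules
      - import_tasks: updates.yml    # unattended security updates

      # roles/common/tasks/ssh.yml - every task gets a descriptive name
      - name: disable SSH password authentication
        ansible.builtin.lineinfile:
          path: /etc/ssh/sshd_config
          regexp: '^#?PasswordAuthentication'
          line: 'PasswordAuthentication no'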

  • Each of these tasks will require you to research the available software and decide for yourself which best fits your requirements (which hypervisor? which firewall frontend? which monitoring solution? etc.).