• 0 Posts
  • 2 Comments
Joined 1 year ago
Cake day: October 22nd, 2023

  • Let’s Encrypt uses what is called the “ACME protocol” to prove ownership of your domain when generating certificates.

    There are various challenges they use to prove ownership of the domain. The default one just places a special file on your web server that Let’s Encrypt then reads.
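    For reference, a rough sketch of what that default (HTTP-01) check looks like from the outside (hostname and token are just placeholders):

    ```
    # Let's Encrypt validates the default HTTP challenge by fetching a token
    # file from your web server at the ACME "well-known" path over plain HTTP.
    # TOKEN is issued by the ACME server during the challenge (placeholder here).
    curl "http://www.example.com/.well-known/acme-challenge/$TOKEN"
    ```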

    However there are a number of different types of challenges.

    If you don’t want to expose anything to the internet then a common one to use is ‘DNS Challenge’.

    With the DNS challenge, certbot uses your DNS server/provider’s API to create DNS records in response to the challenge. Let’s Encrypt reads the special TXT record and verifies that you own the domain.
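    As a sketch of what that record looks like (home.example.com is just a placeholder domain), the challenge TXT record lives under a well-known _acme-challenge name that you can check yourself:

    ```
    # While a DNS-01 challenge is pending, a TXT record with a one-time token
    # sits under the _acme-challenge label of the domain being validated.
    dig TXT _acme-challenge.home.example.com +short
    ```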

    So to use this you need two things:

    1. A DNS domain

    2. A DNS provider with an API that certbot can use.

    AWS Route53 is a good one to use, but I have also used Digital Ocean’s free DNS service, BIND servers, Njalla, and other things. Most commonly used DNS providers are supported one way or another.
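    As a rough sketch of what that looks like with certbot and Route53 (this assumes the certbot-dns-route53 plugin is installed and AWS credentials for the hosted zone are available; the domain is a placeholder):

    ```
    # DNS-01 via the Route53 plugin: certbot creates the _acme-challenge TXT
    # record through the AWS API and cleans it up after validation.
    certbot certonly \
      --dns-route53 \
      -d home.example.com
    ```

    Other providers work the same way, just with a different plugin and, usually, a credentials file for that provider’s API token.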

    You can also get fancy and delegate the challenges to a subdomain or to a different domain entirely. So if your main DNS zone is locked down, you can still add a single record that points the challenge at a zone on a different server.
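    A minimal sketch of that delegation idea (all names made up): the locked-down zone gets one static CNAME, and the TXT record is then written into a zone whose API certbot can use.

    ```
    # In the locked-down zone, a single static record delegates the challenge:
    #   _acme-challenge.home.example.com.  CNAME  acme.updatable-zone.example.net.
    # ACME validation follows the CNAME, so certbot only needs API access to
    # updatable-zone.example.net. You can check the delegation with:
    dig CNAME _acme-challenge.home.example.com +short
    ```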

    The big win for going with DNS Challenge is that you can do wildcard certificates.

    So say you are setting up a reverse proxy that will serve vault.home.example.com, fileshare.home.example.com, torrent.home.example.com, and a bunch of others… all you need to do is configure your reverse proxy with a single *.home.example.com cert and it’ll handle anything you throw at it.

    You can’t do that with the normal HTTP challenge, which makes the DNS challenge worth it, IMO, even if you do have a public-facing web server.
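    A sketch of the wildcard request for the example above (same Route53 plugin assumption as before; the live/ paths follow certbot’s default layout, but the directory name can differ):

    ```
    # One certificate covering home.example.com and every *.home.example.com host.
    certbot certonly --dns-route53 \
      -d home.example.com \
      -d '*.home.example.com'

    # The reverse proxy then points every vhost at the same pair of files, e.g.:
    #   /etc/letsencrypt/live/home.example.com/fullchain.pem
    #   /etc/letsencrypt/live/home.example.com/privkey.pem
    ```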


  • The problem with hosting Kubernetes on VPSes is that exposing the Kubernetes API to the public internet is pretty sketchy. I know a lot of people do it, but I don’t like the idea.

    I also like having multiple smaller Kubernetes clusters rather than a single big one. They’re easier to manage and breakage is more isolated. You can incorporate external services into Kubernetes pretty easily using Services and Endpoints.
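    For example, pulling an external database box into a cluster as a normal Service might look something like this (names, namespace, and IP are made up; the mechanism is a selector-less Service plus an Endpoints object with the same name):

    ```
    # A selector-less Service plus a hand-written Endpoints object of the same
    # name makes an outside host reachable inside the cluster as
    # external-db.default.svc on port 5432.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: external-db
    spec:
      ports:
        - port: 5432
    ---
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: external-db        # must match the Service name
    subsets:
      - addresses:
          - ip: 192.168.10.50  # the machine outside the cluster
        ports:
          - port: 5432
    EOF
    ```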

    I suggest using k3s as it is very lightweight, easy to deploy, and conformant with upstream k8s. There is a default set of services that k3s deploys out of the box, designed for more ‘IoT’-style applications, things like servicelb. These can be disabled at install time if you want.
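    A minimal sketch of that (the install script and --disable flags follow current k3s conventions; check the k3s docs for the exact component names on your version):

    ```
    # Install k3s but skip some of the bundled components (servicelb, traefik).
    curl -sfL https://get.k3s.io | sh -s - \
      --disable servicelb \
      --disable traefik
    ```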

    For managing it I like to run Argo CD on an ‘administrative’ Kubernetes cluster local to you. It has no problem connecting to multiple clusters, and its nice declarative YAML files for configuring things work well with a git-based workflow. The web UI is nice and widely used.
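    As a sketch of that workflow (repo URL, cluster address, and app name are all made up): you register a remote cluster once with the argocd CLI, then describe each deployment as a declarative Application object that lives in git.

    ```
    # One-time: register a remote cluster using a context name from your kubeconfig.
    argocd cluster add my-remote-k3s

    # Declarative Application pointing a path in a git repo at that cluster.
    kubectl apply -f - <<'EOF'
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: fileshare
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://git.example.com/me/homelab.git
        targetRevision: main
        path: apps/fileshare
      destination:
        server: https://k3s-remote.example.internal:6443
        namespace: fileshare
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
    EOF
    ```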