Just an explorer in the threadiverse.

  • 2 Posts
  • 67 Comments
Joined 1 year ago
Cake day: June 4th, 2023



  • You misunderstand what the Hot rank is doing. It’s not balancing newness against hotness; it’s scaling hotness according to community size. This might feel like newness if you’re focused on vote counts as a proxy for post age, but it’s a different approach. See https://github.com/LemmyNet/lemmy/issues/3622 for details.

    There are a couple of ways to think about this:

    1. There are a handful of Lemmy communities that are just WAY more active than everything else. The main feeds are kind of lame if you have to scroll past 300 posts to find anything other than a shitpost from the same 3 communities. Scaled Hot rank shows a greater variety of communities by making it easier for small communities to rank as hot.
    2. Or you can consider Hotness to be a rough measure of what percentage of people who have seen the post interacted with it. A post with 500 upvotes in a community with 10,000 active users is kind of popular, but only 5% of the people likely to have scrolled past it cared about it. A post with 50 upvotes in a community with 200 active members is much MORE popular in relative terms, even though the absolute numbers are smaller (there’s a rough sketch of this below).

    At any rate, this preference toward smaller communities in Hot is a recent and deliberate change. While they might further tweak the scaling factors, I wouldn’t expect it to become drastically different. It sounds to me like what you want is Top, Active, or Most Comments. None of these are scaled by community size, so they’ll get you top posts by their absolute metric rather than posts that are doing well relative to their community size.
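    For intuition, here’s a rough sketch of the scaled-hotness idea in Python. The constants and the decay curve are made up for illustration and are not Lemmy’s real formula; the point is just that dividing hotness by (the log of) community size lets the 50-upvote post in the small community compete with the 500-upvote post in the huge one.

    ```python
    import math

    def hot_rank(score, age_hours):
        # Classic time-decayed hotness: upvotes push a post up, age drags it down.
        # (Illustrative constants only, not Lemmy's actual values.)
        return math.log10(max(1, score + 3)) / (age_hours + 2) ** 1.8

    def scaled_hot_rank(score, age_hours, monthly_active_users):
        # Divide by (the log of) community size so small communities can compete.
        return hot_rank(score, age_hours) / math.log(2 + monthly_active_users)

    # 500 upvotes in a 10,000-user community vs 50 upvotes in a 200-user one,
    # both 5 hours old: the small-community post actually ranks a bit higher.
    print(scaled_hot_rank(500, 5, 10_000))  # ~0.0088
    print(scaled_hot_rank(50, 5, 200))      # ~0.0098
    ```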


  • PriorProject@lemmy.world to Selfhosted@lemmy.world · WoL through Wireguard · 10 months ago

    This is a very strong explanation of what’s going on. As a follow-up, I believe ZeroTier presents a single Ethernet broadcast domain, so WoL tricks are more likely to work naturally there than with Wireguard.

    I haven’t used ZeroTier, and I do use Wireguard via Tailscale/Headscale. I’ve never missed ZeroTier’s Ethernet features, and they CAN result in a very chatty WAN if you’re not careful. But I think ZT would make this straightforward.

    Though as other people note… the simplest/least-disruptive change is probably to expose some scripty thing on the rpi that can be triggered over a routed protocol and then emits the Ethernet broadcast on the physical network.
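    For concreteness, here’s a minimal sketch of that scripty thing in Python: a tiny HTTP endpoint you’d run on the rpi and hit over the routed VPN, which then broadcasts the WoL magic packet onto the local Ethernet segment. The MAC address and port are placeholders, not anything from the original setup.

    ```python
    # Minimal sketch: an HTTP endpoint on the Pi that emits the WoL magic packet
    # onto the local Ethernet segment. MAC and port below are hypothetical.
    import socket
    from http.server import BaseHTTPRequestHandler, HTTPServer

    TARGET_MAC = "aa:bb:cc:dd:ee:ff"  # placeholder MAC of the machine to wake

    def send_magic_packet(mac: str) -> None:
        # Magic packet = 6 bytes of 0xFF followed by the MAC repeated 16 times.
        payload = bytes.fromhex("ff" * 6 + mac.replace(":", "") * 16)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(payload, ("255.255.255.255", 9))  # broadcast on the LAN

    class WakeHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path == "/wake":
                send_magic_packet(TARGET_MAC)
                self.send_response(200)
            else:
                self.send_response(404)
            self.end_headers()

    if __name__ == "__main__":
        # Reachable over the routed VPN; trigger with: curl -X POST http://pi:8080/wake
        HTTPServer(("0.0.0.0", 8080), WakeHandler).serve_forever()
    ```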


  • No no, sorry. I mean can I still have all my network traffic go through some VPN service (mine or a provider’s) while Tailscale is activated?

    Tailscale just partnered with Mullvad so this works out of the box for that setup: https://tailscale.com/blog/mullvad-integration/

    For others, it’s a “yes on paper” situation. It will probably often not work out of the box, but it seems likely to be possible as an advanced configuration. At the far end of the range of possibilities, it would definitely be possible to set up a couple of docker containers as one-armed routers, one running your VPN and one running Tailscale as an exit node. Then they can each have their own networking stack, and you can set up your own routes and DNS, delegating only the necessary bits to each one. That’s a pretty advanced setup and you may not have the know-how for it, but it demonstrates what’s possible.


  • To a first approximation, Tailscale/Headscale don’t route any traffic.

    Ah, well damn. Is there a way to achieve this while using Tailscale as well, or is that even recommended?

    Is there a way to achieve what? Force Tailscale to route all traffic through the DERP servers? I don’t know, and I don’t know why you’d want to. When my laptop is at home on the same network as my file-server, I certainly don’t want Tailscale sending file-server traffic out to my Headscale server on the Internet just to download it back to my laptop on the same network it came from. I want NAT traversal to allow my laptop and file-server to negotiate the most efficient network path that works for them… whether that’s within my home lab when I’m there, across the internet when I’m traveling, or routing through the DERP server when no other option works.

    OpenVPN or vanilla Wireguard are commonly set up with simple hub-and-spoke routing topologies that send all VPN traffic through “the VPN server”, but this is generally a slower path than a direct connection. It might be imperceptibly slower over the Internet, but it will be MUCH slower than the local network unless you do some split-DNS shenanigans to special-case the local-network scenario. With Tailscale, it all more or less works the same wherever you are, which is a big benefit. The exception is if you have a true multi-gigabit network at home and the encryption overhead slows you down… but Wireguard is pretty fast and not a problematic throughput limiter in the vast majority of cases.


  • Have a read through https://tailscale.com/blog/how-nat-traversal-works/

    You, and many commenters, are pretty confused about how Tailscale/Headscale work.

    1. To a first approximation, Tailscale/Headscale don’t route any traffic. They perform NAT traversal, and data flows directly between nodes on the tailnet without traversing the Headscale/Tailscale servers.
    2. If NAT traversal fails badly enough, it’s POSSIBLE that bulk traffic can flow through the headscale/tailscale DERP nodes… but that’s an unusual scenario.
    3. You probably can’t run Headscale from your home network and have it perform the NAT traversal functions correctly. Of course, I can’t know that for sure because I don’t know anything about your ISP… but home ISPs that prevent Headscale from doing its NAT traversal job are the norm… one would be pleasantly surprised to find a home network that can do it properly.
    4. Are you really expecting 10Gb/s speeds over your encrypted links? I don’t want to say it’s impossible, people do it… but you’d generally only expect to see this on fairly burly servers that are properly configured. Tailscale only bragged this April about hitting 10Gb/s speeds with recent optimizations: https://tailscale.com/blog/more-throughput/ … on home hardware with a novice config I’d generally expect to see something more like single-gigabit speeds.



  • So I have a question: what can I do to prevent that from happening? Apart from hosting everything on my own hardware of course; for now I prefer to use a VPS for different reasons.

    Others have mentioned that client-caching can act as a read-only stopgap while you restore Vaultwarden.

    But otherwise the solution is backup/restore. If you run Vaultwarden in a docker or podman container using volumes to hold state… then as long as you can restart Vaultwarden without losing data, you also know exactly what data needs to be backed up and what needs to be done to restore it. Set up a nightly cron job somewhere (your laptop is fine if you don’t have somewhere better) to shut down Vaultwarden, rsync its volume dirs, and start it up again. If your VPS explodes, copy these directories to a new VPS at the same DNS name and restart Vaultwarden using the same podman or docker-compose setup.
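    As a rough sketch of that nightly job (the container name and paths below are placeholders, not from any real setup):

    ```python
    #!/usr/bin/env python3
    # Stop Vaultwarden, copy its volume directory, start it again.
    # Run from cron on whatever box holds the backups.
    import subprocess
    from datetime import date

    CONTAINER = "vaultwarden"            # hypothetical container name
    DATA_DIR = "/srv/vaultwarden/data/"  # hypothetical volume path
    BACKUP_DIR = f"/backups/vaultwarden-{date.today()}/"

    def run(*cmd: str) -> None:
        subprocess.run(cmd, check=True)

    run("docker", "stop", CONTAINER)
    try:
        run("rsync", "-a", "--delete", DATA_DIR, BACKUP_DIR)
    finally:
        # Always bring the service back up, even if the copy fails.
        run("docker", "start", CONTAINER)
    ```

    Whatever form the script takes, do a test restore once so you know the procedure actually works before you need it.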

    All that said, keepass+filesync is a great solution as well. The reason I moved to Vaultwarden was so I could share passwords with others in a controlled way. For single-user, I prefer how keepass folders work, and keepass generally has better organization features… I’d still be using it if it were only for myself.


  • My take echoes this. If one puts any stock in streamer recommendations: Baalorlord, who has at various times held Spire world-record win streaks, has recently cited Monster Train as his current favorite Spire-like (other than Spire itself), and also cited Griftlands as a playthrough highlight.

    Baalor probably doesn’t have an opinion on Inscryption, as he tends to avoid things with even a slight horror theme. I enjoyed what I played of Inscryption a lot, but very little about playing it evoked the vibe of playing Spire. Monster Train is quite adjacent though: the mechanics are different enough to feel fresh, but it slots into the same gameplay mood for me, whereas Inscryption is just a different (and still very good) thing.

    Neither has the tight balance of Spire, nor feels quite as deep strategically to me (though in all honesty I’m probably not a strong enough player to be trusted in this regard), but both are fun.





  • It may seem kinda stupid to consider that an accomplishment, but I feel quite genuinely proud of myself for actually succeeding at this instead of just throwing in the towel…

    Way to go. I’ve been at this a decent while and do some pretty esoteric stuff at work and at home… but this loop of feeling stupid, doing the work, and feeling good about a success has been a constant throughout. I spent a week struggling to port some advanced container setups to podman a month or so ago; same feeling of pride when I got them humming.

    It’s not stupid to be proud of an accomplishment even if it’s a fundamental one that’s early in a bigger learning curve. Soak it in, then on to the next high. Good luck.



  • I use Headscale, but Tailscale is a great service and what I generally recommend to strangers who want to approximate my setup. The tradeoffs are pretty straightforward:

    • Tailscale is going to have better uptime than any single-machine Headscale setup, though not better uptime than the single-machine services I use it to access… so not a big deal to me either way.
    • Tailscale doesn’t require you to wrestle with certs or the networking setup required to do NAT traversal. And they do it well: you don’t have to wonder whether you’ve screwed something up that’s degrading NAT traversal only in certain conditions. It just works. That said, I’ve been through the wringer already on these topics, so Headscale is not painful for me.
    • Headscale is self-hosted, for better and worse.
    • In the default config (and in any reasonably user-friendly, non-professional config), Tailscale can inject a node into your network. They don’t and won’t. They can’t sniff your traffic without adding a node to your tailnet, but they do have the technical capability to join a node to your tailnet without your consent… their policy not to do that protects you… but their technology doesn’t. This isn’t some surveillance power grab though; it’s a risk that’s essential to the service they provide… which is determining what nodes can join your tailnet. IMO, the Tailscale security architecture is strong, and I’d have no qualms about trusting them with my network.
    • Beyond 3 users/devices, Tailscale costs money… about $6 US in that geography. It’s a pretty reasonable cost for the service, and proportional in the grand scheme of what most self-hosters spend on their setups annually. IMO, it’s good value and I wouldn’t feel bad paying it.

    Tailscale is great, and there’s no compelling reason that should prevent most self-hosters that want it from using it. I use Headscale because I can and I’m comfortable doing so… But they’re both awesome options.


  • It’s worth considering some commercially developed options as well: https://prostasia.org/blog/csam-filtering-options-compared/

    The Cloudflare tool in particular is freely and widely available: https://blog.cloudflare.com/the-csam-scanning-tool/

    I am no expert, but I’m quite skeptical of db0’s tool:

    • It repurposes a library designed for preventing the creation of synthetic CSAM with stable diffusion. This library is typically used in conjunction with prompt scanning and other inputs to the generation process. When run outside its normal context, on non-AI images, it will lack all this input context, which I speculate reduces its effectiveness relative to the conditions under which it’s tested and developed.
    • AI techniques live and die by the quality of the dataset used to train them. There is not and cannot be an open-source test dataset of CSAM on which to train such a tool. One can attempt workarounds, like classifying features separately: detecting coexisting features related to youth (trained on dataset A of non-sexualized images that include children) and sexuality (trained separately on dataset B of images containing only adult performers)… but the efficacy of open-source solutions is going to be hamstrung by the inability to train, test, and assess the effectiveness of the open tools. Developers of major commercial CSAM scanners are better able to partner with NCMEC and other groups fighting CSAM to assess the effectiveness of their tools.

    I’m no expert, but my belief is that open tools are likely to be hamstrung permanently compared to the tools developed by big companies and the most effective solutions for Lemmy must integrate big company tools (or gov/nonprofit tools if they exist).

    PS: Really impressed by your response plan. I hope the Lemmy world admins are watching this post; I know you all communicate and collaborate. Disabling image uploads is, I think, a very effective temporary response until detection and response tooling can be improved.


  • My money is also on IO. Outside of CPU and RAM, it’s the most likely resource to get saturated (especially if you’re using rotational magnetic disks rather than an SSD; magnetic disks will be the performance limiter by a wide margin for many workloads), and it’s also the one OP said nothing about, suggesting it’s a blind spot for them.

    In addition to the excellent command-line approaches suggested above, I recommend installing netdata on the box, as it will show you a very comprehensive set of performance metrics without having to learn to collect each one on the CLI. A downside is that it will use RAM proportional to the data retention period, which will be an issue if you’re swapping hard. But even a few hours of data can be very useful, and with 16GB of RAM I feel like any swapping is likely to be a gross misconfiguration rather than true memory demand… once that’s sorted, dedicating a gig or two to observability will be a good investment.
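    If you want a quick one-off check before installing netdata, a small script along these lines (assuming the psutil package, which is my addition and not something OP mentioned) can tell you whether the box is IO-bound or swapping:

    ```python
    # Rough stopgap check of disk throughput and swap pressure, sampled every 5s.
    import time
    import psutil

    prev = psutil.disk_io_counters()
    while True:
        time.sleep(5)
        cur = psutil.disk_io_counters()
        read_mb = (cur.read_bytes - prev.read_bytes) / 5 / 1e6
        write_mb = (cur.write_bytes - prev.write_bytes) / 5 / 1e6
        swap = psutil.swap_memory()
        print(f"disk: {read_mb:.1f} MB/s read, {write_mb:.1f} MB/s write | "
              f"swap used: {swap.percent:.0f}%")
        prev = cur
    ```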


  • I kind of want to settle on this read, and the thing that jumped out at me was also his weirdly small head. The trophy skulls everywhere look regular size, though. If this is a head-shrinking bit, that seems incongruous.

    I don’t have other ideas though, beyond “fat man on his way to get eaten”… which I guess is maybe enough. Sometimes they’re that simple. His head looks so weird though, I want it to mean something.

    Edit: I think Lux has it. He’s shrunken his own head to make it an undesirable trophy.