So, I am thinking about getting myself a NAS to host mainly Immich and Plex. Got a couple of questions for the experienced folk:
- Is Synology the best/easiest way to start? If not, what are the closest alternatives?
- What OS should I go for? OMV, Synology’s OS, or UNRAID?
- Mainly gonna host Plex/Jellyfin and Synology Photos/Immich - haven’t quite decided which solutions to go for.
Appreciate any tips :sparkles:
I have Proxmox on bare metal, with an HBA card passed through to TrueNAS Scale. I’ve had good luck with this setup.
The HBA card is passed through to TrueNAS so it gets direct control of the drives for ZFS. I got mine on eBay.
I’m running Proxmox so that I can separate some of my services (e.g. Plex in an LXC) into their own containers/VMs.
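For reference, once IOMMU is enabled in the BIOS/bootloader, handing the card to the VM is only a couple of commands on the Proxmox host. A rough sketch (the VM ID and PCI address are placeholders; use whatever your HBA actually shows up as):

```bash
# Find the PCI address of the HBA (commonly an LSI/SAS controller)
lspci -nn | grep -i sas

# Pass the whole card through to the TrueNAS VM
# (VM ID 100 and the address are hypothetical; pcie=1 needs the q35 machine type)
qm set 100 -hostpci0 0000:01:00.0,pcie=1

# After restarting the VM, TrueNAS sees the attached disks directly
# and can build and scrub the ZFS pool itself
```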
This is a great way to set this up. I’m moving over to it in a few days. Right now I have a temporary setup with ZFS directly on Proxmox and an OMV VM handling shares, because my B450 motherboard’s IOMMU groups won’t let me pass my GPU and an HBA through to separate VMs (note for OP: if you cannot pass your HBA through to a VM, this setup is not a good idea). I ordered an ASRock X570 Phantom Gaming motherboard as a replacement ($110 on Amazon right now, which is a great deal) that will have more granular IOMMU groups.
My old setup was similar but used ESXi instead of Proxmox. I also went nuts and virtualized pfSense on the same PC. It was surprisingly stable, but I’m keeping my gateway on a separate PC from now on.
If you can’t pass through your HBA to a VM, feel free to manage ZFS through Proxmox instead (CLI or with something like Cockpit). While TrueNAS is a nice GUI for ZFS, if it’s getting in the way you really don’t need it.
TrueNAS has nice defaults for managing snapshots and the like that make it a bit safer, but yeah, as I said, I run ZFS directly on Proxmox right now.
Oh sorry, for some reason I read OMV VM and assumed the ZFS pool was set up there. The Cockpit ZFS Manager extension that I linked has good snapshot management as well, which may be sufficient depending on how much power you need.
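And if you end up managing snapshots by hand on the Proxmox host instead, the basic commands are short enough to put in a cron job; a rough sketch with a hypothetical pool/dataset name (tank/media):

```bash
# Take a dated snapshot of a dataset (pool/dataset name is hypothetical)
zfs snapshot tank/media@manual-$(date +%Y-%m-%d)

# List existing snapshots for that dataset
zfs list -t snapshot -r tank/media

# Roll the dataset back if something goes wrong
# (rollback only targets the newest snapshot unless you destroy newer ones with -r)
zfs rollback tank/media@manual-2024-01-01

# Tools like sanoid or zfs-auto-snapshot can automate the schedule and pruning
```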
I’d love to find out more about this setup. Do you know of any blogs/wikis explaining that? Are you separating the storage from the compute with the HBA card?
This is a fairly common setup and it’s not too complex - learning more about Proxmox and TrueNAS/ZFS individually will probably be easiest.
Usually:
- Proxmox on bare metal
- TrueNAS Core/Scale in a VM
- Pass the HBA PCI card through to TrueNAS and set up your ZFS pool there
- If you run your app stack through Docker, set up a minimal Debian/Alpine host VM (you can technically use Docker under an LXC, but experienced people keep saying it causes problems eventually and I’ll take their word for it)
- If you run your app stack through LXCs, just set them up through Proxmox normally
- Set up an NFS share through TrueNAS, and connect your app stack to that NFS share (the client-side mount is sketched after this list)
- (Optional): Just run your ZFS pool on Proxmox itself and skip TrueNAS
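The client side of that NFS step, assuming a Debian Docker VM and a made-up server IP/export path, looks roughly like this (the export itself is configured in the TrueNAS UI):

```bash
# On the Debian Docker VM: install the NFS client and mount the TrueNAS export
# (192.168.1.50 and /mnt/tank/media are placeholders for your own setup)
apt install nfs-common
mkdir -p /mnt/media
mount -t nfs 192.168.1.50:/mnt/tank/media /mnt/media

# Or make it persistent via /etc/fstab:
# 192.168.1.50:/mnt/tank/media  /mnt/media  nfs  defaults,_netdev  0  0

# Containers then just bind-mount the path as usual, e.g. -v /mnt/media:/media
```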
I already run Proxmox but not TrueNAS. I’m really just confused about the HBA card. Probably a stupid question, but why can’t TrueNAS access regular drives connected to SATA?
The main problem is just getting TrueNAS access to the physical disks via IOMMU groups and passthrough. HBA cards are a super easy way to get a dedicated IOMMU group with all your drives attached, so it’s common for people to use them in these sorts of setups. If you can pull your normal SATA controller down into the TrueNAS VM without messing anything else up on the host layer, it will work the same way as an HBA card as far as TrueNAS is concerned.
(To my knowledge, SATA controllers are usually an all-or-nothing passthrough, so if your host system is running off some part of that controller, it probably won’t work to unhook it from the host and hand it to the guest.)
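If you want to check what your board can do before buying anything, you can dump the IOMMU groups from the Proxmox shell with the usual one-liner; if the SATA controller shares a group with your boot device or GPU, it can’t be cleanly split off on its own:

```bash
# Print every PCI device grouped by its IOMMU group
for d in /sys/kernel/iommu_groups/*/devices/*; do
  g=${d#*/iommu_groups/}; g=${g%%/*}
  printf 'IOMMU group %s: ' "$g"
  lspci -nns "${d##*/}"
done
```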
Makes sense, thanks for the info
That was one of the things I got wrong at first as well, but it makes things much easier in the long run.
So theoretically, if someone has already set up their NAS (custom Debian with ZFS root instead of TrueNAS, but that shouldn’t matter), it sounds like it should be relatively straightforward to migrate all of that into a Proxmox VM by installing Proxmox “under it”, right? The only thing I’d need right now is some SSD for Proxmox itself.
Proxmox would be the host on bare metal, with your current install as a VM under that. I’m not sure how to migrate an existing bare-metal install into a VM, so it might require backing up configs and reinstalling.
You shouldn’t need any extra hardware in theory, as Proxmox will let you split up the space on a drive to give to guest VMs.
(I’m probably misunderstanding what you’re trying to do?)
I just thought that if all storage can easily be “passed through” to a VM then it should in theory be very simple to boot the existing installation in a VM directly.
Regarding the extra storage: sharing disk space between Proxmox and my current installation would imply that I have to pass through “half of a drive”, which I don’t think works like that. Also, I’m using ZFS for my OS disk and I don’t feel comfortable trying to figure out whether I can resize those partitions without breaking anything ;-)
That should work, but I don’t have experience with it. In that case yeah you’d need another separate drive to store Proxmox on.
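For what it’s worth, Proxmox can also attach a whole physical disk to a VM without an HBA by referencing it by ID, which is roughly what booting an existing install inside a VM would rely on (the VM ID and disk ID below are placeholders):

```bash
# List stable disk IDs on the Proxmox host
ls -l /dev/disk/by-id/

# Attach the entire physical disk to VM 101 (the ID string is hypothetical)
qm set 101 -scsi1 /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL

# The guest then sees the raw disk; note this is not the same isolation as
# passing through the controller itself, which is why the HBA route is preferred for ZFS
```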
This is 100% my experience and setup. (Though I run Debian for my Docker VM.)
I did run Docker in an LXC but ran into some weird permission issues that shouldn’t have existed. Ran it again in a VM and had no issues with the same setup. Decided to keep it that way.
I do run my Plex and Jellyfin in an LXC though. No issues with that so far.