TL;DR - What are you running as a means of “antivirus” on Linux servers?
I have a few small Debian 12 servers running my services and would like to improve my security posture. Some services are exposed to the internet, and I’ve done quite a few things to protect both the services and the hosts. When it comes to “antivirus”, I was looking at ClamAV, as it seemed to be the most recommended option. However, the documentation states that the recommended RAM is at least 2-4 GB. Some of my servers have more power than others, but a few don’t meet this requirement. The lower-powered hosts are RPi 3s and some Lenovo Tinys.
When I searched for alternatives, I came across rkhunter and chkrootkit, but they no longer seem to be maintained, as their latest releases were several years ago.
If possible, I’d like to run the same software across all my servers for simplicity and uniformity.
If you have a similar setup, what are you running? Any other recommendations?
P.S. if you are of the mindset that Linux doesn’t need this kind of protection then fine, that’s your belief, not mine. So please just skip this post.
I’m a senior Linux/Kubernetes sysadmin, so I deal with system security a lot.
I don’t run ClamAV on any of my servers, and there are much more important ways to secure a server than scanning for Windows viruses.
If you’re not already running your services in Docker, you should. It’s extremely useful for automating deployment and updates, and it also sets a baseline for isolation and security that you should follow. By running all your services in Docker containers, you always know that all of your subcomponents are up to date, and you can update them much faster and more easily. You also get the peace of mind of knowing that even if one container is compromised by an attacker, it’s very hard for them to compromise the rest of the system.
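As a purely illustrative sketch of what that baseline looks like, here’s a minimal docker-compose.yml; the image name and port are placeholders, not a recommendation:

```yaml
# docker-compose.yml -- illustrative only; image and port are placeholders
services:
  myservice:
    image: ghcr.io/example/myservice:1.2.3   # pin a version so updates are deliberate
    ports:
      - "8080:8080"                          # expose only what's needed on the host
    restart: unless-stopped
```

Updating then becomes `docker compose pull && docker compose up -d`, which is what makes keeping all the subcomponents current so much easier.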
OWASP has published a top-10 list of security measures you can take once you’ve set up Docker.
https://github.com/OWASP/Docker-Security/blob/main/dist/owasp-docker-security.pdf
This list doesn’t seem like it’s been updated in the last few years, but it still holds true.
Don’t run as root, even in containers
Update regularly
Segment your network services from each other and use a firewall.
Don’t run unnecessary components, and make sure everything is configured with security in mind.
Separate services by security level by running them on different hosts
Store passwords and secrets in a secure way. (usually this means not hardcoding them into the docker container)
Set resource limits so that one container can’t starve the entire host.
Make sure that the docker images you use are trustworthy
Set up containers with read-only file systems, only mounting read/write tmpfs directories in specific locations
Log everything to a remote server so that logs cannot be tampered with. (I recommend opentelemetry collector (contrib) and loki)
The list goes into more detail.
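To make that list concrete, here’s a hedged sketch of a docker-compose.yml that applies several of those points at once (the image name, limits, and paths are placeholders, not recommendations):

```yaml
# docker-compose.yml -- a sketch applying several of the points above;
# image, limits, and paths are placeholders
services:
  app:
    image: ghcr.io/example/app:1.4.2       # pinned, trusted image
    user: "1000:1000"                      # don't run as root, even in containers
    read_only: true                        # read-only root filesystem
    tmpfs:
      - /tmp                               # writable scratch space only where needed
    security_opt:
      - no-new-privileges:true             # block privilege escalation via setuid binaries
    mem_limit: 256m                        # one container can't starve the host
    cpus: "0.50"
    env_file: .env                         # secrets live outside the image, not hardcoded
    networks:
      - app_net                            # segment services into their own networks
networks:
  app_net: {}
```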
Hey, kinda off topic but what’s the best way to get into a Linux/Kubernetes admin role? I’ve got a degree in networking, several years of helpdesk experience and I’m currently working as an implementation specialist.
Is that something I could simply upskill and slide into or are there specific certs that will blow the doors open for new opportunities?
Sure! I got my start with this sort of tech just by running Docker containers on my home server for stuff like Nextcloud and game servers. I did tech support for a more traditional web hosting MSP for a while, and then I ended up getting hired as a DevOps trainee for an internal platform team doing Kubernetes. I did some Kubernetes consulting after that and got really experienced with the tech.
I would say to try running some Docker containers and learn their pros and cons, and then start studying for the CKAD certification. The CKAD cert is pretty comprehensive, and it’ll show you how to run Docker containers in production with Kubernetes. Kind is a great way to get a Kubernetes cluster running on your laptop. For longer-term clusters, you can play around with k3s on-prem, or otherwise I would recommend Digital Ocean’s managed Kubernetes. Look into ArgoCD once you want to get serious about running Kubernetes in production.
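For a taste of what CKAD-style practice looks like, here’s a minimal Deployment manifest you could apply to a Kind or k3s cluster (names are placeholders; the unprivileged nginx image is just one example of an image that runs as non-root):

```yaml
# deployment.yaml -- the kind of manifest CKAD drills; names are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: nginxinc/nginx-unprivileged:1.27   # example non-root image
          ports:
            - containerPort: 8080                   # this image listens on 8080
          securityContext:
            runAsNonRoot: true                      # enforce the non-root rule
          resources:
            limits:                                 # cap CPU/memory per pod
              memory: "128Mi"
              cpu: "250m"
```

Apply it with `kubectl apply -f deployment.yaml` and poke at it with `kubectl get pods` and `kubectl describe` to build the muscle memory the exam expects.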
I think with a CKAD cert you can land a Kubernetes job pretty easily.
I would probably only recommend the CKA cert on the path to CKS. CKA gets into a lot of the nitty-gritty of running a Kubernetes cluster that I think most small-to-medium companies would skip, opting for a managed solution instead.
Kubernetes has a steep learning curve, since you need to understand Operations top-to-bottom to start using it, but once you have it in your tool belt, it gives you endless power and flexibility when it comes to IT Operations.
Big eyeroll on this shit here 🙄
Containers aren’t more secure, they are just less likely to be a propagation point to something that might ransack a Windows network.
A vulnerable runtime is a vulnerable runtime. If it’s exposed to a public network, it will eventually be found and breached. Stop spouting this “containers everywhere” bullshit in the name of security. It’s asinine, and makes you sound bad at your job. A bare metal server running any runtime version of anything is just as vulnerable as any container, you twat.
I don’t see how anything I said justifies you calling me names or saying I’m bad at my job. Chill out.
Containers allow for more defense-in-depth, along with their many other benefits to maintainability, updatability, reproducibility, etc. Sure, you can exploit the same RCE vuln on both traditional VMs and containers, but an attacker who gets shell access in a container is not going to be able to do much. There are no other processes or files for them to talk to or attack. There’s no obvious way to gain persistence, since the file system is read-only, or at the very least everything will be wiped the next time the container is updated or moved. Containers eliminate a lot of options for attackers, and make it easier for admins to set up important security measures like firewalls and an update routine.
Obviously, containers aren’t always going to be the best choice for every situation; architecting computer systems means weighing a lot of different variables. Maybe your application can never be restarted and needs a more durable VM-based solution. Maybe your application only runs on Windows. Maybe your team doesn’t have experience with Kubernetes. Maybe your vendor only supplies VM images. But running your applications as stateless containers in Kubernetes solves a lot of problems that we’ve historically had to deal with in IT operations, both technically and organizationally.
The problem is you’re presenting this to people as a solution to a question that has zero to do with the valid applications of containers, some of which you just mentioned. Containers have a purpose, sure. I’m not arguing against that. What I’m incensed by is devs commenting similar awful solutions to a legit problem, and it’s increasingly becoming “use a container for that” for almost any concern, which is not only sending people down the wrong road, it’s just poor advice.
Another note on your response, which is essentially “access to container won’t get you much”. Compromising a container gives you access to whatever that container has access to. Your position that it is somehow more secure is just 100% wrong. Refined and granular access controls to resources is the security layer, NOT the container. Sure, you probably can’t affect the container host, but who cares when you’ll expose whatever that container has access to, which is data and services. Same as any VM or bare metal server.
Containers are a portable way to exploit resources more efficiently, and that’s it.
I respectfully disagree. Containers are 100% the right choice in this situation. They provide the defense-in-depth and access controls that combat the threats that OP is targeting by using ClamAV.
The goal isn’t securing a single database through a single attack vector. And it’s not like ClamAV would help you with that, either. The goal is preventing attackers from using your infra’s broad attack surface to get inside, and then persisting and pivoting to get to that database.
It’s just not true that you can get the same level of security by running everything bare-metal, especially as a one-man, self-hosted operation.
I don’t think we disagree at all? Mitigations like containers and jails are a far more appropriate tactic than antivirus. If this comment is with respect to my claim that Linux is not amazing at isolation by default, I am referring to the fact that applications have access to the full filesystem by default with the permissions of the user that they run under (which is scary as hell on plain desktop Linux), and additional mitigations like containers, jails, mandatory access controls, or even just daemon users are necessary to keep things more isolated. I think we’re just on the same page, sorry if it was unclear! I don’t think running ClamAV or any antivirus is particularly valuable, but I guess in theory a tool like that could help detect issues… but eh.