My Experiences with Arch Linux HIDS
I spent the weekend attempting to harden my home server a bit and was uncomfortably disappointed with the options available to Arch Linux. To begin my search I started with the Arch Security wiki page. There are many great suggestions there, many of which I've adopted. The real hope for my search, though, was to install an all-encompassing host-based intrusion detection system (HIDS). I had hopes the solution would cover packet analysis, file integrity checks, and log aggregation and monitoring. I quickly came to realize no such tool existed. Upon further research I gathered I would be able to roll multiple tools together and achieve the level of protection I desired.
File Integrity Checks
I've chosen to install AIDE to cover this requirement. This is a tool that creates a baseline database during an initial run and is periodically executed to compare against, and update, that baseline. The goal is to detect file integrity exploits, for example malicious code added to a user's .bashrc file so that it executes on each login. AIDE is manifested as a low-level command line utility that can be executed manually or scheduled as a cronjob (I've opted for the latter).
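For the curious, the basic workflow looks something like the following. The database paths are the ones I believe Arch's default /etc/aide.conf uses; adjust to match your own config:

```shell
# Build the initial baseline database (run once, after configuring /etc/aide.conf)
aide --init

# AIDE writes the new database to the database_out path; activate it
# by moving it to the path the check run reads from
mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz

# Subsequent runs compare the live filesystem against the baseline
aide --check

# Example root crontab entry: nightly check at 03:00, output mailed via cron
# 0 3 * * * /usr/bin/aide --check
```

After reviewing a check's output and accepting any legitimate changes, `aide --update` re-runs the check while also writing a fresh baseline.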
Network Packet Analysis

There are many well-known applications for network packet analysis. These include, but are not limited to, the two industry standards Snort and Suricata. I've had limited experience with both but opted to bypass them for this deployment for a number of reasons.
- Keeping rules up to date is difficult
- Overkill for a single host
As an alternative, I'm going to continue running a simple iptables firewall with sshguard on top. iptables, for those unfamiliar, provides a rulebase controlling which sources are allowed to access which ports on the local system. Since my use case has only a few open ports, this should suffice. Additionally, sshguard detects failed login attempts for ssh. It can also be configured to drop packets from sources that are obnoxiously crawling other services (in the case of HTTP/HTTPS).
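As a sketch, a minimal ruleset for a host like this might look as follows. The open ports are illustrative, and the `sshguard` chain is the one sshguard's iptables backend inserts its blocks into:

```shell
# Default-deny inbound; always allow loopback and return traffic
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Hand packets to sshguard's chain first so blocked sources are dropped early
iptables -N sshguard
iptables -A INPUT -j sshguard

# The few services actually exposed
iptables -A INPUT -p tcp --dport 22 -j ACCEPT    # ssh
iptables -A INPUT -p tcp --dport 443 -j ACCEPT   # https
```

Rules added this way are not persistent across reboots; on Arch they would normally be saved with `iptables-save` and restored by the iptables systemd service.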
Log Aggregation and Monitoring

This is the main area where I have been dangerously let down. It's common knowledge that sysadmins tend to give up on log analysis when the volume grows large. In my case, I don't want to spend more than five minutes a day going over logs for my home server, and an aggregation tool could easily provide that level of simplicity.

The first failure came when looking at logwatch. The shortcomings of systemd (of which the greater Linux community agrees there are plenty) rear their ugly head here. logwatch, being quite outdated, relies on plain log files instead of the default journald system. The most sane way to pipe journald logging to regular files is the syslog-ng application. In my opinion, this is insane; deliberately duplicating logs is unfounded, which is why I've opted to forgo the logwatch route. The next option is a newer project called journalwatch. Again, this is an underdeveloped solution. Finally, I've fallen back on something custom written, logram. I'm hoping this will be a lightweight, cron-able solution, and will update as the project progresses.
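In the meantime, a crude cron-able journal review can be approximated with journalctl alone. The priority threshold, time window, and line cap here are just my guesses at sensible defaults:

```shell
# Daily digest: everything at warning priority or worse since yesterday,
# with repeated messages counted and the noisiest shown first
journalctl --since yesterday -p warning --no-pager --output cat \
    | sort | uniq -c | sort -rn | head -n 50

# Example crontab entry; cron mails stdout to MAILTO if it is set
# 30 6 * * * journalctl --since yesterday -p warning --no-pager --output cat | sort | uniq -c | sort -rn | head -n 50
```

This is nowhere near a real aggregation tool, but it fits the five-minutes-a-day budget until logram matures.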