As many would or should know..
I have a home server that I've been running for about three and a half years now, and it's been the home of my HIVE witness node(s).
It's been running solidly for all those years, barring some downtime for maintenance, OS upgrades and the odd power hiccup.
It's been so stable that I'm honestly reluctant to change anything; you know the saying, if it ain't broke, don't fix it.
That being said, I've been thinking about spinning up a full API node once again, as the server is more than capable of handling a full node and serving a decent chunk of traffic. I've got a decent 100Mbit/s upload, a fair chunk of which I can hand off to the API node.
Naturally I'd implement a great deal of rate limiting and caching, because I might as well do it properly and be efficient!
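In my head that looks something like nginx sitting in front of the node with a per-IP request limit and a short-lived response cache. This is only a rough sketch of my own, not something taken from the setup suite, and the zone sizes, cache path, domain and upstream port are all made-up placeholders:

```
# Sketch: nginx in front of the API node. Limits, paths, domain and port are placeholders.
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=20r/s;
proxy_cache_path /var/cache/nginx/hive levels=1:2 keys_zone=hive_cache:100m max_size=5g inactive=60s;

server {
    listen 80;                                       # TLS termination left out to keep the sketch short
    server_name api.example.com;

    location / {
        limit_req zone=api_limit burst=40 nodelay;   # cap each IP's request rate
        proxy_cache hive_cache;
        proxy_cache_valid 200 3s;                    # cache good responses for a few seconds
        proxy_pass http://127.0.0.1:8091;            # wherever the API endpoint ends up listening
    }
}
```

Worth noting that most Hive API traffic is JSON-RPC over POST, so real caching needs more care than this (keying the cache on the request body and explicitly allowing POST), but that's the general shape of it.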
The issue?
Non-standard server setup..
The really awesome, super simple HAF node setup suite that blocktrades and friends have been working on generally expects to be placed on a ZFS file system, usually on a bare-metal machine.
My server is most certainly not that! For one, I don't like touching ZFS, as I haven't been able to get it set up in a way that doesn't very quickly chew through the lifespan of my SSDs and NVMe drives. It's also not absolutely required; it's just really nice for backups and recovery, among other things.
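To be fair, there are a handful of knobs people usually point me at for keeping ZFS write amplification down on flash, something along these lines (pool name and device paths are placeholders, and I've clearly not found the combination that works for me yet):

```
# Placeholder pool/device names - the usual wear-reducing tunables people suggest
zpool create -o ashift=12 -o autotrim=on tank /dev/nvme0n1 /dev/nvme1n1
zfs set compression=lz4 tank    # compress before writing, so fewer bytes hit the flash
zfs set atime=off tank          # stop rewriting metadata on every read
zfs set recordsize=128K tank    # match record size to the workload
```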
My server is also not a barebones machine but runs on Proxmox and is made up of many VMs (virtual machines).
It's not completely impossible for me to run this new suite however.
I could very well wangle my way around things and get HAF and the various other apps running on a VM manually, as I do for HiveD, the core witness software.
I could also very well try to cut the ZFS-specific stuff out of the HAF node suite code to bypass it and simply run it that way.
I'm not strapped for options here, but I'm very much not in a position to be experimenting with setups, primarily because it's all done on a production server. Replaying all that data multiple times on NVMe drives makes me fear for their lifespans, as I'd like them to not wear out faster than necessary.
I'd very much prefer to be able to just spin up the OS and go replay just the once, but I feel that will be unlikely.. because I'm dumb.
Another aspect I need to consider: do I continue running FULL HiveD nodes, or do I have them running as light nodes?
Since light nodes are basically a thing now, I can have my witnesses keep only the last 2 million blocks or so, saving a lot of space.
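If I've read the recent hived release notes right, that pruning behaviour hangs off a block log split option in config.ini. I'm going from memory here, so treat the option name and value below as an assumption to verify against the current docs rather than gospel:

```
# config.ini sketch - option name and semantics from memory, verify against current hived docs
# Positive values split the block log into 1M-block parts and keep only the newest ones.
block-log-split = 2
```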
On that note however.. do I even care? I mean, I have 16TB of RAID 0 NVMe storage on my server. I'm hardly strapped for space.
There's a lot to think about. I really should get into researching how difficult it'd be to alter the HAF setup suite to strip out the ZFS-related stuff. I know for a fact the manual route is full of pitfalls that can lead to having to replay multiple times due to errors in setup and config, so I'm hoping to avoid that.
There is also the possibility of setting up a VM with a ZFS filesystem and then loading up the snapshot that way.. I just don't know what the performance ramifications would be of running ZFS in a VM on top of a Proxmox host where the drives are in RAID 0 at the host level.
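One middle ground I keep circling back to is skipping the virtual disk layer for that VM entirely and passing the raw NVMe drives through, so ZFS inside the guest sees real disks rather than sitting on a virtual disk on top of the host's array (which would mean giving up the host-level RAID 0 for those drives). On Proxmox that's roughly a matter of the following, with the VM ID and disk IDs being placeholders:

```
# On the Proxmox host: attach whole physical disks to VM 101 (ID and serials are placeholders)
qm set 101 -scsi1 /dev/disk/by-id/nvme-Example_Model_SerialA
qm set 101 -scsi2 /dev/disk/by-id/nvme-Example_Model_SerialB
```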
Either way, there is only one real way to find out, and that is to try to get this FULL API node up and running.
If there are any sysadmin people out there who have tips and insights they'd like to share, I'd love to hear them.
On another note
The somewhat good news is that I do have a good chunk of cash put to one side as a server emergency fund, so if my hardware explodes I should at least be able to replace it with relative ease.
Every vote helps to ensure HIVE is decentralised and secure! Please consider voting for me!