I’ve got a QNAP TS-873-4G with 2x 10TB drives in RAID-1, and 5x 8TB drives in RAID-6. Would I be better off investing in 32GB of RAM, or a 1TB SSD? And if SSD, cache or Qtier? I’m assuming any spare RAM gets used as a disk cache, so I would expect some direct speed benefits, but is this likely to be more or less noticeable than an SSD cache? If I go the SSD route, I get the impression that the cache option may be safer because I can remove or re-allocate it later, but I can’t really find a good breakdown of when I would use one vs. the other.

My workload is somewhat variable: the 10TB array is live data for a small number of users working via SMB or Resilio Sync, with periodic dumps of virtual machine hard drives (backed up, not used live at this time). The 8TB array is large, predominantly static media files, typically 1-4GB each, plus a bunch of metadata. A Drobo was able to keep up comfortably, so I don’t anticipate any performance issues here. Long term I expect to add both RAM and one or more SSDs, but in the short term, I’d like to get the best performance boost for the money.

Update: I decided on 32GB of RAM for the moment, as I have a few things I could move into Docker containers that would make sense to have on the NAS (things that rely on the NAS anyway). Then I noticed I have an Intel 540 series 2.5" 480GB SSD kicking around, so I tossed it in. I ran QNAP’s performance test: my rotational drives come in at 190-240MB/s and the SSD at 370MB/s (not totally unexpected for this model), but the SSD manages 60,886 IOPS vs. around 110 for the rotational drives. With that in mind, I configured the SSD cache in Random I/O mode with the block size capped at 1MB - there’s no point in putting larger chunks of consecutive data on the SSD, as it’s not much faster sequentially, but it can save me some seek time. I’ll probably still look into a pair of M.2 cache drives in the future, but with the "free" (already owned) 480GB SSD I can have the best of both worlds for now. Thanks for the insight everyone, and if I think of it I’ll update again once I have the RAM installed, if there is any significant performance difference.

Thanks for the insight, I appreciate it! Nearly everything I have is well backed up (B2 via Restic currently) or comes from the internet (Linux ISOs, lol, but also every version of Windows I’ve needed to test against, various other software and virtual machines ready for rapidly spinning up test environments, even local mirrors of things like WinRAR, OpenVPN, Tor and similar so that I can load them into virtual machines without connecting them to the internet - for all of this stuff, the backup literally is "the internet"). But restoring is a pain (B2 costs money, although not actually that much), as does finding and re-downloading various software packages, etc.
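Since B2 charges for downloads, the Restic-to-B2 backups mentioned above hurt less on restore if you pull files selectively instead of whole snapshots. A minimal sketch of that workflow, assuming restic is installed; the bucket name, paths, and credentials below are placeholders, not values from this post:

```shell
# Backblaze B2 credentials and repository password (placeholder values)
export B2_ACCOUNT_ID="000xxxxxxxxxxxx"
export B2_ACCOUNT_KEY="K000yyyyyyyyyyyyyyyy"
export RESTIC_PASSWORD="change-me"

# One-time: initialise a repository in a B2 bucket (hypothetical bucket name)
restic -r b2:my-nas-backups:/restic init

# Regular backup of a NAS share
restic -r b2:my-nas-backups:/restic backup /share/media

# Restore selectively - only the downloaded data incurs B2 egress charges
restic -r b2:my-nas-backups:/restic restore latest \
    --target /share/restore \
    --include /share/media/projects
```

The `--include` filter on restore is what keeps the B2 bill down: restic only fetches the pack files needed for the matching paths, rather than the full snapshot.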