My use is somewhere in between all of you, but will probably change based on comments.

Pool 1: 2x250GB SSD for appdata and Docker. Might split that out later if the number of dockers grows too much.

Pool 2: 2x500GB SSD as a landing spot for completed DLs and all other file shares. Mover set to 75%, plus a cron job to move everything to the array overnight, every night. I just prefer stuff on the array, I guess. Might change that based on comments though, to keep recent files on cache a while longer and avoid spinning up the array.

Likely to convert my VM SSD to a pool in the coming weeks - single SSD now, but I will likely add a second SSD there too. Likely to convert the scratch disk to a pool as well, with cache set to "prefer" - so in the very rare case it fills up, it can steal space from the array until the cache clears out.

A Windows VM is super lightly used, mostly for coordinating some backups that don't work right in Syncthing, and it coordinates the offsite backup overnight using the provider's Windows app. Also, a pfSense failover node is a VM here (the primary FW is bare metal) with a 4-port NIC passed through.

SSD2: scratch disk for downloads and tdarr cache. Plex transcodes to memory (128GB in my server, so lots of room); I would prefer tdarr to transcode to memory too, but multiple nodes require physical disk access.

The backup server runs Syncthing to accept changed/new/deleted files in a one-way sync from the primary, and shuts down once the disks spin down after 30 minutes. It also runs a backup Plex server in case I royally F things up on my primary box - I can just switch over to it and disable S3 sleep. Pool 1: single 120GB SSD for appdata and a small number of dockers. Array: non-parity-protected disks (it's a backup server and I have offsite copies of the stuff that really matters), backed up weekly. All shares write directly to disk, as there is no need for an intermediate step and no parity calc. Plus, S3 sleep doesn't seem to respect network activity, but it does respect disk activity status.
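The nightly "move everything to the array" schedule described above can be done with a plain cron entry that calls unRAID's mover script. A minimal sketch, assuming the stock mover path (`/usr/local/sbin/mover`) and a 03:40 run time - verify the path on your own install:

```shell
# Hypothetical root crontab entry: invoke the unRAID mover every night at 03:40
# so the cache pools are flushed to the array regardless of the 75% threshold.
# The mover path is an assumption; check where your unRAID version keeps it.
40 3 * * * /usr/local/sbin/mover >/dev/null 2>&1
```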
- Install, set up, and test the CA Backup plug-in.
- Use an SSD with the largest TBW you can find.
- Avoid shares with spaces in their name. (Should be working now, but there have been problems in the past in both my plug-in and unRAID's default code.)

I took over the plug-in from Squid because I wanted to use my cache and system more efficiently. It started with modifying the "percent used" and adding the "days old". This allowed me to keep my recent media files on the cache drive longer. It checks for the cache drive to be more than 60% full, and for files to be older than 25 days. (Seems to be my sweet spot; yours may differ.) It has since grown in options as other users present use cases.

I currently run 1x 2TB NVMe drive (Inland Premium with 3500 TBW). I back up nightly, so I can afford to lose media if the drive dies on me. Also, CA Backup takes a snapshot of all my unRAID-specific items that use the cache drive.

Set minimum free space on the cache to at least your largest expected file. This lets it fall over to the array if the cache becomes full and not cause errors. Generally, the large files used in unRAID are media/movie files, so if you have uncompressed Blu-rays, I would suggest something like 80GB of minimum free space. However, if your largest file is only 20GB, then set it to that. Setting it larger than your best-guess largest file would just waste that much of the drive, and if you set it larger than the drive itself, then I assume it would never be used. But I have not tested the upper bound of that property.

"This lets it fall over to the array if cache becomes full and not cause errors."

unRAID will write the file to the array instead of trying to write it to the cache drive. If the cache drive becomes completely full, it causes an error because it can't write the file anymore. Also, appdata from your dockers is usually on the cache drive too, thus causing issues for your Docker apps, as they need to write to the cache as well.

Wow, some super interesting thoughts here, and way more ways to use cache than I ever imagined.
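The "more than 60% full, files older than 25 days" test the plug-in author describes can be sketched in a few lines of shell. This is only an illustration of the two conditions, not the plug-in's actual code; the path, thresholds, and function names are assumptions:

```shell
#!/bin/sh
# Sketch of a mover-tuning style check: only consider moving files when the
# cache pool is more than 60% full, and then only files older than 25 days.

# Print the filesystem usage of $1 as a bare number, e.g. 62 for "62%".
cache_used_pct() {
    df -P "$1" | awk 'NR==2 { sub(/%/, "", $5); print $5 }'
}

# List files under $1 whose mtime is older than $2 days; these would be the
# mover candidates (a real mover would relocate them to the array).
old_files() {
    find "$1" -type f -mtime +"$2" -print
}

CACHE=${CACHE:-/mnt/cache}   # assumed cache mount point
if [ -d "$CACHE" ] && [ "$(cache_used_pct "$CACHE")" -gt 60 ]; then
    old_files "$CACHE" 25
fi
```

Both knobs mirror the settings mentioned in the post: raise the percentage to keep more on cache, raise the age to hold recent media longer.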
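The minimum-free-space behaviour the quoted reply explains boils down to a tiny decision rule: if the cache has less free space than the configured minimum, the new file goes to the array instead. A hedged sketch of that rule (the function name and the example numbers are mine, not unRAID's):

```shell
#!/bin/sh
# Illustration of the "fall over to the array" rule described in the thread.
# choose_target FREE_GB MIN_FREE_GB -> prints "cache" or "array".
# unRAID's exact edge-case semantics may differ; this is just the idea.
choose_target() {
    if [ "$1" -ge "$2" ]; then
        echo cache   # enough headroom: write to the cache pool
    else
        echo array   # below the minimum: skip cache, write to the array
    fi
}

# With the 80GB minimum suggested for uncompressed Blu-rays:
choose_target 120 80   # prints: cache
choose_target 50 80    # prints: array
```

This is why the advice is to set the minimum at least as large as your biggest file: the check happens before the write, so the cache must have room for the whole file or the write should go to the array.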