To NAS or not to NAS, that is the question.

These are servers, not NASs. So they would require more effort to set up, but they are more flexible, and in some ways more powerful.

If you wanted Plex, you'd set it up yourself. If you wanted music and video sharing that worked with your streaming devices (Chromecast, Roku, Apple TV, Amazon Fire Stick, etc.), you'd have to set up a server program for that.

The NAS has many/most of those things built in; you just have to enable them and configure them.

The server is a blank slate, and other than basic file and print sharing, you'll have to find software, install it, configure it, and update it as needed.
 

Dave

Staff member
I have ordered the server Gas linked and will be loading it with Amahi. I also ordered a UPS, something I've needed for some time.

So I'll let you know how it all goes once I get the stuff (about the 16th or so).

So much thanks to @GasBandit, @PatrThom, & @stienman. Cross your fingers.
 
I would still recommend NAS4Free or FreeNAS (in that order) over Amahi, but that is solely based on their stated capabilities as they relate to what *I* want them to do (i.e., my primary goal is the long-term preservation of all my data). There is also a 4th one to consider, which is OpenMediaVault. All four packages support NAS functionality, all are free to use, all support use as a media library, and all are based on some flavor of Linux or BSD. Amahi is based on Fedora at this time, but it looks like they may be moving to Ubuntu (which is based on Debian) in the future. Both FreeNAS and NAS4Free are based on FreeBSD, and OMV is based directly on Debian. All of these OSes are widely used, very stable, and well-supported, but you may want to make sure that all your hardware is 100% supported by checking the hardware compatibility pages for each respective OS. You're using an off-the-shelf server, so the hardware Dell used is most likely widespread enough that it will be supported, but there are times when e.g. your RAID or wireless card won't work (lookin' at you, Debian!). Granted, your server probably won't have a wireless card, so that's less of an issue.

Anyway, take a look over the feature pages for the server software, and decide what all you want the box to do.
Amahi
FreeNAS
NAS4Free
OpenMediaVault
There are many, many more Open Source NAS options out there (e.g. Rockstor, openfiler, and more) but the above four seem to be regarded as the easiest ones for non-techies to set up.

ONE VERY IMPORTANT THING regarding your NAS storage pool: One of the things the Synology and QNAP boxes manage for you that can be a pain in the ass with the roll-your-own server model is adding disks to expand the size of your storage pool. If you're planning to add more drives to your server later to grow its storage, make sure you research the process NOW to see whether it's something you'll be comfortable doing later. FreeNAS' and NAS4Free's reliance on the ZFS file system, for instance, means you can't just tell the storage pool to be larger by including the new disk: you have to copy your data elsewhere, add your new drive, rebuild the pool at the new size, and then copy the data back...which can be a pain.
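To make that dance concrete, here's a rough, untested Python sketch of the copy-out/rebuild/copy-back process. The pool name "tank", the backup path, and the da0-da3 device names are all made-up examples, and the exact zfs/zpool invocations will vary with your setup, so treat it as an outline rather than a recipe:

    # Untested sketch of the "copy out, rebuild bigger, copy back" dance.
    # Pool name, backup path, and device names are hypothetical examples.
    import subprocess

    def run(cmd):
        """Echo a shell command and stop the script if it fails."""
        print("+", cmd)
        subprocess.run(cmd, shell=True, check=True)

    # 1. Snapshot the pool and stream everything to backup storage.
    run("zfs snapshot -r tank@migrate")
    run("zfs send -R tank@migrate > /mnt/backup/tank.zfs")

    # 2. Destroy the old pool and recreate it with the new disk included
    #    (a four-disk raidz1 instead of three; da0..da3 are example devices).
    run("zpool destroy tank")
    run("zpool create tank raidz1 da0 da1 da2 da3")

    # 3. Restore the data into the new, larger pool.
    run("zfs recv -F tank < /mnt/backup/tank.zfs")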

--Patrick
 
I hear your server came early, but you still have some time before you can play with it.
Are you going to set it up directly? Or are you going to virtualize?

—Patrick
 

Dave

Staff member
Don't know yet. Probably set it up directly. Right now I'm going through the documentation for NAS4Free. I'm still partial to Amahi, as all I'm looking to do is host the Plex server for now.
 
Ok. I didn’t know whether or not you planned to share that machine’s extra processing power across any other purposes (by running several different VMs, for instance).

—Patrick
 

Dave

Staff member
Nope. And after booting it up and finding that Windows Server 2016 is already installed, I'm looking to see if there are other options besides the ones mentioned. I can't really do much else at this time because it came packaged with 4 Seagate Cheetah 15K.7 drives, which add up to a whole 1.2 GB. With Windows on it, that takes it down to about 800 GB. Not even close to what I need for space. So before I do anything I need to get higher capacity drives.
 
before I do anything I need to get higher capacity drives.
Sounds like 4x300 which would be fast storage, but yes 1.2TB is not very large these days, especially if you enable any kind of redundancy (which would bring you down to 900GB max).
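To put rough numbers on that, here's a quick Python sketch of the usable-capacity math for the common RAID levels, assuming equal-size drives and ignoring filesystem overhead (the 4x300GB figures are this server's drives; everything else is illustrative):

    # Rough usable-capacity math for common RAID levels (equal drives).
    def usable_gb(drives: int, size_gb: int, level: str) -> int:
        if level == "raid0":
            return drives * size_gb        # striping: no redundancy
        if level == "raid1":
            return size_gb                 # mirror: one drive's worth
        if level == "raid5":
            return (drives - 1) * size_gb  # one drive's worth of parity
        if level == "raid6":
            return (drives - 2) * size_gb  # two drives' worth of parity
        if level == "raid10":
            return drives // 2 * size_gb   # mirrored pairs, striped
        raise ValueError(level)

    for level in ("raid0", "raid5", "raid6", "raid10"):
        print(level, usable_gb(4, 300, level), "GB")
    # raid0 1200 GB, raid5 900 GB, raid6 600 GB, raid10 600 GB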
However, the Cheetah drives are SAS (and probably SAS-2 at that), not the more common SATA, so then the choice becomes one of whether to spring for SAS replacements (which will be a little more reliable) or SATA ones (which will be a little bit cheaper). Technically the SATA drives would probably be slower as well (SAS supports backward compatibility with SATA-2 but that would mean 300MB/s v. SAS’ 600MB/s) but I doubt that’ll limit you.
Also before plunking down cash for larger drives I would make sure your controller supports physical drive sizes > 2 TB. Some don’t (and some do but only after a firmware update).

—Patrick
 

Dave

Staff member
Good to know. I'm actually going to wait until such time as I have the $$ to do so. My discretionary spending is done for now. I'm going to have to piecemeal from here on out so we don't run out of money.
 

Dave

Staff member
I do need to update my controller. Damn it. That's another $70 or so (looking at the H700 controller).

Sigh.
 

GasBandit

Staff member
Sounds like 4x300 which would be fast storage, but yes 1.2TB is not very large these days, especially if you enable any kind of redundancy (which would bring you down to 900GB max).
However, the Cheetah drives are SAS (and probably SAS-2 at that), not the more common SATA, so then the choice becomes one of whether to spring for SAS replacements (which will be a little more reliable) or SATA ones (which will be a little bit cheaper). Technically the SATA drives would probably be slower as well (SAS supports backward compatibility with SATA-2 but that would mean 300MB/s v. SAS’ 600MB/s) but I doubt that’ll limit you.
Also before plunking down cash for larger drives I would make sure your controller supports physical drive sizes > 2 TB. Some don’t (and some do but only after a firmware update).

—Patrick
It's 4x300 in a RAID 5, which comes to 900GB usable, but if any one of the drives dies, you won't lose data, and read access is slightly faster than a single drive.
 
I do need to update my controller. Damn it. That's another $70 or so (looking at the H700 controller).
Are you sure the one you have can’t just be flashed to a newer firmware? Never mind, it seems they can’t. Well, it looks like Newegg has the H700 refurbished for < $50.
It's 4x300 in a RAID 5, which comes to 900GB usable, but if any one of the drives dies, you won't lose data, and read access is slightly faster than a single drive.
Yeah, but he said “1.2GB [sic]” which I assumed meant his didn’t come RAIDed.

—Patrick
 
I've had a Windows box under my desk for a few years now, acting as a file server and Plex server. All it does is hold files and run Plex. And it's in a mid-tower sized case that I repurposed from spare parts in the basement.

Replaced it a few days ago with this: Terramaster F4-420, and doubled my drive space running RAID 10. This thing runs Plex, so after a few days of testing it out, I have retired my old server. I'll be grabbing the drives out of it and moving them into my current desktop, because, why not?

Considering that it took me about 6 years to fill up 75% of the original drive space I had, I think I'm good for a while :)
 
Why RAID 10 (50% of total capacity available)? Why not RAID 5 (75% of total capacity available)? Do you expect that high a level of random accesses? Or is the box not beefy enough to do the parity calcs?

--Patrick
 
The box will do RAID 5. But with RAID 5 you can survive one drive failure, while with RAID 10 you can survive two drive failures, if the failures are in different sub-arrays. RAID 10 also has faster read and write times, which seemed more desirable since I have 3 Rokus in the house, all of which can be playing movies simultaneously.
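If it helps to see that "different sub-arrays" condition concretely, here's a toy Python sketch enumerating every possible two-drive failure in a 4-drive RAID 10 (the drive labels are arbitrary):

    # Which two-drive failures does a 4-drive RAID 10 survive?
    # Mirror pairs: (A1, A2) and (B1, B2). Losing both halves of
    # the same pair kills the array; anything else is survivable.
    from itertools import combinations

    pairs = [("A1", "A2"), ("B1", "B2")]
    drives = [d for pair in pairs for d in pair]

    for failed in combinations(drives, 2):
        dead = any(set(pair) <= set(failed) for pair in pairs)
        print(failed, "array lost" if dead else "survives")
    # Four of the six possible two-drive failures survive; only
    # losing both drives of one mirror pair takes the array down.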

As mentioned previously, it took me 6 years to fill up one drive that wasn't in a RAID to 75%. I now have double that storage available, and so it looks like it'll take me (if usage stays constant) over 10 years to fill up the new available space. I expect that by then, drive sizes will have gone up, and prices down. So if I come close to filling the storage, it won't be too much of an issue to upgrade the drives.
 
Yep, those are the kinds of things I was getting at. I figured you were doing it for a reason, I just didn't know which reason.

For those who think of RAID as a mystery:

RAID5 = 4 equal drives ABCp, where you get a capacity of A+B+C and p holds the parity info (magic math that helps reconstruct what's on ABC; see the sketch just below). The benefit is that if any one drive fails, the rest can keep you going with no data loss, albeit at noticeably reduced speed.

RAID10 = This is actually a combination of RAID1 and RAID0 (it's "raid one-zero," not "raid ten"), 4 equal drives arranged AaBb, but you only get a capacity of A+B. Much like RAID5, any single drive can fail and you'll still have everything, without any speed penalty; you just can't lose both drives in a pair (even if both in the other pair survive). It is noticeably faster than RAID5, but with that lower capacity.
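And here's the sketch promised above: conceptually, RAID5's parity "magic math" is just XOR, so any one lost drive can be rebuilt from the survivors. A toy Python demonstration, with short byte strings standing in for drives:

    # Toy RAID5 parity demo: parity is the XOR of the data drives,
    # so any one missing drive can be rebuilt from the other three.
    def xor(*blocks: bytes) -> bytes:
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                out[i] ^= b
        return bytes(out)

    A, B, C = b"cat!", b"dog!", b"emu!"   # three "data drives"
    p = xor(A, B, C)                      # the parity "drive"

    # Drive B dies. Rebuild its contents from A, C, and the parity:
    rebuilt_B = xor(A, C, p)
    assert rebuilt_B == B                 # b'dog!' -- good as new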

...I assume all the forum regulars know about RAID levels, but we did get some new non-spam members recently, so it can't hurt.

--Patrick
 
You assume wrongly sir!
Well, all the forum regulars who care about the stuff in this thread, at least. ;)

I wouldn't even bother with the disclaimer, but I get hit so often with complaints about how I hafta go and restate the obvious, soo...

--Patrick
 
For anyone still following this conversation, that's why "RAID 6" was invented. RAID 6 is just RAID 5 but with two parity drives instead of just one, meaning any two drives can fail without data loss.
All the other stuff about RAID 5 still applies, though; you just get the extra safety net of an additional drive.
I didn't care about trying to eke an extra 4tb out of the setup by going raid 5.
I'm still putting together what'll be very similar to a RAID 6 (3 data drives, 2 parity drives) but each one is only 2TB (6TB total storage available) because I want to build a system as cheaply as possible (5 HDD @ $40 ea plus $250 for the computer) that lets me learn how all of this works, how to configure and administer it, how all the options work, whether I should virtualize or run bare metal, etcetera BUT be capable and "real" enough that I can actually put my faith in it. I'll build myself a premium storage tower stuffed with 8TB drives later, once I level up on the training NAS (though I probably won't get as silly as this...not at first, anyway).

--Patrick
 
So you say you have accumulated a somewhat largish collection of media files, but you still want to game or model seismic data?

(image: screenshot of an absurdly large multi-drive storage build)

source

--Patrick
 
I don't think that includes the drives, either.
Current going rate for a 10TB drive = ~US$250, so 11 of those would add almost an additional US$3k.

--Patrick
 

Dave

Staff member
Yeah but just look at the system you'd have for 1.1 PB.

If I ever win the lottery I'd click that link again.
 
https://www.baarf.dk/BAARF/RAID5_versus_RAID10.txt
That is a more in-depth description of why RAID 10 is superior to RAID 5.
Here's another, more recent article explaining why RAID10 is superior in performance and reliability (but NOT capacity) to RAID5/6.
Understanding RAID: How performance scales from one disk to eight

But really, I'm bumping this thread to talk about so-called "shingled" (SMR) disks, how the technology is now showing up in drives specifically labeled as "NAS-ready" or "NAS-optimized" (especially the WD Red line), and how this is bad, bad, bad.
...that 2TB-6TB “NAS” drive you’ve been eyeing might be SMR
and
Network Attached Storage and SMR don’t mix

--Patrick
 
Further confirmation about the SMR "scandal," courtesy of ServeTheHome.
As a single drive, the SMR ones are a little slower than normal, but otherwise serviceable.
However, here's why they recommend not using them in NASes. One of the things that can happen in a NAS is the failure of a drive, requiring a fresh one to be substituted. The NAS then has to shuffle data around to rebuild or "resilver" the array before another drive fails and ruins the entire array. And what happens when you try doing this with an SMR disk?

(chart: RAIDZ rebuild times, CMR drives vs. the SMR drive)


...in short, it takes longer. How much longer? The other three disks (all CMR) finished in about 15-16hrs. The SMR disk? It finished in nine and a half DAYS, roughly fourteen times as long. According to the article, this concerned them so much that they tried a second drive just to make sure. That means this one test probably took almost a month to complete.

So yeah, don't put SMR disks into a NAS. It's a bad idea and leaves open a wiiiide window of time where an additional disk failure could toast the entire array.

--Patrick
 
Well, after the lawsuits were filed and spotlights were shone deep into WD's Red line, they have decided to change the labeling scheme so you can tell which drives use which technology.
And they have decided to do this NOT by relabeling the SMR disks, but instead by renaming the CMR models as their new "WD Red 'Plus' and 'Pro'*" lines.
...in other words, they took the existing Red CMR drives, the ones that used to just be called plain ol' "Red" drives, and gave them a new "Plus" or "Pro"* suffix. I have not yet looked to see whether they also elevated the pricing for their "new" Plus/Pro* models, but I am not ruling out the possibility. Caveat emptor, and all that.

Linus decided to weigh in on this as well, and does a decent job of explaining why people really do have a legitimate reason to be upset about this:

(embedded video)
--Patrick
*Edited after discovering that the "Pro" line already existed; only the "Plus" name is new. Doesn't really change the situation, though.
 