/g/, tell me a reason not to get one of these along with 16GB of ECC RAM and use it to build a FreeNAS/ZFS home server. They're on Amazon for 240 yurodollars, and the 16GB of ECC RAM would cost me 150 bucks on top.
In comparison, a motherboard (excluding processor!) with the C226 chipset (needed for ECC RAM) already costs 180 euros, and one with a pre-installed quad core atom C2550 is 300.
Does it even have hardware RAID?
Either way, GigE will bottleneck the shit out of you so any performance improvements after that will be wasted. My D510 supermicro board doesn't even get bottlenecked on the CPU regardless of whether I use Samba or any of the possible LIO backstores.
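To put numbers on the GbE bottleneck claim, here's a back-of-envelope sketch. The overhead percentage and HDD speed are ballpark assumptions, not measurements:

```python
# Rough math on why gigabit ethernet caps a home NAS.
link_bits_per_sec = 1_000_000_000      # GbE line rate
protocol_overhead = 0.06               # assume ~6% lost to ethernet/IP/TCP framing
usable_mb_per_sec = link_bits_per_sec * (1 - protocol_overhead) / 8 / 1e6

single_hdd_mb_per_sec = 150            # assumed sequential speed of one modern 7200rpm drive

print(f"GbE usable:         ~{usable_mb_per_sec:.0f} MB/s")
print(f"One HDD sequential: ~{single_hdd_mb_per_sec} MB/s")
# One disk can already saturate the link, so a faster CPU or array buys nothing.
```

Which is why throwing money at the array or CPU doesn't help until you move past 1 GbE.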
It uses the RAM to prevent filesystem corruption, which is why ECC is crucial - if some filesystem related shit gets corrupted in RAM, the filesystem itself will get fucked.
It does have HW RAID, but FreeNAS can't use it. I'd just go with ZFS' RAID.
And I'm not really worried about data throughput as much as data integrity - as long as it's fast enough to stream 1080p videos from it and do backups to it etc I'll be fine.
By HP support, you mean some kind of paid support plan? What benefit would I get from these updates once my system is up and running?
WTF, just use normal Linux and set up whatever sharing you want then. My home server gets along comfortably with 2 GB of cheapass RAM and runs torrents and an IRC bot too. Then again I'm using ext4 since I don't really need a filesystem that can backup multiple versions of my movies, porn and isos.
>It does have HW RAID, but FreeNAS can't use it
Then there's a 99% chance it's not hardware RAID, it's fakeraid.
I'm using a Supermicro board that I got used (with 4GB of ram included) for $65. There's not much reason to pay for more than that when your GbE is going to be the bottleneck.
This. If you want easy redundancy and automatic failover, toss in $30 for an older HW RAID card.
>It uses the RAM to prevent filesystem corruption, which is why ECC is crucial - if some filesystem related shit gets corrupted in RAM, the filesystem itself will get fucked.
What happens if power is lost? Do you need a UPS as well to use ZFS safely?
You're probably right about the fake RAID. However, ZFS is really appealing to me because of the filesystem integrity thing and I have no idea where I could find used ECC compatible motherboards. I guess I could scavenge the dumpsters at my university, but I have no idea how often they throw out server grade hardware.
No, I think it would still be fine. I'm not an expert but from my research, you can pull the plug at any point and the FS will still be good, you just might lose some transactions that were still in RAM only.
no, zfs can survive power outages
problem with non-ecc ram in zfs is that filesystem operations (e.g. checksumming) happen in ram
so if you get a bit flip you're fucked, as you're going to write corrupted data to the disk
keep in mind zfs was designed for enterprise systems where ecc is the standard
No, it's just that there's no way to detect errors introduced by faulty RAM or cosmic-ray bit flips other than with ECC RAM. So the whole point of running this super safe filesystem becomes kind of moot if you still can't trust your RAM to return the data intact.
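The silent-corruption case is easy to demo. A toy sketch using CRC32 standing in for ZFS's real checksums (fletcher4/sha256) — the point is only *when* the checksum gets computed:

```python
import zlib

def write_block(data: bytes) -> tuple[bytes, int]:
    # ZFS-style: checksum is computed over whatever is in RAM at write time.
    return data, zlib.crc32(data)

good = b"important filesystem metadata"

# A bit flip *after* the checksum is stored gets caught on read...
stored, csum = write_block(good)
flipped_on_disk = bytes([stored[0] ^ 0x01]) + stored[1:]
assert zlib.crc32(flipped_on_disk) != csum   # mismatch detected -> self-healing

# ...but a flip in RAM *before* checksumming is invisible: the corrupted
# data gets a perfectly valid checksum and is written out as "verified".
flipped_in_ram = bytes([good[0] ^ 0x01]) + good[1:]
stored2, csum2 = write_block(flipped_in_ram)
assert zlib.crc32(stored2) == csum2          # checksum happily matches bad data
```

ECC catches the flip before the checksum is ever computed, which is the whole argument.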
They're more expensive due to being newer, but the Supermicro A1SRM lineup has atom C2000s and supports ECC.
I can't wait for the X10 line, some of them have onboard 10GbE which is way overdue.
I've got this with the standard hardware configuration. 2x WD Red 4TB in Linux software RAID 1, plus a Crucial M4 64 GB SSD running CentOS 7.
Ask me anything you need to know OP.
/g/ee, what's a good place to read about servers and become less of a fucking retard about 'em?
I also plan on buying this with 16 Gig RAM and run ESXi on for FreeNAS and a bunch of other servers.
I'm kind of turned off by the fact that it has 4 drive bays but doesn't support RAID 5...
Servers are just like most other computers but they are designed to have maximum uptime and fault tolerance (or should be anyway) in most cases. Hence dual PSUs, hardware RAID, dual ethernet (or more) connections, etc.
You can always do soft RAID in LVM or with mdadm in Linux. Not a big fan of LVM soft RAID though because it seems buggy right now, i.e. setting up an LV with "lvcreate ... -m1 --type raid1" or something like it.
There's not that much to learn. They're just normal computers for the most part. The only difference is that for multiprocessor setups, the CPUs themselves need to support the level of MP you're using (2P, 4P, or rare 8P setups).
RAM is sometimes different. ECC RAM can detect and correct most RAM errors (typically correcting single-bit flips and detecting double-bit ones). Normal desktop RAM is unbuffered, and some server gear takes unbuffered DIMMs too; other servers use registered DIMMs, and some use fully buffered DIMMs.
SAS is often used instead of SATA on servers, but you can plug SATA drives into SAS controllers (but not vice versa). Proper RAID controllers do all of the RAID on the hardware level, and only expose the RAID device to the system. Fakeraid (which is what you generally get when a motherboard supports "RAID") relies on the (windows) driver to do the RAID, so it may as well not exist.
Servers often have lower-level management interfaces that bypass the OS, like IPMI or several vendor-specific ones.
The easiest way to learn shit is to get a used server (not ancient, try about ~5 years old) and play around with it.
It doesn't support any RAID. It's just fakeraid.
I never understand why people don't at least wrap the outside of a case in some tinfoil to prevent cosmic radiation?
It's HP's new look and I fucking hate it. Looking to add new servers to my racks at work and they all have the cosmic ray shit. The fucking racks already have doors, now you're just putting shit in my way when I need to access the server.
What if I bought it to get cosmic radiation?
Behold the fancy foil on many of them. I can take it off, but the point is it does nothing except get in the way in a rack. Why bother including it at all? Plus you're obviously going to pay more just because it's one more thing they have to make and include with it.
bezel isn't standard option
>photo on hp website displays the server you're going to order
Oh yeah, for sure, but you often have to use their site to look up shit.
I'm not responsible for buying the servers at my shop, but I doubt we specify if we want doors/bezel shit or not. We just get it.
>mfw hp and ibm is going down the shitter anyway :^)
IBM hardware has always been solid, I think their x86 division will decline since it's been sold off to Lenovo.
I have no idea how they are going to survive with 'just' their POWER/i series/big iron businesses; those seem like dead ends, as I doubt new customers are going to throw money down the drain by going proprietary IBM (or Oracle/SPARC)
HP enterprise looks bleak as well, gross mismanagement of the company in the last 10 years
What's left? Dell? Fujitsu? Supermicro?
I have no idea how good supermicro support is though
then again, with the saved support costs you could buy another 2 clone boxes as backups
dell's vrtx looks pretty sexy
>In comparison, a motherboard (excluding processor!) with the C226 chipset (needed for ECC RAM) already costs 180 euros, and one with a pre-installed quad core atom C2550 is 300.
A C206 board with an i3 is plenty sufficient & a lot cheaper. And yes, it does ECC.
Why not? It's cheap enough, plus then I can run a few VMs on it too.
my HS has 64gb of ram because I'm not a poorfag.
OP here, I was just browsing ebay for used servers and happened upon this thing:
It looks pretty neat, the only thing I'm concerned about is the noise it probably makes. Does anyone have experience with these processors in 1U cases? Are they incredibly loud?
That's too bad, I'm planning to have this set up in my mom's office since that's where the router sits. I just submitted a bid for a Dell T20 though, those seem even better than the HPs since they have a Xeon. If I could snag it for under 250 euros, that would be nice.
I have one. It's not running ZFS though.
12x 4TB Hitachi (4 internal, 8 external), SmartArray P412, 2x 1TB mSATA SSD on the onboard SATA (using 25SAT22MSAT). + 4x 5TB USB3
Each set of 4 drives is in a RAID 5. Storage Spaces tiered storage is fucking awesome.
I had a 1U in my basement for a few months. When they turn on they sound like a jet starting to take off. After a few minutes when it boots up it's borderline acceptable. I had to put the TV loud to compensate and definitely wouldn't want it in the same room if I'm having a normal conversation with people. I could tell it was on in the next floor above.
Do what I did and just buy server parts and use a regular micro ATX case. Much quieter and more compact as well as having more storage capability than a 1U. I have a 1TB WD RED boot drive and 2x 4TB Hitachi NAS data drives plus 2x 4TB external desktop drives that mirror the internal ones as a backup.
In a month or so I'm going to buy a used RAID card on ebay and buy two or three more 4TB NAS drives and make a RAID 6 for a total of 12TB of usable storage and only keep the most important stuff backed up on the two externals.
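The capacity math for that plan, for anyone following along (drive counts per the post above; the helper name is just for illustration):

```python
def usable_tb(n_drives: int, drive_tb: float, parity_drives: int) -> float:
    # RAID5 sacrifices one drive's worth of space to parity, RAID6 two.
    return (n_drives - parity_drives) * drive_tb

# 2 existing 4TB NAS drives + 3 new ones in RAID6:
print(usable_tb(5, 4, parity_drives=2))   # -> 12.0, the 12TB figure above
# with only 2 new drives it'd be:
print(usable_tb(4, 4, parity_drives=2))   # -> 8.0
```

So hitting 12TB usable in RAID 6 means buying three drives, not two.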
I'm considering using RAID 5 instead. Had any bad experiences with RAID 5 yet?
I've also thought about getting crashplan. Do you think there's a big enough possibility even if I use my own private key that they'll know about my loli and other hentai and report me?
No, but these are just storage pods. The magic of storage spaces / zfs comes from jbod. Given that I have a hardware RAID controller, I'm offloading the RAID5 parity calc to that. Rebuild priority is set to very high, muh tiered storage will drop priority on that pod if it has to rebuild.
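That parity calc the controller offloads is just a byte-wise XOR across the data stripes. A minimal sketch (toy stripe sizes, real arrays also rotate which disk holds parity):

```python
from functools import reduce

def raid5_parity(stripes: list[bytes]) -> bytes:
    # RAID5 parity: XOR each byte position across all data stripes.
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*stripes))

d0, d1, d2 = b"\x0f\x0f", b"\xf0\xf0", b"\xaa\x55"
p = raid5_parity([d0, d1, d2])

# if the disk holding d1 dies, XOR-ing the survivors with parity rebuilds it
rebuilt = raid5_parity([d0, d2, p])
assert rebuilt == d1
```

Same property is why a rebuild has to read every surviving disk end to end, which is where the rebuild-priority setting comes in.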
So what's wrong with RAID5 + offsite backup?
ZFS is for corporate deployments IMO, unsuited for home use because of cost and complexity.
Not to mention Btrfs will replace it very soon. If you thought Linux was obscure, you haven't met *BSD
Yes, the pods are presented to storage spaces as jbod. Basically 3 12TB spindles. storage spaces doesn't do any type of RAID with them.
> None of these are relevant in home usage.
99% of NIC bonding implementations (essentially everything except linux's NIC bonding set to balance-rr mode) will keep track of various aspects of the packets going across the bonded link so that any two packets going from A to B will always be sent over the same link.
balance-rr doesn't do that, which shows why the flow pinning exists in the first place. If you allow a flow's packets to be split across links, they can arrive out of order, which best case makes the receiving system do extra work reordering them, and worst case fucks things up. I've heard that 4x GbE with balance-rr only gets about 2.3 Gb/s of TCP throughput.
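The flow pinning itself is simple. A toy sketch of a layer-3+4 style hash like most bonding/LACP modes use (Python's built-in hash standing in for the deterministic header hash real drivers use):

```python
def pick_link(src_ip: str, src_port: int, dst_ip: str, dst_port: int,
              n_links: int) -> int:
    # Hash the flow tuple, then pick a link by modulo. Every packet of the
    # same TCP flow hashes identically, so it always rides the same link:
    # no reordering, but also never more than one link's worth of speed.
    flow = (src_ip, src_port, dst_ip, dst_port)
    return hash(flow) % n_links

# every "packet" of this flow lands on the same link out of 4
links_used = {pick_link("10.0.0.1", 445, "10.0.0.2", 50000, 4)
              for _ in range(1000)}
assert len(links_used) == 1
```

That's why bonding helps aggregate throughput across many clients but a single transfer between two boxes stays at 1 Gb/s.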
MicroServer G8s. Last test config for both servers:
Xeon 1220L v2
2x 1TB mSATA SSD (RAID1)
4x Hitachi 2TB (JBOD)
Server 2012 R2 Standard
Switch: HP PS1810-8G - Jumbo frames enabled
I pushed a sync of my MSDN downloads from one to the other. About 600GB. Yes, I'm aware that I'm still in the SSD storage tier.
Because you can get a 8GB DL360 G5 2x dualcore for $110 and having them load another 8GB will be like $20? (they have 2GB sticks for $2.65 Buy It Now)
Better deal is to get a 2x quad-core for $140. 12MB cache goodness, and you can run it 1P for lower power draw and always have the ability to go back to 2P to get some crunching power.
>Seconded. 1U servers are VERY loud.
No reason to keep them stock unless you're an uber nerd who actually has a rack of 'em, though. Take the top off, rip out the fans, and jury rig up some 80 or 120mm's. You can make ducts with cardboard and duct tape. (Hell, my main rig is being funneled cool air down a National Geographic cover.)