
To what end? If I can't move data from my NAS to my PC any faster than my gigabit connection allows, why would I want faster drives?

Get a faster connection!!


Perfectly adequate for data storage. I'm using "green" drives and I'm saturating gigabit easily.
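(For rough numbers: gigabit tops out around 1000 Mb/s ÷ 8 ≈ 125 MB/s before protocol overhead, and even a single modern 5400 RPM Green manages roughly 100-140 MB/s sequential, so one drive can already keep the link busy.)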

 

 

All I run is Greens, lots of them :P

Quick question regarding those Greens: Do any of you know if the issues with frequent head parking due to their power saving features have been solved in Linux? I'm looking to buy some drives early next year, and if those issues no longer persist I'd probably go with the Greens.

BUILD LOGS: HELIOS - Latest Update: 2015-SEP-06 ::: ZEUS - BOTW 2013-JUN-28 ::: APOLLO - Complete: 2014-MAY-10
OTHER STUFF: Cable Lacing Tutorial ::: What Is ZFS? ::: mincss Primer ::: LSI RAID Card Flashing Tutorial
FORUM INFO: Community Standards ::: The Moderating Team ::: 10TB+ Storage Showoff Topic


Quick question regarding those Greens: Do any of you know if the issues with frequent head parking due to their power saving features have been solved in Linux? I'm looking to buy some drives early next year, and if those issues no longer persist I'd probably go with the Greens.

Never had any issues...
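For what it's worth, the parking behaviour can also be tamed from Linux itself. A rough sketch, assuming the idle3-tools package is installed and /dev/sdX stands in for one of the Greens (illustrative only, not commands taken from any build in this thread):

# read the current idle3 (head-parking) timer on the drive
idle3ctl -g /dev/sdX
# disable parking entirely (or set a longer timeout with -s instead);
# WD drives only pick up the new value after a full power cycle
idle3ctl -d /dev/sdX
# watch the SMART load-cycle counter afterwards to confirm it has stopped climbing
smartctl -A /dev/sdX | grep Load_Cycle_Count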

 

 

 

Get a faster connection!!

Pay for it for me and I'll get 100Gbps :)


 

Upgraded. Bigger case, more drives and more RAM.

 

Original post: http://linustechtips.com/main/topic/21948-ltt-10tb-storage-show-off-topic/?p=662131

 

Hardware

CASE: Supermicro SC846BE26-R920B

PSU: Built-in redundant 920W

MB: Supermicro X9SCM

CPU: Xeon E3-1220

RAM: 32GB ECC

RAID CARD: IBM M1015 flashed to IT mode; the chassis backplane has built-in SAS expanders.

SSD: Samsung 840 Pro 256GB

HDD: 10x Seagate 4TB, 6x Hitachi 3TB, 4x Seagate 3TB

Software and Configuration:

Gentoo Linux

10x 4TB mdadm raid6

10x 3TB raidz2

Trying ZFS on Linux after seeing a number of ZFS builds in this thread. Thanks for sharing.
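For anyone wanting to copy the layout, this is roughly how the two arrays go together — an illustrative sketch with placeholder device names, not the exact commands used for this build:

# 10-disk mdadm RAID6 with a 64K chunk to match the array below (device names are placeholders)
mdadm --create /dev/md0 --level=6 --raid-devices=10 --chunk=64 /dev/sd[g-p]1
mkfs.xfs /dev/md0    # whichever filesystem you prefer goes on top; XFS shown purely as an example
# 10-disk raidz2 pool, addressing disks by ID so device letters can shuffle freely
zpool create zpool raidz2 /dev/disk/by-id/ata-DISK{01..10}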

 

Usage:

Host: NFS exports, BIND, DHCP, radius, iSCSI target, KVM

VM1: LAN to WAN routing, ddclient for afraid.org DDNS

VM2: VPN, LAN to VPN routing, web proxy, BIND slave

VM3: Icecast, wireless LAN controller (unifi), another web proxy

Not much difference here, but I'm considering moving the master BIND server from the host to VM1. During the upgrade I had planned to migrate VM1 to my desktop to handle the routing there, so I could keep watching YouTube during the build, but I forgot that the primary DNS server was on the host. I lost name resolution, and working around it was a bit annoying.

 

Backup:

None :(

 

Photos:

Q1IPsDB.jpg

This was during the build. Mostly done. I ended up moving the M1015 up one slot, since the slot it was in is only x4.

jBi86dX.jpg

Another shot during the build, trying to figure out cable management. Had some success with the SFF-8087 cables.

BWKhOKH.jpg

fs1 ~ # mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri May 24 00:09:10 2013
     Raid Level : raid6
     Array Size : 31256123904 (29808.16 GiB 32006.27 GB)
  Used Dev Size : 3907015488 (3726.02 GiB 4000.78 GB)
   Raid Devices : 10
  Total Devices : 10
    Persistence : Superblock is persistent

    Update Time : Thu Nov  7 23:18:49 2013
          State : clean
 Active Devices : 10
Working Devices : 10
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : vm1:0
           UUID : 5af1fdb6:01fa8c99:97c064ff:2659caeb
         Events : 14986

    Number   Major   Minor   RaidDevice State
       0       8       97        0      active sync   /dev/sdg1
       1       8      241        1      active sync   /dev/sdp1
       2      65        1        2      active sync   /dev/sdq1
       3      65       17        3      active sync   /dev/sdr1
       4      65       49        4      active sync   /dev/sdt1
       5       8      193        5      active sync   /dev/sdm1
       6      65       33        6      active sync   /dev/sds1
       7       8      113        7      active sync   /dev/sdh1
       8       8      209        8      active sync   /dev/sdn1
       9       8      225        9      active sync   /dev/sdo1

fs1 ~ # zpool status
  pool: zpool
 state: ONLINE
  scan: none requested
config:

	NAME                                            STATE     READ WRITE CKSUM
	zpool                                           ONLINE       0     0     0
	  raidz2-0                                      ONLINE       0     0     0
	    ata-Hitachi_HDS723030ALA640_MK0311YHG30EXA  ONLINE       0     0     0
	    ata-Hitachi_HDS5C3030ALA630_MJ1311YNG6GE1A  ONLINE       0     0     0
	    ata-ST3000DM001-1CH166_Z1F30325             ONLINE       0     0     0
	    ata-ST3000DM001-9YN166_Z1F0DG1S             ONLINE       0     0     0
	    ata-ST3000DM001-9YN166_W1F0XF7E             ONLINE       0     0     0
	    ata-ST3000DM001-9YN166_W1F056KA             ONLINE       0     0     0
	    ata-Hitachi_HDS5C3030ALA630_MJ1321YNG12Y1A  ONLINE       0     0     0
	    ata-Hitachi_HDS5C3030ALA630_MJ1321YNG13M1A  ONLINE       0     0     0
	    ata-Hitachi_HDS5C3030ALA630_MJ1311YNG2DZGA  ONLINE       0     0     0
	    ata-Hitachi_HDS5C3030ALA630_MJ1311YNG2RZ9A  ONLINE       0     0     0

errors: No known data errors

Does it sound like an airplane when it's on? :P I've been thinking about a Supermicro case for my home NAS as well, but I'm a bit worried about the noise. I have several Supermicro servers in a rack in a datacenter and boy, they're loud. Did you disconnect / replace some of the fans? If so, what are the temperatures like? Thanks!


Does it sound like an airplane when it's on? :P I've been thinking about a Supermicro case for my home NAS as well, but I'm a bit worried about the noise. I have several Supermicro servers in a rack in a datacenter and boy, they're loud. Did you disconnect / replace some of the fans? If so, what are the temperatures like? Thanks!

I'm using all stock fans and letting the motherboard control fan speed, and it's barely audible in a silent room... through a wall :P

 

Seriously, I would not want to be in the same room as this server for any length of time, but a single wall or door will pretty much block all noise.

 

Below are some temps and fan-speed data I have at the moment. I don't have the same drive configuration as in my last post, since I'm shuffling drives and data around for a second machine I'll be building soon, to finally set up a 1:1 backup of all data.

 

All drives are pretty much idle. I can post temps during a ZFS scrub when I have that going at a later time.
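Kicking one off when the time comes is just a matter of the following ("zpool" being the pool name from the status output above; shown for reference, not run here):

zpool scrub zpool       # start a scrub of the whole pool
zpool status zpool      # check scrub progress and estimated time remaining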

coretemp-isa-0000
Adapter: ISA adapter
Physical id 0:  +33.0°C  (high = +74.0°C, crit = +94.0°C)
Core 0:         +29.0°C  (high = +74.0°C, crit = +94.0°C)
Core 1:         +28.0°C  (high = +74.0°C, crit = +94.0°C)
Core 2:         +33.0°C  (high = +74.0°C, crit = +94.0°C)
Core 3:         +29.0°C  (high = +74.0°C, crit = +94.0°C)

nct6776-isa-0a30
Adapter: ISA adapter
Vcore:         +0.78 V  (min =  +0.60 V, max =  +1.49 V)
in1:           +1.83 V  (min =  +1.62 V, max =  +1.99 V)
AVCC:          +3.36 V  (min =  +2.96 V, max =  +3.63 V)
+3.3V:         +3.36 V  (min =  +2.96 V, max =  +3.63 V)
in4:           +1.54 V  (min =  +1.35 V, max =  +1.65 V)
in5:           +1.27 V  (min =  +1.13 V, max =  +1.38 V)
3VSB:          +3.34 V  (min =  +2.96 V, max =  +3.63 V)
Vbat:          +3.18 V  (min =  +2.96 V, max =  +3.63 V)
fan1:         2738 RPM  (min =  300 RPM)
fan2:         2566 RPM  (min =  300 RPM)
fan3:         1355 RPM  (min =  300 RPM)
fan4:         3000 RPM  (min =  300 RPM)
fan5:         2662 RPM  (min =  300 RPM)
SYSTIN:        +33.0°C  (high = +75.0°C, hyst = +70.0°C)  sensor = thermistor
CPUTIN:        +30.5°C  (high = +80.0°C, hyst = +75.0°C)  sensor = thermistor
AUXTIN:         +2.0°C    sensor = thermistor
PECI Agent 0:  +38.5°C  (high = +95.0°C, hyst = +92.0°C)
cpu0_vid:     +0.000 V
intrusion0:   ALARM
intrusion1:   ALARM

2013.12.23.19:31:47 /dev/sda: SAMSUNG HD204UI: 29°C
2013.12.23.19:31:47 /dev/sdb: SAMSUNG HD204UI: 30°C
2013.12.23.19:31:46 /dev/sdc: ST2000DL004 HD204UI:  30°C or °F
2013.12.23.19:31:47 /dev/sdd: SAMSUNG HD204UI: 30°C
2013.12.23.19:31:47 /dev/sde: SAMSUNG HD204UI: 29°C
2013.12.23.19:31:47 /dev/sdf: SAMSUNG HD204UI: 28°C
2013.12.23.19:31:46 /dev/sdg: ST4000DM000-1F2168:  31°C or °F
2013.12.23.19:31:46 /dev/sdh: ST4000DM000-1F2168:  32°C or °F
2013.12.23.19:31:46 /dev/sdi: ST4000DM000-1F2168:  32°C or °F
2013.12.23.19:31:46 /dev/sdj: ST4000DM000-1F2168:  30°C or °F
2013.12.23.19:31:47 /dev/sdk: ST4000DM000-1F2168:  31°C or °F
2013.12.23.19:31:47 /dev/sdl: ST4000DM000-1F2168:  31°C or °F
2013.12.23.19:31:46 /dev/sdm: ST4000DM000-1F2168:  31°C or °F
2013.12.23.19:31:46 /dev/sdn: ST4000DM000-1F2168:  30°C or °F
2013.12.23.19:31:46 /dev/sdo: ST4000DM000-1F2168:  30°C or °F
2013.12.23.19:31:46 /dev/sdp: ST4000DM000-1F2168:  29°C or °F

Note that the noise level I described is at the fan speeds noted above. If you let the fans spin up much higher they will very much be audible through a wall or door.

 

With the Supermicro board I have, it's also possible to lock the PWM value of every fan to any level in software, though. That works in Linux, and I assume Windows would let you do it too; I just haven't felt the need to.
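For the curious, a minimal sketch of what that looks like through the kernel's hwmon interface — the hwmon index and PWM value below are examples, not settings from this machine, and exact paths can vary a bit between kernels:

# find the Super I/O chip (the nct6776 from the sensors output above)
grep -l nct6776 /sys/class/hwmon/hwmon*/name
# switch fan 1 to manual control and pin it at roughly 60% duty cycle (scale is 0-255)
echo 1   > /sys/class/hwmon/hwmon2/pwm1_enable
echo 150 > /sys/class/hwmon/hwmon2/pwm1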


@madcow

 

Love all that output from sensors. On my motherboard, it doesn't detect the sensors right :/

 

EDIT: is that even sensors?

Yeah, there's nothing like a nice wall of text for a stats whore, I'm totally addicted to that stuff as well. On my MSI Z77 board, I only get 38 lines of output, whereas on my Supermicro I need to scroll a few screens to view it all. :D

If it's not sensors, it looks at least very similar.

BUILD LOGS: HELIOS - Latest Update: 2015-SEP-06 ::: ZEUS - BOTW 2013-JUN-28 ::: APOLLO - Complete: 2014-MAY-10
OTHER STUFF: Cable Lacing Tutorial ::: What Is ZFS? ::: mincss Primer ::: LSI RAID Card Flashing Tutorial
FORUM INFO: Community Standards ::: The Moderating Team ::: 10TB+ Storage Showoff Topic


I'm using all stock fans and letting the motherboard control fan speed, and it's barely audible in a silent room... through a wall :P

 

 

Thanks for the output, very useful. Also, one more question: since you mentioned ZFS and Linux, I assume you're using ZFS on Linux. We tried that in our semi-production environment once and it was super buggy (issue filed by us: https://github.com/zfsonlinux/zfs/issues/1638). Have you experienced any hangs / kernel crashes / other problems?

 

I will probably go with OpenIndiana + ZFS; I don't trust zfsonlinux anymore :/


Thanks for the output, very useful. Also, one more question: since you mentioned ZFS and Linux, I assume you're using ZFS on Linux. We tried that in our semi-production environment once and it was super buggy (issue filed by us: https://github.com/zfsonlinux/zfs/issues/1638). Have you experienced any hangs / kernel crashes / other problems?

 

I will probably go with OpenIndiana + ZFS; I don't trust zfsonlinux anymore :/

mdadm + XFS is your friend :)


@madcow

 

Love all that output from sensors. On my motherboard, it doesn't detect the sensors right :/

 

EDIT: is that even sensors?

 

 

Yeah, there's nothing like a nice wall of text for a stats whore, I'm totally addicted to that stuff as well. On my MSI Z77 board, I only get 38 lines of output, whereas on my Supermicro I need to scroll a few screens to view it all. :D

If it's not sensors, it looks at least very similar.

 

Yes, it is sensors. The drive temp output is from a custom script.
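Something along these lines would give similar output — a rough equivalent for anyone curious, not the actual script (run as root; some drives report Airflow_Temperature_Cel instead of Temperature_Celsius):

#!/bin/bash
# print a timestamped temperature line for every /dev/sd? drive
for dev in /dev/sd?; do
    model=$(smartctl -i "$dev" | awk -F': +' '/Device Model/ {print $2}')
    temp=$(smartctl -A "$dev" | awk '/Temperature_Celsius/ {print $10}')
    echo "$(date +%Y.%m.%d.%H:%M:%S) $dev: $model: ${temp}°C"
done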

 

Thanks for the output, very useful. Also, one more question: since you mentioned ZFS and Linux, I assume you're using ZFS on Linux. We tried that in our semi-production environment once and it was super buggy (issue filed by us: https://github.com/zfsonlinux/zfs/issues/1638). Have you experienced any hangs / kernel crashes / other problems?

 

I will probably go with OpenIndiana + ZFS; I don't trust zfsonlinux anymore :/

Yes, I'm using ZFS on Linux. I monitor the logs occasionally but have not seen any storage-related issues. That's not to say there won't be issues if I put it under higher load; it just isn't working that hard in a home environment.

 

I'm using version 0.6.2 on kernel 3.10.
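In case it helps anyone, the sort of quick checks I mean are nothing fancy — roughly along these lines (illustrative, not pasted from this box):

modinfo zfs | grep '^version'   # confirm the ZFS on Linux module version
zpool status -x                 # quick health check; prints "all pools are healthy" when nothing is wrong
dmesg | grep -iE 'zfs|spl'      # scan kernel messages for ZFS/SPL complaints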


....

I'll be using Solaris 11 instead of CentOS (I was originally using mint).

...

 

How is Solaris?  I thought of it a few years back for using ZFS instead of FreeNAS... but ended up with hardware RAID instead.

 

 

Here's my update, I tried to bold the changes

 

Hardware

 

HDDs:  2x WD Green 4TB

            5x Seagate 4TB

            14x WD Green 3TB (1 added)

            1x WD Red 3TB

            6x WD Green 2TB

            1x Seagate 2TB

            1x Samsung 2TB

            1x 500GB single-platter Seagate (Scratch)

 

Damn.  How does it sound when they're all thrashing?  :D

 

 

5400RPM DRIVES!

 

I just went through a set of WD SE drives (7K+ RPM) to replace my Greens, but returned them for Reds.  The performance was there for indexing, searching, and copying lots of small files like music and pictures, but that's only needed maybe 1% of the time... and it came with 20% extra idle heat, plus noticeably more noise and vibration.  And that was just from testing 3 of the 6 drives.   Actually, I would have kept the WD SE drives, but getting a bad batch was a clear sign from above  :P .

 

 

I'd probably go with the Greens.

 

You gotta get REDS!.. they come with a cool "RED" sticker too!  :lol:

 

post-7162-0-50835600-1388080200.jpg

My Rigs (past and present)


Here's my update, I tried to bold the changes

Hardware

CASE: Lian Li PC-A70F with Lian Li EX-23NB, EX-34NB, Sans Digital HDDRACK5, Mediasonic 4-bay Probox, and 8-bay Probox

PSU: Antec New Truepower 750w (Seasonic rebrand)

MB: Asus P8P67 Pro

CPU: Intel i7-2600K at 4.43GHz

HS: Noctua NH-C12P SE14

RAM: 16GB DDR3 1600

HBA: Intel SASUC8i, some cheap 2x sata pcie x1 card, Mediasonic 2x esata card, Supermicro SAS2LP-MV8

SSD: OCZ Vertex 3 120GB

HDDs: 2x WD Green 4TB

5x Seagate 4TB

14x WD Green 3TB (1 added)

1x WD Red 3TB

6x WD Green 2TB

1x Seagate 2TB

1x Samsung 2TB

1x 500GB single-platter Seagate (Scratch)

Software and Configuration:

Windows 7 - all JBOD. I use junctions to merge the drives; works perfectly for me.

Usage:

I use the storage for movies and series; I have media players around the house that access it. I also have a 150/75 home line, so I can stream to any remote location or to my mobile devices.

Backup:

What is this "backup" you speak of?

Additional info:

Most 3TB drives were bought for $110 or less and most 4TB drives were bought for $130 or less.

Photos:

8luMswb.jpg

Pgy3Cj7.png

That's a lot of space for porn.


1388089685991-drives.PNG

 

Damn, my server doesn't even get close! That's just 8 TB (2 TB of data is in RAID 1, so only 6 TB usable). Maybe I'll upgrade my whole server sometime, buy a rack just for HDDs, and put 4 TB drives in there. I really need to get rid of those external ones and get all the data organized. Unfortunately I need to get some money first :D (it's not easy being 16!)


How is Solaris?  I thought of it a few years back for using ZFS instead of FreeNAS... but ended up with hardware RAID instead.

I don't know yet; I'm waiting for some random odds and ends to ship before I can have a functional server again. I should have her up and running by next week though, and then I can install and play with Solaris. I'm just annoyed with ZFSonLinux not being up to snuff.

Workstation: 3930k @ 4.3GHz under an H100 - 4x8GB ram - infiniband HCA  - xonar essence stx - gtx 680 - sabretooth x79 - corsair C70 Server: i7 3770k (don't ask) - lsi-9260-4i used as an HBA - 6x3TB WD red (raidz2) - crucia m4's (60gb (ZIL, L2ARC), 120gb (OS)) - 4X8GB ram - infiniband HCA - define mini  Goodies: Røde podcaster w/ boom & shock mount - 3x1080p ips panels (NEC monitors for life) - k90 - g9x - sp2500's - HD598's - kvm switch

ZFS tutorial


5400RPM DRIVES!

 

Greens are more than adequate for general file storage. ;) Especially once you get into larger RAID arrays, they can only go as fast as the connection they're running over anyway ^_^

                    Fractal Design Arc Midi R2 | Intel Core i7 4790k | Gigabyte GA-Z97X-Gaming GT                              Notebook: Dell XPS 13

                 16GB Kingston HyperX Fury | 2x Asus GeForce GTX 680 OC SLI | Corsair H60 2013

           Seasonic Platinum 1050W | 2x Samsung 840 EVO 250GB RAID 0 | WD 1TB & 2TB Green                                 dat 1080p-ness


So I got some new hardware...

post-331-0-07715400-1388901885_thumb.jpg

Forgive the shitty phone quality pic

 

The chassis is a Supermicro CSE-822T-400LPB. It's a 2U rackmount case and it's got a 400W PSU. I've got the same 6x 3TB WD Reds stuffed in there, along with 2x 120GB SSDs mirrored using mdadm for boot and a 240GB Intel 530 as the ZIL/L2ARC.

 

Here's what the zpool looks like now:

[root@######### ~]# zpool iostat -v
                                                   capacity     operations    bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
tank                                            3.19M  16.2T      0      0     30      4
  raidz2                                        3.19M  16.2T      0      0      3      3
    sdc                                             -      -      0      0     24      5
    sdb                                             -      -      0      0     24      6
    sdf                                             -      -      0      0     34      5
    sdg                                             -      -      0      0     24      6
    sdh                                             -      -      0      0     24      5
    sdi                                             -      -      0      0     26      6
logs                                                -      -      -      -      -      -
  scsi-3600605b0052bc3b01a5a6b2c099f2d74-part1      0  1016M      0      0     26      0
cache                                               -      -      -      -      -      -
  scsi-3600605b0052bc3b01a5a6b2c099f2d74-part2    41K   194G      0      0      2      1
----------------------------------------------  -----  -----  -----  -----  -----  -----

Two of the array drives, along with the 240GB Intel 530, are running off an LSI 9260-4i (which I REALLY need to replace with an HBA that allows JBOD mode).

 

The ZIL partition is 1GiB and the L2ARC partition is 195GiB. Note that I left a hell of a lot of NAND free on that drive so that if either partition fills up during heavy load, the drive will still be snappy and responsive and overall array performance won't suffer.
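(For reference, attaching a partitioned SSD like that is just two commands — the by-id paths below are placeholders, not the actual partitions from the iostat output above:)

zpool add tank log   /dev/disk/by-id/SSD_SERIAL-part1    # 1 GiB ZIL / SLOG
zpool add tank cache /dev/disk/by-id/SSD_SERIAL-part2    # 195 GiB L2ARC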

 

Also, the server is now running a minimal install of CentOS 6.5 instead of Mint 15 (which I was running before).

 

I had considered running Solaris 11 for its native ZFS, but after doing some reading I've come to realize that ZoL is actually pretty damn stable at this point. This does mean that my boot mdadm array isn't the safest thing in the world, but I can easily back that up, and I don't have any data on it anyway, just programs. (Fun fact: the folks at Lawrence Livermore National Laboratory use Lustre with a ZFS on Linux backend in production. Read here: http://computation.llnl.gov/research/project-highlights/zfs-improves-lustre-efficiency).

 

This was during setup:

post-331-0-03965500-1388903943_thumb.jpg

Workstation: 3930k @ 4.3GHz under an H100 - 4x8GB ram - infiniband HCA  - xonar essence stx - gtx 680 - sabretooth x79 - corsair C70 Server: i7 3770k (don't ask) - lsi-9260-4i used as an HBA - 6x3TB WD red (raidz2) - crucia m4's (60gb (ZIL, L2ARC), 120gb (OS)) - 4X8GB ram - infiniband HCA - define mini  Goodies: Røde podcaster w/ boom & shock mount - 3x1080p ips panels (NEC monitors for life) - k90 - g9x - sp2500's - HD598's - kvm switch

ZFS tutorial


I had considered running Solaris 11 for its native ZFS, but after doing some reading I've come to realize that ZoL is actually pretty damn stable at this point. This does mean that my boot mdadm array isn't the safest thing in the world, but I can easily back that up, and I don't have any data on it anyway, just programs. (Fun fact: the folks at Lawrence Livermore National Laboratory use Lustre with a ZFS on Linux backend in production. Read here: http://computation.llnl.gov/research/project-highlights/zfs-improves-lustre-efficiency).

Haha, thanks for the Lustre link, that's actually pretty cool. :D

And yeah, I haven't had any issues with ZoL, and now that the open-source side of ZFS is under the OpenZFS umbrella project I'm quite hopeful that things will go in a good direction. :)

BUILD LOGS: HELIOS - Latest Update: 2015-SEP-06 ::: ZEUS - BOTW 2013-JUN-28 ::: APOLLO - Complete: 2014-MAY-10
OTHER STUFF: Cable Lacing Tutorial ::: What Is ZFS? ::: mincss Primer ::: LSI RAID Card Flashing Tutorial
FORUM INFO: Community Standards ::: The Moderating Team ::: 10TB+ Storage Showoff Topic


Haha, thanks for the Lustre link, that's actually pretty cool. :D

And yeah, I haven't had any issues with ZoL, and now that the open-source side of ZFS is under the OpenZFS umbrella project I'm quite hopeful that things will go in a good direction. :)

Maybe I should dabble in the ZoL pool (see what I did there? :D ) again... I tried it once a couple years ago, but it was unstable as all hell.


Maybe I should dabble in the ZoL pool (see what I did there? :D ) again... I tried it once a couple years ago, but it was unstable as all hell.

Hehe, I saw what you did there, yes. How very clever of you (one must seize the opportunity to make puns, after all). :D

That actually makes sense (the unstable thing). ZoL was not declared 'production ready' by the project folks until last spring (see here). I only decided to use it in ZEUS after coming across that announcement; before that I was planning on XFS (did quite a bit of research on it, and it has succeeded ext4 as my default non-ZFS filesystem in my machines).

But considering Livermore is one of the groups behind this and they're actually using it in their farm(s?), I'm pretty confident that it's OK to use nowadays. No software is ever free of bugs, obviously, but that goes for Oracle's ZFS as well in the end.

BUILD LOGS: HELIOS - Latest Update: 2015-SEP-06 ::: ZEUS - BOTW 2013-JUN-28 ::: APOLLO - Complete: 2014-MAY-10
OTHER STUFF: Cable Lacing Tutorial ::: What Is ZFS? ::: mincss Primer ::: LSI RAID Card Flashing Tutorial
FORUM INFO: Community Standards ::: The Moderating Team ::: 10TB+ Storage Showoff Topic


I only decided to use it in ZEUS after coming across that announcement; before that I was planning on XFS (did quite a bit of research on it, and it has succeeded ext4 as my default non-ZFS filesystem in my machines).

Just checked out XFS... Might be something for when I make the switch to 10 Gb/s LAN in my rainforrest (I so hope you get this one). I especially love the fact that it is designed primarily with speed in mind.

 

Nothing is ever free of bugs, except for 100-line programs that have stood the test of time. It all doesn't matter anyway, as long as you have a marketing department that's good enough to pass the bugs off as features ;)


Just checked out XFS... Might be something for when I make the switch to 10 Gb/s LAN in my rainforrest (I so hope you get this one).

Hm, I can't come up with an unambiguous interpretation of that one, sorry. :(

 

I especially love the fact that it is designed primarily with speed in mind.

This is a talk I watched by one of its devs (yep, I watched the whole thing), quite interesting if you're into that sort of thing: ;)

 

Nothing is ever free of bugs, except for 100-line programs that have stood the test of time. It all doesn't matter anyway, as long as you have a marketing department that's good enough to pass the bugs off as features ;)

Haha, true dat. Marketing FTW! :D

BUILD LOGS: HELIOS - Latest Update: 2015-SEP-06 ::: ZEUS - BOTW 2013-JUN-28 ::: APOLLO - Complete: 2014-MAY-10
OTHER STUFF: Cable Lacing Tutorial ::: What Is ZFS? ::: mincss Primer ::: LSI RAID Card Flashing Tutorial
FORUM INFO: Community Standards ::: The Moderating Team ::: 10TB+ Storage Showoff Topic


Hm, I can't come up with an unambiguous interpretation of that one, sorry. :(

This will explain it for you:

 

 

 

This is a talk I watched by one of its devs (yep, I watched the whole thing), quite interesting if you're into that sort of thing: ;)

Will do, probably tonight :)


This will explain it for you:

Ah, that explains it, haven't yet watched that show. Started once, but didn't finish. Just another thing on my TODO list. :D

BUILD LOGS: HELIOS - Latest Update: 2015-SEP-06 ::: ZEUS - BOTW 2013-JUN-28 ::: APOLLO - Complete: 2014-MAY-10
OTHER STUFF: Cable Lacing Tutorial ::: What Is ZFS? ::: mincss Primer ::: LSI RAID Card Flashing Tutorial
FORUM INFO: Community Standards ::: The Moderating Team ::: 10TB+ Storage Showoff Topic

