
Tophat

Where's the HTML output? Too many random numbers :D

 

Oh wait, that's just the table view of it, no need :(


I roll with sigs off so I have no idea what you're advertising.

 

This is NOT the signature you are looking for.


/edit: nvm, 500 MB/s, looks good for an initial test with an SSD xD Cached re-reads, I guess?

My builds:


'Baldur' - Data Server - Build Log


'Hlin' - UTM Gateway Server - Build Log


/edit: nvm, 500 MB/s, looks good for an initial test with an SSD xD Cached re-reads, I guess?

Maybe. I don't know what to expect, since this is the first time I've ever worked with an SSD in ZFS. It's a small SSD, so I can't really expect the performance to be earth-shattering.

I do not feel obliged to believe that the same God who has endowed us with sense, reason and intellect has intended us to forgo their use, and by some other means to give us knowledge which we can attain by them. - Galileo Galilei
Build Logs: Tophat (in progress), DNAF | Useful Links: How To: Choosing Your Storage Devices and Configuration, Case Study: RAID Tolerance to Failure, Reducing Single Points of Failure in Redundant Storage, Why Choose an SSD?, ZFS From A to Z (Eric1024), Advanced RAID: Survival Rates, Flashing LSI RAID Cards (alpenwasser), SAN and Storage Networking


So list out your IOzone running parameters; let's start an LTT IOzone Server Benchmark table/listing/bragging/show-off/etc.



So list out your IOzone running parameters; let's start an LTT IOzone Server Benchmark table/listing/bragging/show-off/etc.

They're in the document, towards the top.

 

'iozone -a >> filename.txt'
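
For the benchmark-thread idea, it might be worth pinning the run down more explicitly than a bare -a so results stay comparable. A sketch using standard IOzone flags; the sizes and output filenames here are only examples, not what was actually run above:

# -a               full automatic mode (as above)
# -n 64k / -g 4g   minimum and maximum file sizes for auto mode, so it finishes in sane time
# -i 0 -i 1 -i 2   limit the runs to write/rewrite, read/reread, and random read/write
# -R -b out.xls    also emit an Excel-compatible report, handy for building a comparison table
iozone -a -n 64k -g 4g -i 0 -i 1 -i 2 -R -b results.xls | tee results.txt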

 

I think the small SSD size (and consequently the small iSCSI target/CIFS share) might be the downfall of this system as it is. With small datasets I get good performance over iSCSI, but with larger datasets performance tends to suffer. Once I get actual drives, I'll do a comprehensive post of iSCSI and CIFS numbers.



They're in the document, towards the top.

 

'iozone -a >> filename.txt'

 

I think the small SSD size (and consequently the small iSCSI target/CIFS share) might be the downfall of this system as it is. With small datasets I get good performance over iSCSI, but with larger datasets performance tends to suffer. Once I get actual drives, I'll do a comprehensive post of iSCSI and CIFS numbers.

 

OK, I usually specify a slew of parameters (hence my hatred of IOzone, because I forget to jot them down). I'm running it now on a few different filesystems and servers; I NFS-mount the fileservers, so that will factor in. I'd better make sure the users are sleeping when I run those (my cheat method of getting better numbers on NFS).

 

So how do we get the official LTT IOzone show-off thread started?

 

We need a way to consolidate all those tests/numbers so they can be sorted and compared more easily.




  • 1 month later...

Chipset Cooling, Mounting Holes, and Final Case Selection

 

I did promise last time that the next entry would be about the case and the power supply extensions. The case isn't in yet, but it will come up.

 

One of the problems I've encountered (and found elsewhere on the internet) is the heat produced by the 5500 series chipset. To put the problem in perspective, the Tylersburg 5500 chipset has a TDP of 27 watts. That's insane, especially compared to the Z97 chipset (4 watts), or even a last-gen server chipset (C602, 8 watts). That's a lot of heat, especially considering the size of the heatsink (see previous pictures).

 

Leaving it without active cooling is a risk I'm not willing to take; I don't want any crashes. I couldn't find any stock active coolers for the chipset, or ones that would mount securely.

 

 

I ended up going with a 60mm Noctua A-series fan (linked here). Given Noctua's reputation, I was hoping even their 60mm fan would be silent.

 

I ordered it off Amazon, and it came in today.

 

post-653-0-98142200-1402449759_thumb.jpg

 

The package for the fan:

 

post-653-0-17537600-1402449880_thumb.jpg

 

It came with all the usual accessories: an extension cable, a Molex-to-3-pin adapter, and two low-noise adapters.

 

post-653-0-77941600-1402449882_thumb.jpg

 

Here is the server, with the chipset cooling solution. I'll have to either superglue it (most likely) or epoxy it down, but it works great. It's incredibly quiet, even at full speed, and keeps the chipset cooler than my desk fan did.

 

post-653-0-39218700-1402449901_thumb.jpg

 

Case Choice and Mounting Hole Compatibility

 

I really didn't want to buy a case, only to find out that the motherboard wouldn't work with it. I've seen images of SSI EEB motherboards in the Define XL R2 and Arc XL; they fit, but I wanted to know about mounting hole compatibility.

 

I looked at some cases that natively support SSI EEB, but the well-built ones are generally very expensive. I considered one case because it has awesome airflow through the PCI-E slots (which I will want if I start adding RAID and network cards), but its drive cage would have interfered with my CPU coolers. Nothing else looked good or fit my other criteria.

 

I ended up choosing the Define XL R2, which I was considering before. I haven't bought it yet, partly because I wanted to solve the chipset cooling problem first, but also because I wanted to ensure mounting hole compatibility. I did some math and diagramming with the dimensions listed here and here, and marked the holes for each standard in the image below.

 

post-653-0-92669900-1402450501_thumb.png

Here, blue holes represent standoffs for ATX only. Red holes are for SSI EEB only. Purple holes are for standoffs compatible with both standards. Circled holes are holes that the Z8NR-D12 uses.

 

This really helped me out: every mounting hole my motherboard uses will have a standoff under it, but I will lose some cable routing holes.

 

Drive Plan and Cable Management

 

The plan can be seen in the picture below:

 

post-653-0-84815600-1402451661_thumb.jpg

 

If I end up needing more than 8 drives, I will be going with either this or, more likely, this (swapping the cooling fan for a Noctua, obviously). That'll put me at 12-13 hard drives. I do plan on using SSDs in this appliance (or 2.5" hard drives as a last resort), so I will eventually be getting something like this to house said drives/SSDs.

 

Given the layout of the motherboard, I don't actually need the cable routing holes that I lose. The real question is whether the power supply cables will be able to reach the top of the motherboard. If not, then there will be another section dedicated to just the power supply extensions mentioned previously.

 

I will be picking up the PIKE 2008 soon, so I can actually use those extra SATA ports on my motherboard. It's based on the LSI 2008 chip, which is used in the 9211-8i, so I should be able to use @alpenwasser's tutorial to flash it to IT mode. Supposedly (?) I shouldn't have to; according to sotech, the straight 2008 model acts like an IBM M1015 in IT mode out of the box (link). If someone could confirm this, I'd be grateful.

 

In Other News...

 

I think I finally have FreeNAS stable; I can start and reboot over and over again without weird boot errors occurring. It is loaded on the 64 GB SSD, so I shouldn't have to worry about corruption of the OS: a thumb drive has no wear leveling, so the same cells get written over and over and eventually fail, leaving a drive that is mostly fine except for a patch of dead cells right where the OS lives. An SSD spreads writes evenly across the whole drive, so I don't have this problem. In addition, boot times seem to be a bit faster, which is nice.

 

Yes, I lose a SATA port, but I highly doubt I will actually end up needing enough drives for that to become a problem. If it does, I'll look at RAID controllers.

 

Thanks for reading, see you again soon.




Nice choice of case, I contemplated going with that one myself for APOLLO, but I couldn't figure out a satisfying way to place the number of drives I wanted in it.

Very curious to hear about your experiences with the PIKE card.

BUILD LOGS: HELIOS - Latest Update: 2015-SEP-06 ::: ZEUS - BOTW 2013-JUN-28 ::: APOLLO - Complete: 2014-MAY-10
OTHER STUFF: Cable Lacing Tutorial ::: What Is ZFS? ::: mincss Primer ::: LSI RAID Card Flashing Tutorial
FORUM INFO: Community Standards ::: The Moderating Team ::: 10TB+ Storage Showoff Topic


Damn, this is a great build, just love the way you work, thumbs up.

Hope to see more.


From the Desk to the Case: Building in the Define XL R2

 

To my surprise, the case came within two days, when it was supposed to take 4-7 days. Props to Newegg.

 

post-653-0-21815600-1402705178_thumb.jpg

 

My first thought was "Wow, this thing is huge". My current PC was bought from Staples, supports only Micro-ATX, and stands just 14" tall. This one is substantially larger and heavier (40 lbs).

 

post-653-0-54122700-1402705334_thumb.jpg

 

Here it is, finally off my desk where I worry about accidentally damaging it and where my girlfriend would call it an eyesore.

 

post-653-0-56947900-1402705371_thumb.jpg

 

Some thoughts:

 

• I installed the motherboard; the standoff diagram from the previous post was spot on. This case IS SSI EEB compatible, you just lose a few of the cable management holes.

• The 24-pin and the two EPS cables are too short to run behind the motherboard tray; the case is simply too large. I'll look into power supply extensions, because right now those cables cut straight through the airflow in the case. It should also look better.

• If/when I add NICs and RAID cards, I will definitely need to add a fan to keep them cool. I might go with a side-panel fan, or mount a fan on the hard drive cages.

• It's reasonably quiet with all the fans going. I have them at 7 volts right now.

• I have the SSD mounted in a hard drive bay, but I'd like to mount it with Velcro on the back of the motherboard tray.

• The case comes with a fan in the back, bottom, and front. I moved the bottom one to the front for better airflow.

 

Next is the purchase of hard drives, and the rig will be complete. I think I'm going to start simple, with a 2x3TB RAID 1 for backups and media storage. As I need capacity I will add more (I need a little more than 2TB now), and I will probably be adding an SSD as an iSCSI device extent for VM storage. The power supply extensions will be icing on the cake.

 

Speaking of VM storage, there WILL be another build titled "Monocle" coming in the future: a VM host that uses this machine as network storage. It will likely not be a dual-socket system, but it might be Xeon-based (ECC RAM isn't a high priority). And there might be a third build titled "Moustache", a dedicated Sophos UTM machine that you can thank @Ahnzh for.

 

I am really glad this is winding down; I want to start saving up for my first-ever gaming PC.




Great start! Soon enough you'll need a Rack :P



Great start! Soon enough you'll need a Rack :P

Not for a while; I don't have enough space in my apartment for a rack.

 

Definitely a more elegant solution than a bunch of PCs.



  • 2 weeks later...

Storage and Cable Management

 

While visiting my parents, I found an old 160 GB notebook hard drive. It was destined to be recycled, but I took it because I'm using it to store my jails. I also ordered two 3TB Seagate NAS drives on sale; they came in last week. I added them, and here they are:

 

post-653-0-97264700-1403490431_thumb.jpg

 

Where's the SSD, you might ask? It's on the back of the motherboard tray, held on by Scotch fasteners.

 

post-653-0-61428800-1403490453_thumb.jpg

 

I could actually store 10 SSDs behind the motherboard tray, as I can stack two drives behind the tray and they won't interfere with the side panel. The mounting spots I would use are outlined here.

 

post-653-0-66954500-1403492177_thumb.jpg

 

I don't know if Fractal sells aftermarket drive cages, but I can place another 4-drive cage on the bottom 140 mm fan mount. Since my next PC might be in a Define R4, maybe I can move one of the cages from that case to that spot.

 

This is my first time doing cable management; I think it turned out okay. As I add drives, it's going to get more complex. I might order some SilverStone CP11 cables to make management easier, plus a CP06-E4 SATA power splitter, because the SATA power cables that came with my PSU have an awkward connector orientation, so the cable has to start above the drive cage and run down. They also aren't very flexible, but the CP06-E4 is, so I'll have an easier time routing power neatly.

 

I may also need the CP11s because, once the PIKE card is installed here, managing those SATA cables is going to get more difficult.

 

post-653-0-39268000-1403495328_thumb.jpg

 

The front:

 

post-653-0-20144900-1403495171_thumb.jpg

 

The back:

 

post-653-0-55256100-1403495193_thumb.jpg

 

You can see the two EPS extensions and the ATX extension; they turned out to be really useful. I also put the chipset cooling fan on a low-noise adapter, so there's substantially less noise at idle.

 

I'll be putting up numbers for performance testing tomorrow, as well as the procedure I used to stress-test the system. Sleepy time for me.




I don't know if Fractal sells aftermarket drive cages, but I can place another 4-drive cage on the bottom 140 mm fan mount. Since my next PC might be in a Define R4, maybe I can move one of the cages from that case to that spot.

Yes, they do sell their drive cages as spare parts, or at least they did when I last checked. I'm not sure if they ship to the US, but it can't hurt to ask. (link)

Then again, cannibalizing parts is a tried and proven concept too. :D



Yes, they do sell their drive cages as spare parts, or at least they did when I last checked. I'm not sure if they ship to the US, but it can't hurt to ask. (link)

Doesn't look like they do; they only ship to EU/EEA members (and not all of them). However, I can use a Define R4 cage in the case without a problem; they had a very helpful link.



Doesn't look like they do; they only ship to EU/EEA members (and not all of them). However, I can use a Define R4 cage in the case without a problem; they had a very helpful link.

Yeah, the cages are interchangeable. There's a build in the storage showoff topic with an R4 that has an additional drive cage. I considered doing that myself at some point (putting an additional drive cage into ZEUS instead of building APOLLO), but in the end it would have required a rather substantial rebuild of that rig (and I still would "only" have had space for 13 HDDs max, by adding a five-drive cage), so I decided to make a completely new one.



Nice build, but can it play Minesweeper? I had to do it, sorry xD

Computer Specifications:

AMD Ryzen 5 3600 | Gigabyte B550M Aorus Elite | ADATA XPG SPECTRIX D50 32 GB 3600 MHz | Asus RTX 3060 KO Edition | CoolerMaster Silencio S400 | Klevv Cras C700 M.2 SSD 256GB | 1TB Crucial MX500 | 1 TB SanDisk SSD | Corsair RM650W

Camera Equipment:

Camera Bodies: 

Olympus Pen-F | Panasonic GH3 (Retired)

Lenses:

Sigma 30mm F1.4 | Sigma 16mm F1.4 | Sigma 19mm F2.8 | Laowa 17mm F1.8 | Olympus 45mm F1.8


Wow, it's been a while since I've seen a green PCB.

Specs

CPU: i5 4670k (I won the silicon lottery) | Cooler: Corsair H100i w/ 2x Corsair SP120 Quiet Editions | Mobo: ASUS Z97 SABERTOOTH MARK 1 | RAM: Corsair Platinums 16GB (4x4GB) | Storage: Samsung 840 EVO 256GB and random hard drives | GPU: EVGA ACX 2.0 GTX 980 | PSU: Corsair RM 850W | Case: Fractal Arc Midi R2, windowed

 


Wow, it's been a while since I've seen a green PCB.

I see them every day xD I see a brown one too.



Performance and Stress Testing

 

The most important numbers of all.

 

Performance

 

I tested both CIFS and iSCSI performance, with all data going to and from the RAID 1 of the 3TB NAS drives.

 

post-653-0-94235700-1403555050.jpg

 

post-653-0-48706100-1403555299.jpg

In both cases, this dataset benchmarks about the same as an SSD would over the network (and, in fact, early testing with the SSD that is now the boot drive supports this conclusion). Thanks to all that RAM, I won't need flash storage except for some very specific things.

 

In real-world file copies I get around 100-110 MB/s for large files. If I'm doing really large transfers (hundreds of GB) then the system will slow down to the 70-80 MB/s range.

For random performance, I had to create a RAM disk, because my main drive is an HDD and its limits are what you would have seen otherwise:

 

Worst:

post-653-0-43562600-1403556693.jpg

 

Best:

post-653-0-48266100-1403556698.jpg

 

In reality it's not all that impressive. I copied about 1 GB worth of unpacked eclipse images, and got this (median) transfer speed:

 

post-653-0-34840500-1403557627.jpg

 

To be fair, this is an awful dataset. Not very compressible, so it basically ends up performing like the HDDs themselves. Reading the same dataset back from the NAS to the RAM disk is much faster, with this being about the median speed:

 

post-653-0-85358800-1403557964.jpg

 

The more often you use data in a dataset, the better it will perform. If I used these eclipse images every day for hours, I bet this would perform significantly better. This is why I am considering using an iSCSI target for my games installation: reading games back will be much faster than from a local hard drive, and even faster if they are played often. It would also let me have a huge game library without dealing with internal hard drives (and I don't go to LANs).

 

Stress

 

I had never heard what this thing sounds like under full load inside an enclosure, nor seen what the temperatures would be like.

 

To stress test the CPU, we run this command from the root shell:

 

post-653-0-33742500-1403555538.jpg
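
The exact one-liner only exists in the screenshot above, so here is just a sketch of the general idea, assuming the usual shell busy-loop trick (the ':' no-op that comes up further down the thread); the count of 23 matches what is described, everything else is illustrative:

# Spawn 23 do-nothing busy loops in the background; ':' is the shell's no-op builtin.
for i in $(seq 23); do
    while :; do :; done &
done
jobs -p    # lists the PIDs of the background loops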

 

Make sure you write those PIDs down; after you're done stress testing, you'll want to kill the processes by running 'kill <PID>'.

 

After a few moments, the CPU will be 100 percent loaded. I didn't load all 24 cores because I wanted one left free so the system could still do its own maintenance and such.

 

To monitor CPU activity, you can run the 'top' command, and get something like this:

 

post-653-0-13183800-1403555599.jpg

 

After a few minutes, you can run this command, and get results as shown:

 

post-653-0-70816600-1403555607.jpg

 

This isn't a very good heat test, but it's the easiest one-liner to execute. A better way is to do lots of floating-point math, as @MG2R pointed out. I ended up writing a Python script that loops forever doing floating-point division, and ran 23 copies of it as separate processes (sketched below); the results follow.
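
The script itself never made it into the post, so the following is only a sketch of that kind of floating-point burner, not the actual script. It assumes a Bourne-style shell and that the python binary is on the PATH (FreeNAS ships with Python); the divisor is arbitrary.

# Launch 23 background workers; each one divides a float in a tight loop.
# The value eventually underflows to 0.0, but a floating-point divide still
# executes on every iteration, which is what keeps the FPU busy.
for i in $(seq 23); do
    python -c '
x = 1.0
while True:
    x = x / 1.0000001
' &
done
jobs -p    # note these PIDs so the workers can be killed afterwards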

 

post-653-0-99371000-1403573293.jpg

 

My coolers do get loud under full load, but that's to be expected; they're 60mm fans on server coolers. I'm glad the CPUs don't go above 75 °C, though; I may get some fan speed reducers to keep the noise down, since these chips are rated for a little over 80 °C Tcase, and in the 90s for processor temperatures. At idle the processors tend to stay between 35 and 45 °C, which is fine. Once I get three more 140mm fans I can open up the top and increase the ventilation.

 

Next will be some more miscellaneous stuff, and maybe the ordering of the PIKE card. I may also look at some more advanced NICs to try and determine the upper limit of Tophat's capabilities.




Haha, that stress testing thing is actually pretty neat; it took me a bit to figure out the BASH magic behind it. :D

Although if you substitute ':' for its more readable counterpart, it becomes a lot more understandable.



<all the stress tests>

 

 

You forgot the most important test:

061024_runs_notepad_fast.gif

 

In all seriousness though, just performing NOPs until the end of time won't really make your CPU run hot, as the FPU isn't stressed. You should try performing floating-point calculations; that should make your CPU run hotter.


You forgot the most important test:

061024_runs_notepad_fast.gif

 

In all seriousness though, just performing NOPs until the end of time won't really make your CPU run hot, as the FPU isn't stressed. You should try performing floating-point calculations; that should make your CPU run hotter.

Yeah, I was trying to find something I could use, since I've never used a BSD system before.

 

Bash doesn't do floating point math though.



Yeah, I was trying to find something I could use, since I've never used a BSD system before.

 

Will definitely do something floating point based. Any other recommendations?

I usually go with mprime/Prime95. I haven't found a program that reaches higher temps yet, and maybe more importantly it's also multi-platform.

There's also a program called cpuburn, but I've never tried it personally. According to google it's also available on BSD.


