
looney

Retired Staff
  • Posts

    1,884
  • Joined

  • Last visited

Reputation Activity

  1. Like
    looney got a reaction from albinorhino30 in LTT Storage Rankings   
    Hey guys, long time no see. I can't be bothered to fill in the template, way too much info; whoever made that template is truly a massive idiot.
     
    But here is a picture of what my home rack(s) look like now:
     


  2. Like
    looney got a reaction from dogwitch in Network layout showoff   
    I see your 6500 chassis and I raise you 3 more


     
    Okay okay, let's call it 2.5 more 😜
  3. Like
    looney got a reaction from pr1vatepiles in LTT Storage Rankings   
  4. Like
    looney got a reaction from Peter1918HUN in LTT Storage Rankings   
    LTT Storage Rankings
    Topic
    Welcome to the LTT Storage Rankings! The purpose of this topic is to show off cool storage builds, to inspire with what you show and be inspired by what you see. And of course, also talk about those builds, ask their owners questions and all that good stuff.
    Criteria
    • System capacity of 10 TB or more.
    • 5 storage drives or more: We don't count OS drives, cache drives, or external USB drives. External SAS enclosures connected via SFF-8088 count, however.
    • Drobos, Thunderbolt enclosures etc.: If they have a management interface which makes them somewhat autonomous, they count.
    • It does not matter whether your storage drives are SSDs or HDDs, as long as they are used for storage. Don't forget to indicate vendor and drive size for all your drives for our statistics (see also below).
    • For cases not covered here, we reserve the right to adjust the rules as needed to protect the spirit of the thread.
    • It has to be a single system (everything running off of a single motherboard).
    • Do not post your company server (except the ones from Linus Media Group ;)), this is for private systems.
    • Pictures of the hardware are required. That's the primary point of this thread, after all.
    • Use looney's post as a template. Write a nice post about your system, give us some details on the nitty gritty inner workings of your beast. Make sure to give all the needed info for the statistics (operating system, storage system, HDD vendors and sizes etc.)
    Ranking System
    The rankings are based on ranking points, which are calculated as follows:
    ranking_points = system_capacity ⨯ ln(system_drive_count), with ln(system_drive_count) being the natural logarithm of the number of drives in the system.
    Rationales
    Minimum Requirements
    Having both the 10 TB minimum and the 5-drive minimum allows us to avoid being spammed by systems with just one or two huge HDDs (10 terabyte HDDs are a reality now, after all), while still allowing systems which were put together with smaller drives (or SSDs) to get onto the list.
    Ranking Points
    Basically, this thread is about awesome storage systems, and we think that capacity isn't the only thing which determines how cool a storage machine is. Chances are that a system with a bit less capacity but quite a few more drives might be more interesting to look at. Therefore, the number of drives also counts.
    However, we don't want somebody buying a ton of small cheap drives to outrank somebody who's bought a hugely expensive system with fewer big drives, which is why towards the upper end of the scale, the number of drives starts to no longer matter as much (see examples below).
    Example for Large Systems
    • System 1: 100 TB capacity, 50 ⨯ 2 TB drives: 391.2 ranking points
    • System 2: 150 TB capacity, 30 ⨯ 5 TB drives: 510.2 ranking points
    Number of drives required for system 1 to surpass system 2, assuming the capacities of both systems and the drive count of system 2 stay the same: 165 drives, resulting in 510.6 ranking points. Capacity weighs much more heavily than drive count as it grows; the influence of the number of drives on the ranking points falls prey to the law of diminishing returns.
    Example for Smaller Systems
    • System 3: 14 TB capacity, 7 ⨯ 2 TB drives: 27.2 ranking points
    • System 4: 15 TB capacity, 5 ⨯ 3 TB drives: 24.1 ranking points
    Drive count has a higher weight for such systems; system 3 ranks ahead of system 4 despite having less total capacity.
    Identical Ranking Points
    For systems with identical ranking points, post date is the ranking factor (more specifically: post number). Since no two posts can have the same number, this is sufficient for unambiguous ranking.
    Script
    The rankings and plots are generated by a Python script, which can be found on alpenwasser's github here.
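    For illustration, here is a minimal sketch (not the actual script) of how the ranking-point formula can be computed; it also reproduces the example values above:

        from math import log

        def ranking_points(capacity_tb, drive_count):
            # ranking_points = system_capacity x ln(system_drive_count)
            return capacity_tb * log(drive_count)

        print(round(ranking_points(100, 50), 1))  # System 1: 391.2
        print(round(ranking_points(150, 30), 1))  # System 2: 510.2
        print(round(ranking_points(14, 7), 1))    # System 3: 27.2
        print(round(ranking_points(15, 5), 1))    # System 4: 24.1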
    Capacity Calculation
    We count raw storage as advertised on the drives. So, if you have 10 ⨯ 1 TB drives, that counts as 10 TB of storage.
    Smaller Systems
    Not everybody needs a big server, and even smaller systems can be cool and interesting (or noteworthy, as we put it below). For such systems we have a secondary list, so feel free to post your machine even if it doesn't quite meet the ranking criteria outlined above. Please still use the template post for your system, and stick to the rest of the criteria as applicable to your machine.
    External Sites
    This thread is inspired by the one on the [H]ard Forum, where you can find many huge systems as well. The Dutch thread on Gathering of Tweakers is also worth a look, although it is in Dutch, obviously. Serve the Home is also a great place for all things storage.
    Support/Bugs
    At the moment, @alpenwasser is the primary maintainer of the script and stats, so contact him about that kind of thing.
  5. Like
    looney got a reaction from Zengrath in How To: Format Usb Flash Drive Not Showing Up On Computer   
    How to: format a USB flash drive that is not showing up on the computer
     
    A member had a problem with one of his USB flash drives.
    Something went wrong when he formatted it with a formatting tool to turn the flash drive into a Linux live drive.
    As a result, the USB flash drive no longer showed up in the "Computer" folder. It did show up in Windows 7 Disk Management, but it was grayed out and could not be formatted from there.
    This tutorial fixes the problem by formatting the drive using diskpart.
     
    WARNING: You will lose all data on the USB flash drive!
     
    Step 1:
    type "diskpart" in the windows search bar and open the program, preferably by right-clicking and selecting "Run as administrator".
     

     
     
    Step 2:
    Once you have opened diskpart, type the following command: "list disk"
    This will list all the hard drives and USB flash drives connected to the computer.
     

     
     
    Step 3:
    Locate your USB flash drive by looking at the drive sizes in the list, and make sure you remember its disk number. In my case it's a 32 GB flash drive, so mine is the 29 GB drive listed as "Disk 8".
     
    WARNING: Don't continue to step 4 unless you are 100% sure that you have the right drive or you will clean the wrong drive!
     
    Step 4:
    Select the troubled USB flash drive by typing the command: "select disk (insert the drive number here without brackets)" 
    In my example it is Disk 8.
     

     
     
    Step 5:
    Now you must clean the drive by giving the "clean" command.
     

     
     
    Step 6:
    Next, create a partition with the command "create partition primary".
     

     
     
    Step 7:
    Now you need to select the partition you just created by giving the following command: "select partition 1"
     

     
     
    Step 8:
    Last step, you need to format the flash drive with the following command: "format fs=ntfs label=USB quick"
     

     
     
    Victory:
    You can close diskpart.
    The drive will now show up under "Computer", labeled "USB" and formatted with the NTFS file system.
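     
    For reference, the whole procedure can also be run unattended from a diskpart script file. This is just a sketch: it assumes the example's Disk 8, so change the disk number to match your own drive before running it.

        rem wipe-usb script - WARNING: wipes the selected disk, double-check the disk number!
        select disk 8
        clean
        create partition primary
        select partition 1
        format fs=ntfs label=USB quick

    Save it as something like wipe-usb.txt and run it from an administrator command prompt with "diskpart /s wipe-usb.txt".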
     

     
     
     
     
    Please like this post if it was helpful.
     
    looney
  6. Like
    looney got a reaction from Windows7ge in Here's How to Save $45,000   
    Synchronous writes are indeed something you get with databases; the database wants to know that the data has been saved to non-volatile storage before it sends its next bit of data.
     
    You can force ZFS to treat all writes as synchronous, but synchronous is always slower, as the sender has to know that the data is stored on a non-volatile medium.
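    As a concrete illustration of that knob (a minimal sketch; "tank/data" is a placeholder pool/dataset name), the behaviour is controlled per dataset with the sync property:

        zfs set sync=always tank/data     # treat every write as synchronous
        zfs get sync tank/data            # check the current setting
        zfs set sync=standard tank/data   # default: only honour explicit sync requests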
     
    With ZFS + SLOG, all synchronous data goes to both the vdevs (through RAM and cache) and the SLOG at the same time, in parallel.
    As the SLOG should be non-volatile, ZFS can tell, for example, the database that its data is stored securely and that it can send the next bit of data.
     
    At the same time the data might also still exist in a volatile state in RAM or HDD cache.
     
    When ZFS knows this data is no longer in RAM or cache but actually on the disk itself, it will remove the mirrored data from the SLOG.
     
    So the SLOG, in normal operation, is only written to and purged.
     
    It is only in case of a failure that it gets read from.
    So say ZFS saved the data to the SLOG but the copy that is meant to go to the drive is still in RAM or cache when the power cuts; then, after boot, ZFS can still ask the SLOG for that data.
     
    This is why you don't use consumer SSDs for SLOG 🙂
     
    For synchronous loads there can be an improvement: for small bursts the SLOG can function kind of like a buffer. But the moment you are writing more than one drive can handle (in a 1-vdev system) for more than, say, 5 seconds, you will still hit that drive as a bottleneck, even if the SLOG is just chilling; ZFS does not want to use it as a cache. If the vdev cache is full, it will stop new data coming in until it can write to both SLOG and vdev again.
     
    So a small burst can fill the cache and SLOG at the same time and then get written "slowly" to the drives. This would be slower without a SLOG, as it would have to commit to the drives straight away, but this only applies to, I think, one drive's worth of cache or something.
  7. Like
    looney got a reaction from Electronics Wizardy in Here's How to Save $45,000   
    Fun video, but the ZIL/SLOG part is flawed and might misinform people.

    First of all, 2 TB is insanely overkill: the ZIL gets flushed roughly every 5 seconds, so it can never have more than 5 seconds' worth of ingress data on it (I highly doubt you are getting 400 gigaBYTES per second of ingress 🙂).
    On top of that, it is only used for true synchronous writes, which are not common in most workloads.
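    As a rough back-of-the-envelope sketch (the 10 GbE figure is just an assumed example, and the ~5 second flush interval is the approximate default), the useful SLOG size is on the order of a few flush intervals of synchronous ingress:

        # rough SLOG sizing estimate, not a definitive rule
        ingress_gb_per_s = 1.25   # assumed: a saturated 10 GbE link, ~1.25 GB/s
        flush_interval_s = 5      # ZIL/txg flush roughly every 5 seconds
        headroom = 3              # keep a few intervals of margin

        slog_size_gb = ingress_gb_per_s * flush_interval_s * headroom
        print(f"~{slog_size_gb:.0f} GB is plenty")   # ~19 GB, nowhere near 2 TB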
     
    All the SLOG does is store the synchronous writes for a few seconds so it can immediately say with certainty to the sender "hey, I have received your file and I have stored it (non-volatile), please send me the next bit of data".
    At the same time the data is also being written to the actual storage in parallel; the SLOG is just there so that, in case of a power cut, the data that is still in the HDD cache is also in the SLOG.
     
    In normal operation the SLOG only gets written to. The only time it gets read from is after, for example, a power cut; at that point ZFS will call on the SLOG to recover the data that was still in cache/RAM and commit it to the actual non-volatile array.
     
    So without a SLOG, in case of synchronous writes, ZFS has to save the data directly to disk, bypassing cache, as it has to assure the sender that it has stored the data in a non-volatile state.
     
    With async data the sender will just go BRRRRRR and won't care whether the receiver has saved it volatile (HDD cache) or non-volatile (actually on disk).
    This is faster but also carries more risk, as the sender won't know whether it's stored volatile or non-volatile.

    And this is also where the second fault in the SLOG config lies: the NVMe drive used still has VOLATILE storage on it in the form of high-speed cache.
    So you could still lose/corrupt data in case of a power cut or something comparable, defeating the sole purpose of the SLOG.
    You are telling the sender that the data is safe while it might not be, defeating the entire point of sync data transfer.

    For a proper SLOG (which 99% of the time you don't need, as you won't be writing sync) you still need proper DC-grade SSDs; those have built-in supercaps to make sure that no data should ever be lost, no matter what.
    And yes, SSDs, plural: you would want two in RAID 1 just to be sure.

    SLOG is 100% an anti-data-loss thing. It will never give you a speed boost; the drives still have to handle the same ingress regardless of having a SLOG.

    The only way to speed up writes in ZFS is with more vdevs that can be written to in parallel.
    In your case the write IOPS will be limited to two drives, as it can only write to two vdevs at the same time.
  8. Informative
    looney got a reaction from Sauron in Here's How to Save $45,000   
  9. Informative
    looney got a reaction from Levent in Here's How to Save $45,000   
  10. Like
    looney got a reaction from Velcade in Here's How to Save $45,000   
  11. Informative
    looney got a reaction from Lurick in Here's How to Save $45,000   
  12. Informative
    looney got a reaction from WhitetailAni in Here's How to Save $45,000   
  13. Funny
    looney got a reaction from Sir Asvald in Network layout showoff   
  14. Funny
    looney got a reaction from Lurick in Network layout showoff   
  15. Informative
    looney got a reaction from gcs8 in LTT Storage Rankings   
  16. Like
    looney got a reaction from scottyseng in LTT Storage Rankings   
  17. Like
    looney got a reaction from Mitko_DSV in Show off your old and retro computer parts   
    I actually use this at LAN parties; when people are still setting up their rigs, I'm already playing Tetris.
    Plus it's fun to mess with the crew by asking them where the coax is so I can install my 10BASE5 tap/transceiver.
     

  18. Funny
    looney got a reaction from TehDwonz in Show off your old and retro computer parts   
  19. Funny
    looney got a reaction from Letgomyleghoe. in Show off your old and retro computer parts   
  20. Like
    looney got a reaction from BrianTheElectrician in Show off your old and retro computer parts   
  21. Funny
    looney got a reaction from Dabombinable in Show off your old and retro computer parts   
  22. Like
    looney got a reaction from brwainer in LTT Storage Rankings   
    It's an Arista DCS-7124SX-SSD-F; never really looked into ONIE / SONiC compatibility.
    I'm also looking to replace it in the near future; the availability of firmware updates is bugging me, even though it's one hell of a switch.

    Top switch is just a Cisco WS-C3750X-48P-S; needed something for IPMI and the access points / patch points throughout the house.
  23. Like
    looney got a reaction from ProjectBox153 in Show off your old and retro computer parts   
  24. Funny
    looney got a reaction from Kilrah in Show off your old and retro computer parts   
  25. Like
    looney got a reaction from sub68 in Show off your old and retro computer parts   