Everything posted by ThePurrringWalrus

  1. What is the hard tube bending kit LTT showed, the one with the board that handled bending on two different axes?
  2. So based on the responses, yes, it can work, but in the least efficient way possible. But as an LTT fan, I'm going to figure out the most over-the-top way to either succeed or make something blow up. Hopefully very dramatically. If only I had a host to drop everything before trying. Maybe I should use a 1200-watt standalone power supply and a cryogenic cooler... for reasons.
  3. I started using computers in 1979. Hard drives started off using sectors (a division of the physical platter space); clusters are a better way to handle large amounts of data by grouping multiple sectors together. As hard drive data density increased, the physical sectors became less important and clusters replaced the concept. Sorry... very old lingo. Remember, I saw Star Wars in theaters before it was Episode IV.
  4. Well, I think we got off point. The question was really: would it work? But while we are off point... I know Intel bakes a GPU into all their CPUs. It still uses the CPU's I/O and system memory. Most mobo manufacturers for mid-grade and above systems assume you will throw in a standalone GPU with its own VRAM. So, if I were a good engineer, and I knew I had a separate (if not great) processor on the CPU package, could I handle my onboard RAID processing on the GPU? We know NVIDIA has played with doing non-graphics processing on the GPU with its RTX AI audio work. As an engineer, would it really be that hard to offload smaller-demand processes like I/O to the unused GPU on the CPU package? It would remove load from the primary CPU cores, but really wouldn't degrade any system performance, since you are already doing I/O calls from the O/S through the CPU. (There is a rough sketch of the kind of bulk parity math I mean after this list.)
  5. Well, since I'm doing RAID in hardware, the O/S only sees the arrays as single drives. Even if the CPU is handling the I/O, it is still cutting out the O/S's dual I/O calls and the stored array configuration in RAM. I would be interested to see some engineering data on the UEFI processor and how much I/O offloading it does from the CPU. For a RAID controller we aren't talking a whole lot of compute cycles, but by cutting out the O/S management, we are freeing up memory blocks. No matter how much RAM you have, it is still divided into minimum-sized memory blocks. If the block is 1 gig and it is holding 15 kB, the block is used. Kinda like hard drive sectors. Back in "THE DAY", large capacity drives were cool, but the sector size was freaking huge. A 15 kB file took a block the same way a 1 MB file took a block. It was possible to max out a drive with only 25% of the data capacity used if all of the data blocks were used. It took quite a while for WD to figure that out. (There is a quick slack-space sketch after this list.)
  6. The cards are only designed to handle RAID 0 and RAID 1. I kinda think this is a trade-off made for form factor. They are designed to fit a 2.5" drive mount. They do have a 4-drive, 2-SATA-port, RAID 10 capable solution which fits a 3.5" drive mount, but in my system my only 3.5" mount is holding 2 SSHDs. The card is a SkyTech S322M225R, and RAID config is handled by hardware jumpers. Basically, it funnels 2 M.2 SATA SSDs into one SATA connector. So I take a data bandwidth hit here, but it is still faster than two spinning drives.
  7. I feel a need to ask for details. Is software really better, or just more convenient? Let's look at hardware for a second. In the late 1970s the BIOS standard was defined: hardware and machine code housed on an EPROM, storing the information the CPU and OS needed to address the system hardware. As technology advanced, the BIOS standard's limitations were hit, so in the late 1980s and early 1990s the standard was revised around larger EPROMs. It could store more data, but was functionally the same. The O/S would make a hardware call to the CPU, and the BIOS (operating like a dumb switch) would pass data to a specific hardware address, all controlled by the CPU, taking up processing cycles and RAM. In later versions of Windows, the BIOS addressing was mirrored in system memory to increase performance.
     Then in the 2000s, BIOS hit a hard wall. The new UEFI standard was developed, which was only a machine code standard, free of the BIOS hardware constraints. One of the new features was to offload I/O calls from the CPU to the UEFI controller (processor). Instead of the O/S making hardware calls to the CPU, it can call directly to the UEFI. The UEFI works more like a router: it addresses all of the system hardware, and instead of the CPU and O/S directing how the I/O happens, the UEFI takes the call and routes it to the appropriate hardware. So with UEFI the hardware can talk to hardware without the O/S interpreting. Effectively, with a LAN-enabled mobo, you can run a dumb terminal from UEFI with minimal RAM and no mass storage.
     Getting back to RAID. An onboard RAID controller has its own processor to handle I/O, and it is addressed by the UEFI directly. So when there is an I/O call to mass storage, the CPU offloads to the UEFI. In software RAID, the CPU handles the I/O. In a 2-drive array, the O/S and CPU send 2 I/O calls and have to do parity checks and 2 CRC calls to verify data integrity. The drive mapping is stored in RAM, and processor cycles are used to maintain the drive array structure. In hardware control, the standalone processor maintains the drive array structure and does not need the CPU or system memory to control I/O. So in hardware control, we cut out CPU processing cycles and RAM usage, and push I/O and data processing to special-function processors and controllers.
     So how is using software, which eats CPU processing and system memory but still has to go through the special-function processors and controllers, better? Is it "really" better, or is it just convenience? It is using more resources for the same result, versus more time and effort setting up a more efficient system. (There is a toy sketch of the software RAID 1 write path after this list.)
  8. The purpose of the question was to use hardware I already have. I'm not looking for a new hardware solution. I am planning an upgrade later in the year, but right now I'm just trying to work with what I have. Additionally, my current setup is maxed out on expansion slots, so I don't have any room for a new NIC anyway.
  9. Yes. I have 4 RAID 1 arrays: 1 on my PCIe bus for the OS and software, 2 arrays to dump projects I'm working on, and another array with SSHDs for mass storage.
  10. Well, I have 2 free SATA slots and 4 drives. The hardware adapters have their own RAID controllers, independent from the rest of the system, so in that respect there is no CPU load for them. I can literally plug them into a different system and each adapter reads as a single drive. The mobo also has a standalone RAID controller.
  11. There are 4 M.2 drives I already have, so it is easier to use what I have than to get something else. I've used the adapters and drives in RAID 1 (2 M.2 drives per array).
  12. Here is what I have going on. I have 4 860 EVO 1TB M.2 SSDs. I have 2 2.5" drive bay RAID adapters with onboard RAID 0 or RAID 1 capability for the two drives installed on each card. Each adapter card shows up as a single drive and uses only one SATA cable. My mobo has integrated RAID control. I'm thinking I should be able to run each adapter in RAID 0 for the onboard M.2 drives, then use RAID 1 on the mobo for redundancy. That way I can basically have a fast 2TB array for editing, while still having full redundancy in case one of the SSDs releases its factory-installed blue smoke. Does anyone see this not working? (There is a quick capacity and failure-mode sketch after this list.)
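
To go with post 4: the kind of work being talked about offloading is mostly bulk XOR math over stripes, the sort a parity RAID level involves. Below is a minimal sketch of that math in Python, using NumPy only as a stand-in for whatever compute API an integrated GPU would actually expose; the stripe size and function names are made up for illustration, and whether an iGPU could really be wired into the storage path this way is a separate, open question.

    # Sketch of the per-stripe XOR parity math a RAID engine does. NumPy stands in
    # for the kind of bulk, data-parallel work that could in principle be pushed to
    # an integrated GPU; nothing here touches real hardware.
    import numpy as np

    STRIPE = 64 * 1024  # 64 KiB stripe unit (illustrative value, not any controller's default)

    def parity_block(data_blocks):
        """XOR all data blocks in a stripe together to produce the parity block."""
        parity = np.zeros(STRIPE, dtype=np.uint8)
        for block in data_blocks:
            parity ^= block
        return parity

    def rebuild_missing(surviving_blocks, parity):
        """Recover a lost block: XOR the parity with every surviving data block."""
        missing = parity.copy()
        for block in surviving_blocks:
            missing ^= block
        return missing

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        stripe = [rng.integers(0, 256, STRIPE, dtype=np.uint8) for _ in range(3)]
        p = parity_block(stripe)
        # Pretend the second drive died and rebuild its block from the rest plus parity.
        rebuilt = rebuild_missing([stripe[0], stripe[2]], p)
        print("rebuilt block matches the original:", np.array_equal(rebuilt, stripe[1]))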
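
To go with post 5: a back-of-the-envelope sketch of allocation-unit slack, i.e. why a small file burns a whole cluster (or memory block) regardless of how little data it holds. The file and cluster sizes below are just examples.

    # Allocation-unit slack: each file occupies a whole number of clusters, so small
    # files waste most of a large cluster. Sizes are made-up examples.
    import math

    def allocated_bytes(file_size, cluster_size):
        """On-disk footprint: the file size rounded up to a whole number of clusters."""
        return math.ceil(file_size / cluster_size) * cluster_size

    if __name__ == "__main__":
        files = [15 * 1024, 300, 1024 * 1024]  # a 15 KiB file, a 300 B file, a 1 MiB file
        data = sum(files)
        for cluster in (4 * 1024, 64 * 1024, 1024 * 1024):  # 4 KiB, 64 KiB, 1 MiB clusters
            used = sum(allocated_bytes(size, cluster) for size in files)
            slack = 100 * (used - data) / used
            print(f"{cluster // 1024:5d} KiB clusters: {data} bytes of data occupy {used} bytes ({slack:.0f}% slack)")

The same rounding is why the huge-sector drives mentioned in the post could fill up long before their rated capacity was actually used.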
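
To go with post 7: a toy model of the software RAID 1 write path, showing the duplicated I/O submissions and CPU-side integrity checking being described. It is a simplification for illustration, not any real driver's code; with a hardware controller the host would submit a single write and the card's own processor would handle the mirroring.

    # Toy software RAID 1: the host fans every logical write out to each member and
    # does its own verification on the CPU. Classes and names are purely illustrative.
    import zlib

    class ToyDrive:
        def __init__(self):
            self.blocks = {}

        def write(self, lba, data):
            self.blocks[lba] = bytes(data)

        def read(self, lba):
            return self.blocks[lba]

    class SoftwareRaid1:
        """Host-side mirroring: every logical write becomes one write per member."""

        def __init__(self, members):
            self.members = members

        def write(self, lba, data):
            checksum = zlib.crc32(data)       # integrity check computed on the CPU
            for drive in self.members:        # one I/O submission per member drive
                drive.write(lba, data)
            for drive in self.members:        # read-back verify, also on the CPU
                assert zlib.crc32(drive.read(lba)) == checksum
            return checksum

    if __name__ == "__main__":
        array = SoftwareRaid1([ToyDrive(), ToyDrive()])
        array.write(0, b"some project file data")
        print("both copies identical:",
              array.members[0].read(0) == array.members[1].read(0))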
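
To go with post 12: a quick arithmetic check of the proposed layout (RAID 0 inside each adapter, RAID 1 across the adapters on the mobo, i.e. RAID 0+1) showing usable capacity and which drive failures the mirror survives. Drive and adapter names are placeholders.

    # Sanity check of a RAID 0+1 layout: two adapters each striping two 1 TB drives,
    # mirrored against each other by the motherboard controller.
    from itertools import combinations

    DRIVE_TB = 1
    adapters = {"adapter_A": ["m2_1", "m2_2"], "adapter_B": ["m2_3", "m2_4"]}

    # Each adapter's RAID 0 volume is the sum of its members; the mirror's usable
    # size is the smaller of the two striped volumes.
    stripe_size = {name: DRIVE_TB * len(drives) for name, drives in adapters.items()}
    usable = min(stripe_size.values())
    print(f"usable capacity: {usable} TB out of {sum(stripe_size.values())} TB raw")

    def survives(failed):
        """The mirror survives as long as at least one adapter has no failed member."""
        return any(all(d not in failed for d in drives) for drives in adapters.values())

    all_drives = [d for drives in adapters.values() for d in drives]
    for n in (1, 2):
        outcomes = [survives(set(combo)) for combo in combinations(all_drives, n)]
        print(f"{n} drive(s) failed: array survives {sum(outcomes)}/{len(outcomes)} combinations")

If the mobo controller accepts the two adapter volumes as ordinary SATA drives (they each show up as a single drive, per posts 6 and 10), the arithmetic works out to 2 TB usable, with any single SSD failure survivable; the losing case is one drive dying on each adapter at the same time, which is the usual trade-off of striping below the mirror.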