
Wander Away

Member
  • Posts

    492
  • Joined

  • Last visited

Reputation Activity

  1. Like
    Wander Away got a reaction from Haro in [Educational] What is SSE/AVX? (SIMD)   
    It's been a while since I did my writeup on memory hierarchy back in... 2017? Wow, time flies by fast. A lot has changed since then: I've gotten a Master's degree, gotten a job as an R&D compiler engineer, and I've learned a lot more about hardware and software in general. And I'm still bored, so here I am.
     
    Alright, onwards to the topic at hand. Some of us have probably heard of, or seen marketing on some CPUs, regarding features like AVX (Advanced Vector eXtensions), SSE (Streaming SIMD Extensions), MMX (Multi-Media eXtensions) and such. These can all be categorized as SIMD extensions.
    "Extension" in this context refers to extra instructions and features added to the CPU that are separate from the base instruction set (x86, ARM, etc). 
     
    SIMD: What is it?
    It stands for Single Instruction, Multiple Data, and the units inside processors doing this are sometimes referred to as vector processors/co-processors. 
     
    SIMD instructions operate on vector registers (see below for an explanation if you don't know what those are), which can hold multiple pieces of data at once. Think of a vector as just a one-dimensional array of data. For example, if the vector length is 128 bits, it can hold 4x single-precision float values (fp32), 2x double-precision float values (fp64), 16x int8s, etc. A SIMD instruction takes these vector registers and operates on all of the elements in a single instruction. (Single-value registers/instructions are referred to as scalar registers/instructions.)
     
    Vector register sizes range from 128 bits up to 2048 bits for ARM SVE (Scalable Vector Extension) (although nobody that I know of has been crazy enough to implement anything that big... yet); AVX2 uses 256-bit vector registers, and AVX-512 uses 512-bit ones. So, the more data you process at once, the faster a program can run!
     
    Registers:
    Did you know?
    HOWEVER
    Before you get too excited: SIMD is not used in many, many, many programs. Most of the programs we use daily do not utilize SIMD instructions, or do not gain a noticeable speed increase from them. 
     
    Why? Well, as always, it comes down to software.
    For one, in order to use SIMD instructions, there are often restrictions on the layout of the data in memory. Sometimes the work needed to get the data into a format the vector units can handle means you're better off using scalar operations anyway. Secondly, not all programs can benefit from vectorization. Sometimes you just need to do calculations on a single value, or your values need different operations (e.g. adding one, subtracting another). If you can't consistently fill up your vector registers, they're not worth the trouble. Finally, the programmer often needs to embed SIMD instructions directly into the program they're writing. This is similar to embedding assembly code (basically "human-readable" machine code) directly into C. It's also very hardware-specific: older processors may only support MMX or SSE, newer ones AVX2, and AVX-512 support has typically been limited to server parts.   
    So what are they used for?
    On the consumer side, SIMD is typically used for multimedia - software video encoding/decoding (hence the Multi-Media eXtensions). Also, in recent years - AI. Some neural networks, for example, are not large enough to warrant transferring data to the GPU to process (remember, getting data to and from the GPU takes time), so they are often handled by the CPU using SIMD instructions. 
     
    By the way:
    Also, SIMD is an essential part of HPC (high-performance computing, basically supercomputers) applications. For example, the processor making up the fastest supercomputer in the world (last I checked) was the A64FX, powering Japan's Fugaku supercomputer. These are ARM processors with SVE, implemented with a vector length of 512 bits. 
     
     
    So that's about it. That's all I can come up with regarding SIMD without going too much into the details... This lockdown is making me spend my time on things like this :v 
     
  2. Like
    Wander Away got a reaction from igormp in [Educational] What is SSE/AVX? (SIMD)   
  3. Informative
    Wander Away got a reaction from flametwist in [Educational] What is SSE/AVX? (SIMD)   
  4. Informative
    Wander Away got a reaction from Ash_Kechummm in [Educational] What is SSE/AVX? (SIMD)   
  5. Informative
    Wander Away got a reaction from The1Dickens in [Educational] What is SSE/AVX? (SIMD)   
  6. Informative
    Wander Away got a reaction from FakeNSA in [Educational] What is SSE/AVX? (SIMD)   
  7. Informative
    Wander Away got a reaction from WhitetailAni in [Educational] What is SSE/AVX? (SIMD)   
  8. Agree
    Wander Away got a reaction from TetraSky in Is phone support from tech retail sites actually important?   
    While you do have a point, there is another customer base for these companies - people who aren't interested in this stuff and get these companies recommended to them by friends/family. 
     
    I know plenty of people who want a decent computer but don't want to put in the effort of building it themselves, no matter how much I tell them how easy it is. In those cases I just refer them to something like maingear/ibuypower. 
     
    A lot of times I would pick them a build for their budget but everything else after that is up to them, and presumably that's where the support comes in. 
     
    I think these people might actually be more common than people like us, who would probably build our own computers anyway - but we all have some friends who know us as the tech nerd and come to us for advice and such.
  9. Agree
    Wander Away got a reaction from KnightSirius in Is phone support from tech retail sites actually important?   
  10. Like
    Wander Away got a reaction from Akila_KuKu in CORSAIR VS MSI budget gaming headset?   

    "Gaming" headsets are marked up quite a bit; I wouldn't go for them if I had a choice. While the integrated microphone can be convenient, it's nothing a bit of tape/velcro can't fix.
  11. Like
    Wander Away got a reaction from vanished in Gateway p173xl fx and my ssd problem   
    This doesn't sound right - if you can boot into Windows, then the BIOS has to see the drives. 
    One thing you might want to check is whether you can format those to a legacy MBR partition table, since GPT is tied to the newer UEFI standard. 
  12. Agree
    Wander Away reacted to Ca5h3w in Sad Panda is put down   
    I know to some this was just porn, just hentai... and if you see it that way I understand why, and I'm sorry you can't see how one of the last havens of the true wild-west internet is now gone. That's 50 terabytes of art that just burned - much of it original, and much that could only be found there. Yes, there was a lot of really fucked up shit that went with it, but god dammit there was a lot of really good shit there too. It was basically 4chan to a lot of us... the Mos Eisley cantina of the internet, where you will never find a more wretched hive of scum and villainy - but what did you find there? Huh... shit like Han fucking Solo stroking his Wookiee, and awesome shit. And just like the cantina they didn't let just anyone wander in - no droids allowed, no cute little R2-D2s and no buzzkill C-3POs - and yeah, sometimes somebody lost an arm. Shit happens; acknowledge it, then go back to your business playing your music and having your drinks. And if you never went there I both envy and pity you, for now you neither feel the loss nor ever get the chance to see it, because there were treasures to be found, but alas... I mean, my metaphor isn't perfect admittedly, it was pretty much just hentai... but it was an endless archive of hentai that had everything you ever wanted and then always more, and now it's gone... Goodnight, sweet prince, and flights of angels sing thee to thy rest. RIP and shitz...     
  13. Informative
    Wander Away got a reaction from Doobeedoo in Even Smaller LEDs for Displays   
    Source: IEEE Spectrum
    TL;DR: LEDs made out of Gallium Nitride Nanowires (Not quite as exciting as CARBON NANOTUBE TRANSISTORS) could be made smaller, brighter, faster switching, and more efficient than what's commercially available. Could be used for VR and such.
    Drawback: Expensive (aka. not yet commercially viable)
     
     
  14. Informative
    Wander Away got a reaction from PeterT in Even Smaller LEDs for Displays   
    Source: IEEE Spectrum
    TL;DR: LEDs made out of Gallium Nitride Nanowires (Not quite as exciting as CARBON NANOTUBE TRANSISTORS) could be made smaller, brighter, faster switching, and more efficient than what's commercially available. Could be used for VR and such.
    Drawback: Expensive (aka. not yet commercially viable)
     
     
  15. Informative
    Wander Away got a reaction from paddy-stone in Even Smaller LEDs for Displays   
    Source: IEEE Spectrum
    TL;DR: LEDs made out of Gallium Nitride Nanowires (Not quite as exciting as CARBON NANOTUBE TRANSISTORS) could be made smaller, brighter, faster switching, and more efficient than what's commercially available. Could be used for VR and such.
    Drawback: Expensive (aka. not yet commercially viable)
     
     
  16. Like
    Wander Away got a reaction from Mykie in [Lecture] Computer Memory Hierarchy   
    As a senior in university studying computer engineering, I like how Linus doesn't pretend to know all the technical details about computers, unlike some other channels (*cough* Jayz2CentsMakesMeCringeSometimes). 
    However, I thought it'd be interesting to give a lecture on what I've learned in my computer architecture class (and study for finals :D.....). 
     
    I chose memory hierarchy as the topic because in this video Linus did some real-world testing of the effect different RAM speeds had on computers. I had an issue with what Linus said between 0:57-1:36 not being technically correct (yes, I know I'm nitpicking, but hey, Linus does that all the time). So here goes. 
     
    If you can stand me going on an aside every other sentence that is. 
     
    NOTE: everything here will be a gross simplification of the actual architecture used by intel/amd/arm etc. 
     
    Background:
    Solution:
    Conclusion: 
     
  17. Like
    Wander Away got a reaction from MedievalMatt in [Lecture] Computer Memory Hierarchy   
    As a senior in university studying computer engineering, I like how Linus doesn't pretend to know all the technical details about computers, unlike some other channels (*cough* Jayz2CentsMakesMeCringeSometimes). 
    However, I thought it'd be interesting to give a lecture on what I've learned in my computer architecture class (and study for finals :D.....). 
     
    I chose memory hierarchy as the topic because in this video Linus did some real-world testing of the effect different RAM speeds had on computers. I had an issue with what Linus said between 0:57-1:36 not being technically correct (yes, I know I'm nitpicking, but hey, Linus does that all the time). So here goes. 
     
    If you can stand me going on an aside every other sentence that is. 
     
    NOTE: everything here will be a gross simplification of the actual architecture used by intel/amd/arm etc. 
     
    Background:
    Solution:
    Conclusion: 
     
  18. Like
    Wander Away got a reaction from Candysandwich99 in [Lecture] Computer Memory Hierarchy   
    As a senior in university studying computer engineering, I like how Linus doesn't pretend to know all the technical details about computers, unlike some other channels (*cough* Jayz2CentsMakesMeCringeSometimes). 
    However, I thought it'd be interesting to give a lecture on what I've learned in my computer architecture class (and study for finals :D.....). 
     
    I chose memory hierarchy as the topic because in this video Linus did some real-world testing of the effect different RAM speeds had on computers. I had an issue with what Linus said between 0:57-1:36 not being technically correct (yes, I know I'm nitpicking, but hey, Linus does that all the time). So here goes. 
     
    If you can stand me going on an aside every other sentence that is. 
     
    NOTE: everything here will be a gross simplification of the actual architecture used by intel/amd/arm etc. 
     
    Background:
    Solution:
    Conclusion: 
     
  19. Informative
    Wander Away got a reaction from BingoFishy in [Lecture] Computer Memory Hierarchy   
    No, if the CPU wants to access something that is in RAM, but not found in the multiple levels of cache, it is a "miss" and therefore will directly access the memory, while stalling the CPU. One thing with digital logic is that everything can be run in parallel, so while the CPU is trying to access L1 Cache, it is also trying to access L2, L3, and main memory at the same time. 
     
    In addition, the cost of implementing the prediction algorithm is that 1. you need more transistors to do it, and 2. when the prediction is wrong, there is an associated penalty. However, keep in mind that the "hit" rate and accuracy of the architecture make it so that the benefits of such an architecture far outweigh the penalties of incorrect predictions. 
     
    The branch prediction algorithms can range from very simple state machines to extremely complicated ones. I may do another writeup on it later. 
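    The "very simple state machine" end of that range can be sketched as the classic textbook 2-bit saturating counter. This is an illustrative sketch of the general technique, not any particular CPU's implementation; the class name and the example branch pattern are made up for the demo.

```python
# Minimal 2-bit saturating-counter branch predictor (textbook sketch).
# States 0-1 predict "not taken"; states 2-3 predict "taken".
class TwoBitPredictor:
    def __init__(self):
        self.state = 2  # start in "weakly taken"

    def predict(self):
        return self.state >= 2  # True means "predict taken"

    def update(self, taken):
        # Saturate: step toward 3 on taken branches, toward 0 otherwise.
        if taken:
            self.state = min(self.state + 1, 3)
        else:
            self.state = max(self.state - 1, 0)

# A typical loop branch: taken 9 times, then falls through once at loop exit.
p = TwoBitPredictor()
outcomes = [True] * 9 + [False]
hits = 0
for actual in outcomes:
    if p.predict() == actual:
        hits += 1
    p.update(actual)
# Only the final loop exit is mispredicted: 9 out of 10 correct.
```

    The 2-bit counter is why two states matter: a single mispredicted loop exit only nudges it to "weakly taken", so the predictor is still right when the loop runs again.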
  20. Informative
    Wander Away got a reaction from BingoFishy in [Lecture] Computer Memory Hierarchy   
    As a senior in university studying computer engineering, I like how Linus doesn't pretend to know all the technical details about computers, unlike some other channels (*cough* Jayz2CentsMakesMeCringeSometimes). 
    However, I thought it'd be interesting to give a lecture on what I've learned in my computer architecture class (and study for finals :D.....). 
     
    I chose memory hierarchy as the topic because in this video Linus did some real-world testing of the effect different RAM speeds had on computers. I had an issue with what Linus said between 0:57-1:36 not being technically correct (yes, I know I'm nitpicking, but hey, Linus does that all the time). So here goes. 
     
    If you can stand me going on an aside every other sentence that is. 
     
    NOTE: everything here will be a gross simplification of the actual architecture used by intel/amd/arm etc. 
     
    Background:
    Solution:
    Conclusion: 
     
  21. Like
    Wander Away got a reaction from MedievalMatt in [Lecture] Computer Memory Hierarchy   
    No, if the CPU wants to access something that is in RAM, but not found in the multiple levels of cache, it is a "miss" and therefore will directly access the memory, while stalling the CPU. One thing with digital logic is that everything can be run in parallel, so while the CPU is trying to access L1 Cache, it is also trying to access L2, L3, and main memory at the same time. 
     
    In addition, the cost of implementing the prediction algorithm is that 1. you need more transistors to do it, and 2. when the prediction is wrong, there is an associated penalty. However, keep in mind that the "hit" rate and accuracy of the architecture make it so that the benefits of such an architecture far outweigh the penalties of incorrect predictions. 
     
    The branch prediction algorithms can range from very simple state machines to extremely complicated ones. I may do another writeup on it later. 
  22. Agree
    Wander Away got a reaction from BingoFishy in Why do we need more than 30 FPS in virtual environments   
    The thing is, a movie can get away with 24 fps because it has a consistent frame time between each frame. That way our brain doesn't have to work as hard to "fill in the blanks", so to speak. 
     
    However, the nature of a game means that you cannot possibly control the frame time, as each frame has to be rendered in real time. For example, the average frame time at 30 fps would be 33 ms between each frame, but the actual time to render each frame can range from 5 to 50 milliseconds, which makes up the "stuttering" we observe. 
     
    With 60 fps, the average frame time is reduced to 17 ms, which makes the variance in frame time less noticeable, and with higher refresh rate panels it would be essentially unnoticeable. 
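    The frame-time arithmetic above can be sketched in a few lines. The 5-50 ms render range is the illustrative figure from the post, not a measurement:

```python
# Average frame time (in milliseconds) at a given frame rate.
def frame_time_ms(fps):
    return 1000.0 / fps

avg_30 = frame_time_ms(30)  # ~33.3 ms between frames at 30 fps
avg_60 = frame_time_ms(60)  # ~16.7 ms between frames at 60 fps

# At 30 fps, frames taking anywhere from 5 to 50 ms swing far around
# the ~33 ms average -- that frame-to-frame swing is the stutter.
worst_case_swing_ms = 50 - 5  # 45 ms of variation between frames
```

    The point of the comparison: the same render-time variation is a much smaller fraction of the frame budget at higher frame rates, which is why it becomes harder to notice.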
  23. Like
    Wander Away got a reaction from Curufinwe_wins in [Lecture] Computer Memory Hierarchy   
    As a senior in university studying computer engineering, I like how Linus doesn't pretend to know all the technical details about computers, unlike some other channels (*cough* Jayz2CentsMakesMeCringeSometimes). 
    However, I thought it'd be interesting to give a lecture on what I've learned in my computer architecture class (and study for finals :D.....). 
     
    I chose memory hierarchy as the topic because in this video Linus did some real-world testing of the effect different RAM speeds had on computers. I had an issue with what Linus said between 0:57-1:36 not being technically correct (yes, I know I'm nitpicking, but hey, Linus does that all the time). So here goes. 
     
    If you can stand me going on an aside every other sentence that is. 
     
    NOTE: everything here will be a gross simplification of the actual architecture used by intel/amd/arm etc. 
     
    Background:
    Solution:
    Conclusion: 
     
  24. Informative
    Wander Away got a reaction from ARikozuM in [Lecture] Computer Memory Hierarchy   
    No, if the CPU wants to access something that is in RAM, but not found in the multiple levels of cache, it is a "miss" and therefore will directly access the memory, while stalling the CPU. One thing with digital logic is that everything can be run in parallel, so while the CPU is trying to access L1 Cache, it is also trying to access L2, L3, and main memory at the same time. 
     
    In addition, the cost of implementing the prediction algorithm is that 1. you need more transistors to do it, and 2. when the prediction is wrong, there is an associated penalty. However, keep in mind that the "hit" rate and accuracy of the architecture make it so that the benefits of such an architecture far outweigh the penalties of incorrect predictions. 
     
    The branch prediction algorithms can range from very simple state machines to extremely complicated ones. I may do another writeup on it later. 
  25. Informative
    Wander Away got a reaction from ARikozuM in [Lecture] Computer Memory Hierarchy   
    As a senior in university studying computer engineering, I like how Linus doesn't pretend to know all the technical details about computers, unlike some other channels (*cough* Jayz2CentsMakesMeCringeSometimes). 
    However, I thought it'd be interesting to give a lecture on what I've learned in my computer architecture class (and study for finals :D.....). 
     
    I chose memory hierarchy as the topic because in this video Linus did some real-world testing of the effect different RAM speeds had on computers. I had an issue with what Linus said between 0:57-1:36 not being technically correct (yes, I know I'm nitpicking, but hey, Linus does that all the time). So here goes. 
     
    If you can stand me going on an aside every other sentence that is. 
     
    NOTE: everything here will be a gross simplification of the actual architecture used by intel/amd/arm etc. 
     
    Background:
    Solution:
    Conclusion: 
     