
Nystemy

Member
  • Content Count

    227
  • Joined

  • Last visited

Awards

This user doesn't have any awards

About Nystemy

  • Title
    Member
  • Birthday September 5

Profile Information

  • Location
    Sweden, just north of most of Europe.
  • Interests
    Tech but primarily electronics.

System

  • CPU
    Xeon E5-1650v3
  • Motherboard
    AsRock X99 WS-E
  • RAM
    Crucial 4x 16GB DDR4 2666 MHz (Might get 4 more if I run out of RAM again...)
  • GPU
    Gigabyte GTX 660 x 2 (not running SLI, I just have too many screens; thinking of upgrading, anyone got a pair of GTX 960s?)
  • Case
    Some old Cooler Master case; I have no clue how I even got it...
  • Storage
    The usual mix. 512 GB SSD + a handful of TBs on HDDs
  • PSU
    Corsair CX750M
  • Display(s)
    Samsung SyncMaster 2494HS + SyncMaster 223BW + Envision (19" 4:3 aspect ratio) + Cintiq 13HD for the arts....
  • Cooling
    Noctua NH-U9S (Larger wouldn't fit in the case...)
  • Keyboard
    Logitech G110 Nordic version
  • Mouse
    Logitech M500

Recent Profile Visitors

1,093 profile views

Recent Posts
  1. The intricacies of PCIe make for a lot of fun troubleshooting time that one wishes one never needed to have. Though, can't say that x86 and NVMe are making the situation any better on that front. I usually refer to software as a jungle, but sometimes hardware is its own. Though, at least a poorly cable-managed rack looks like a jungle, so one clearly knows what one is up against. But incompatible hardware, or just software issues, are far harder to gauge at first glance. Though, a lot of systems are not built with simplicity in mind, but rather a scatterbrained feature c
  2. The main reason I can think of for why they did this was likely so that all data centers wouldn't just run out buying cheap GTX cards for their VMs instead of Quadros. After all, there are a lot of professional applications that don't need any of the advantages of a Quadro, but the advantage of a VM and PCIe passthrough is fairly huge in itself, since moving a VM to new hardware is a lot easier than moving a bare-metal machine to new hardware. And considering the price difference between the two, I am not surprised that nVidia made the decision they did back then. Although, Quadro card
  3. I thought that this was going to be a look into APUs. Cloud gaming is, though, another method of getting around the issue of GPU shortages, to be fair, but APUs have their own advantages.
  4. A better question is whether they are printed in regular filament or an ESD "safe" one. ESD "safe" filaments are a tiny bit conductive, to slowly bleed off static charge before it accumulates over time. It tends to not be a major issue, to be fair, but electrostatic charge can build up in a somewhat dry environment where there is air moving about, or in areas where plastics are rubbing against each other, among other sources. And static charge can also "jump", i.e. slowly wander from one charged surface over to another. In short, a device that is laying on top of an insulated surface can accumul
  5. The biggest thing I have to take away from this is, "so we can do proper mixing and stuff." I hope this means that we will finally get good, consistent sound levels for once, and hopefully some better ambiance. (Ambiance as in how the sound is colored by the room, mimicking a room one doesn't actually have, etc., or doing proper voice-overs that sound like they are in the same room and so on; not as in "background sound".) Though, it would be interesting to see a tour of the sound equipment, since sound is honestly as wide of a topic as cameras. One can't just toss in a lapel/shotgun mic and c
  6. The SATA controller can still be on the PCIe fabric and thereby be affected. It doesn't have to be a PCIe add-in card for this to be the case either. A lot of peripherals sit on the PCIe fabric, especially if they are part of the chipset, since that is these days almost exclusively connected via PCIe lanes. (Though, I would suspect that AMD hasn't just shoehorned in resizable BAR without ensuring that their own chipsets and associated components support it.)
  7. My guess would be that it might resize the BAR too often; resizing the BAR incurs a performance penalty each time one does it, and likely requires a quiet bus, since one redefines the packet header format itself. And resizable BAR changes a property of the whole PCIe fabric, so it could potentially interfere with NVMe storage and other PCIe-connected storage controllers, among other things, depending on how the fabric is divided. With DirectStorage becoming a thing, this is just another can of worms sprinkled on top, but I haven't read into this all that much. Resizab
  8. Resizable BAR doesn't increase the width of our bus. All that it does is allow the system to increase the memory address field in the PCIe packet header. If increased, it also increases the packet header size and the overhead associated with it. This in fact reduces our peak bandwidth slightly, and could theoretically decrease performance. (There is a quick sketch of the overhead arithmetic after this list.) But the problem you describe sounds more like the graphics engine and GPU driver having a fight.
  9. Having already explained before what resizable BAR is and why it really isn't a worthwhile feature to even talk much about: resizable BAR does allow us to change the size of the memory address field in the PCIe packet header, a fairly minor thing in itself, allowing us to reach a 38-bit address range, or 512 GB of memory. (And yes, 512 GB needs 39 bits, but PCIe addresses byte pairs.) There is also "Expanded resizable BAR" that allows a 263-bit address range, or enough to give every single atom in all of the universe its own address; a bit overkill to say the least. (th
  10. In regards to Linus' soldering, I will give it a pass for this project, but it sure isn't pretty. It could need a touch more flux, or rather, a fair bit less dwelling with the iron, so as to not have all the flux burn off and the solder thereafter oxidize. (Though, I do electronics manufacturing professionally, so I build things to a standard; the solder quality shown in the video doesn't pass in production.) I myself would though have just used a female pin header and soldered the wires to it for a nice pluggable connection. (Or more likely used a crimped connection for even easier assembly.)
  11. Watching someone that doesn't have much PC building experience is at the best of times scary, at the worst just facepalm worthy. I usually guide people through their first PC build, to ensure that they don't slaughter their own rig before it sees its first boot.... Wiggling add-in cards is a bad move.... Turning things upside down is as well. Differentiating between PCIe power cables and EPS ones is something that has long made me think that EPS cables should have been the only standard for both motherboards and GPUs.... (This almost became a thing in the server sp
  12. A largely "correct" video, but with a few inaccuracies sprinkled in here and there. The animation at 1:40 is a bit abhorrent, as the blips indicating current flow run back and forth like maniacs. This isn't in line with how the circuit would actually operate and could confuse viewers. VRMs have some other intricacies that could be interesting to note, like how the socket has a Vsense pin, since contact resistance in the socket, trace resistance over the board, and sometimes even the PCB of the CPU itself can introduce voltage drops that are too large for stabl
  13. I personally have a couple of hard drives in my rig, mainly for bulk storage. I rarely find myself needing to transfer all that many files at once, so the performance drop there isn't something I mind. The thing I find important is generally price per GB, and SSDs are frankly just more expensive. I have been thinking of moving my bulk storage to a NAS, but then my network connection would be the main bottleneck, making HDDs plenty fast enough. In the end, my OS disk is an SSD that houses the OS and anything that insists on living on the C drive. A few things have th
  14. On the latest WAN Show, Luke and Linus talked a bit about how expensive streaming is.

    But I think they missed some key details in regards to streaming.


    There are multiple ways of implementing video streaming as a service. If a given stream has a lot of viewers, like hundreds, then having a centralized server for the stream is a fairly wise idea, especially if that server in turn streams to sub-servers that then stream out to the viewers. This can make the network bottleneck for the streamer itself fairly negligible. (A sketch with rough numbers follows after this list.)


    But if the stream has few viewers, like fewer than 10, then peer-to-peer works fairly effectively. This requires no server other than for setting up the stream itself.

    And peer-to-peer streaming is a fairly old solution, and there exist ways to ensure that each peer doesn't actually know the IP of the other side, although this isn't always important and does usually add a bit of latency.


    Though, if one sets up a service where one can reasonably expect that most streams will have a lot of viewers, then one won't bother implementing the peer-to-peer solution as a fallback for tiny streams.


    In regards to voice chats: considering that 96 kb/s is the highest sound quality Discord supports, and 64 kb/s is the standard that few ever seem to change away from (nor does Discord seem to save it), this isn't all that intensive. (This is sketched out after this list too.) And I wouldn't be too surprised if Discord's video quality is also on the lower side.


    Pictures: the largest they support without Nitro is only a measly 8 MB, 50 MB with Nitro.
    But considering that storage costs around 10-20 cents per GB, and a Nitro account costs 5 USD per month, they can afford a fair bit of storage through that.


    Text storage, on the other hand, is the trivial part.

    If one posts a 2,000-character post every second, year round, using some of the more arbitrary parts of Unicode so as to need 4 bytes per character, then this is only about 235 GB per year (the arithmetic is written out after this list). But I suspect they have actual spam protection to make sure people can't do this, because no real person would write that much text.


    In regards to how many have Nitro.

    In the community I am part of (furries), having Nitro is somewhat common; about 25-40% of everyone I know has it. (The best sign of this is if someone posts an "emoji" that isn't available on the server itself or part of the standard offerings.)


    Though, personally, I think Discord shouldn't sell, nor even go public.

    They practically only have things to lose as a platform by selling themselves to another company, or even going public and having shareholders breathing down their necks. After all, if Discord were to sell itself, then whoever the new owner(s) are will want to see a return on their investment.

  15. Bottlenecks in computing systems are a fairly nuanced topic. In a typical "gaming" PC, though, we are mostly concerned about CPU and GPU performance; memory bandwidth comes in as a close second, followed quickly by storage bandwidth/latency, and overall network latency if one plays online. Though, to be fair, there is a lot of stuff happening in the background, outside the scope of the hardware itself, that affects performance by a lot. One of the bigger things to consider for parallel workloads is Inter Process Communication (IPC, not to be confused wit
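
To put rough numbers on the resizable BAR posts above (items 8 and 9), here is a minimal Python sketch. It assumes a 256-byte max payload and ignores link-layer framing; the 3-DW vs 4-DW figures are the standard PCIe memory-request TLP header sizes for 32-bit and 64-bit addressing.

```python
# Payload efficiency with a 32-bit vs 64-bit address field in the TLP header.
MAX_PAYLOAD = 256          # bytes per TLP payload (a common default)
HDR_32BIT = 3 * 4          # 3 DW header with a 32-bit address field
HDR_64BIT = 4 * 4          # 4 DW header with a 64-bit address field

for name, hdr in (("32-bit", HDR_32BIT), ("64-bit", HDR_64BIT)):
    print(f"{name} addressing: {MAX_PAYLOAD / (MAX_PAYLOAD + hdr):.1%} efficiency")
# 32-bit addressing: 95.5% efficiency
# 64-bit addressing: 94.1% efficiency  -> the slight peak-bandwidth loss

# And the 512 GB figure from item 9: a 38-bit address range over byte pairs.
assert 2**38 * 2 == 512 * 2**30   # 512 GiB
```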
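The streaming comparison in item 14 is easy to sketch as well. The 6 Mb/s bitrate below is my own assumed figure for a typical 1080p stream, not something from the WAN Show.

```python
# Streamer-side upload: peer-to-peer vs. uploading once to a central relay.
BITRATE_MBPS = 6.0   # assumed bitrate of the outgoing stream

def p2p_upload(viewers: int) -> float:
    """Peer to peer: the streamer sends one copy per viewer."""
    return BITRATE_MBPS * viewers

def relayed_upload(viewers: int) -> float:
    """Central server: the streamer uploads once; the server fans out."""
    return BITRATE_MBPS

for v in (5, 10, 100):
    print(f"{v:>3} viewers: p2p {p2p_upload(v):6.1f} Mb/s, "
          f"relayed {relayed_upload(v):4.1f} Mb/s")
#   5 viewers: p2p   30.0 Mb/s, relayed  6.0 Mb/s
#  10 viewers: p2p   60.0 Mb/s, relayed  6.0 Mb/s
# 100 viewers: p2p  600.0 Mb/s, relayed  6.0 Mb/s
```

Below roughly 10 viewers the peer-to-peer upload still fits an ordinary home connection; at hundreds of viewers, only the relayed model is viable.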
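The voice-chat point from the same post checks out with equally simple arithmetic: even someone talking nonstop at Discord's bitrates produces a trivial amount of traffic.

```python
# Data volume of a continuous voice stream at the 64 and 96 kb/s tiers.
SECONDS_PER_MONTH = 30 * 24 * 3600

for kbps in (64, 96):
    gb = kbps * 1000 / 8 * SECONDS_PER_MONTH / 1e9
    print(f"{kbps} kb/s nonstop: ~{gb:.0f} GB per month")
# 64 kb/s nonstop: ~21 GB per month
# 96 kb/s nonstop: ~31 GB per month
```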
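And the text-storage arithmetic from item 14, written out. The 10-20 cents per GB and 5 USD Nitro price are the figures from the post itself.

```python
# Worst case: a 2,000-character post every second, year round,
# with every character needing 4 bytes of UTF-8.
POST_BYTES = 2000 * 4
SECONDS_PER_YEAR = 365 * 24 * 3600

print(f"{POST_BYTES * SECONDS_PER_YEAR / 2**30:.0f} GiB per year")  # ~235 GiB

# For scale: at 10-20 cents per GB, one 5 USD month of Nitro
# pays for roughly 25-50 GB of storage.
print(f"{5 / 0.20:.0f}-{5 / 0.10:.0f} GB per Nitro-month")
```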