Everything posted by JohnSmith2

  1. Interesting. So this implies I could probably achieve the same efficiency in an x16 or x4 slot if I artificially restrict the clock speeds. I should have recorded the clock speeds while testing.
  2. I render a lot of 3D animations with multi-GPU setups, and I have experimented a lot to optimize for power efficiency rather than render times. I've found that power limiting the GPUs to 50% results in a pretty large efficiency gain (more frames per watt-hour), but one thing I didn't expect to make a large difference was how many PCIe lanes the GPU is getting. Because I'm using a consumer platform (AMD 7950X, X670 motherboard), the motherboard I use doesn't have very many PCIe lanes. I was experimenting with the GPUs in different slots to make room for a network card, and this is what I observed:

     PCIe Setup: Lanes (1x EVGA SC2 GTX 1080 Ti) | Redshift Vulture Benchmark Render Time (seconds) | Average GPU Power Draw (watts, measured by GPU-Z)
     x16 | 733 | 122.4 W
     x4  | 755 | 116.5 W
     x2  | 790 | 81.1 W

     (I only used one GPU for these benchmarks, and it was the same GPU, just moved to a different slot on the motherboard. All PCIe slots tested are PCIe 4.0.)

     From x16 to x4, the differences are pretty small, but I was shocked to see the difference from x4 to x2: 35 watts fewer, but only a 4.6% reduction in performance. Not sure if you would notice something similar for newer GPUs or if this is architecture dependent.

     Of course, whole-system power draw is ultimately what matters, but even when I add in the consumption of the other components and measure the power draw from the wall, this is still an efficiency gain of 2% to 4%. Not huge, but unintuitive to me, especially considering there is an efficiency loss of about 1% to 2% from x16 to x4 mode. Any idea why the power draw is so much less when running in x2 mode, considering the x4 power draw is so similar to x16?
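     A quick way to compare the three configurations is energy per benchmark run (render time × average power). Below is a small sketch of that arithmetic using only the numbers from the table above; system_overhead_w is a placeholder I made up for the non-GPU draw, not something I measured.

```python
# Energy-per-run comparison using the GPU-only numbers from the table above.
# system_overhead_w is an assumed placeholder for the rest of the machine's
# draw (CPU, board, fans); substitute a real wall measurement.

runs = {
    "x16": {"seconds": 733, "gpu_watts": 122.4},
    "x4":  {"seconds": 755, "gpu_watts": 116.5},
    "x2":  {"seconds": 790, "gpu_watts": 81.1},
}

system_overhead_w = 100.0  # assumption, not a measurement

for lanes, r in runs.items():
    gpu_wh = r["seconds"] * r["gpu_watts"] / 3600                        # GPU-only Wh per run
    sys_wh = r["seconds"] * (r["gpu_watts"] + system_overhead_w) / 3600  # with assumed overhead
    print(f"{lanes:>3}: GPU-only {gpu_wh:5.1f} Wh/run, whole-system (assumed) {sys_wh:5.1f} Wh/run")
```

     GPU-only, the x2 run uses roughly 27% less energy per run than x4, but the rest of the system keeps drawing power for an extra 35 seconds, which is why the gain measured at the wall shrinks to a few percent.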
  3. I have an MSI X670-P Wi-Fi motherboard with the following layout: Currently I have the OS boot drive in M2_1, and two 2-slot GPUs in PCI_E1 and PCI_E3. I would like to add a PCIe 3.0 x8 networking card, and I'm OK with it running at x4 speeds, but x2 speeds are a little too slow.

     Given there are extra M.2 slots on the board, I think it should be possible to convert one of them to an x4 PCIe slot. The only problem is M2_2 and M2_4 are completely covered up by the GPUs, and M2_3 is partially obscured by the top GPU. Even if I moved the boot drive to one of the lower M.2 slots, the CPU cooler (Noctua NH-D15) covers up M2_1, so I couldn't use an adapter there either.

     I have seen some adapters, like this one (https://www.amazon.com/ADT-Link-Extender-Graphics-Adapter-PCI-Express/dp/B07YDGK7K7), with a long cable, but it doesn't look like it would be very flexible. If it is long enough, I feel like maybe it could go in M2_3, twist out around the GPUs, and then fit in the case's slots that are beneath the motherboard; but maybe that is wishful thinking. Here's a picture:

     Do you have any suggestions for how I could get this to work? Are there M.2-to-PCIe adapters with long, thin, flexible cables (so I can easily route them around the GPUs)? Or do those not exist?
  4. I have two Windows 10 computers directly networked together, but when I copy a single, huge file (20GB) between them, I get performance that looks like this:

     For some reason, after a certain period of time, the copy speed falls off a cliff and then ramps back up. I can't really tell what's happening. When I copy the file over a gigabit or 2.5 gigabit connection, I never observe this behavior. But when the connection is 750 MB/sec (first example) or 2 GB/sec (second example), I see this.

     I guess it could be the SSD write caches filling up, but I thought that would look different, something like this for example:

     Given that's not what I'm seeing, and considering I see the same behavior no matter which pair of SSDs I'm copying between (Samsung 980 Pro, Samsung 970 Evo Plus, TeamGroup A440 Pro, Samsung MZVLQ512HBLU), I suspect it's something else. So maybe it's a Windows problem, or maybe it's related to how my network adapters are configured. I'm not sure.

     I've seen this with two networking setups: two Intel XL710-QDA2 cards connected with a QSFP+ cable, and an OWC 10G Ethernet card connected to a Mellanox ConnectX-2 card using an SFP+ to Ethernet adapter. Both setups gave me file copies with V-shaped dips like that. Maybe there is a similarity in how the adapters are configured by default. Maybe there's a certain setting that could cause this? Any of these, maybe?

     One computer has a Ryzen 7950X CPU and the other has an Intel i7-9750H CPU, in case that's relevant. Both are running Windows 10 21H2 (build 19044). Any ideas what causes this behavior, and what I should try changing to fix it?

     EDIT: I tested copying the same 20GB file directly between the SSDs on the same system (so not over the network). They copied at a constant 1.8 to 2.2 GB/sec (depending on which pair of SSDs I was testing) over the entire transfer. So I'm pretty sure this is networking related, and not SSD cache related.
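     One more way to isolate the network from the disks is a memory-to-memory transfer between the two machines, with no file I/O at all. Here's a rough sketch of the kind of test I mean (Python 3.8+ on both ends; the port number and chunk size are arbitrary choices, not anything the NICs or Windows require). If this shows the same V-shaped dips, the SSDs are definitely off the hook.

```python
# Minimal memory-to-memory throughput test between the two machines, to take
# the SSDs out of the picture entirely. The port and 4 MiB chunk size are
# arbitrary; total size roughly matches the 20GB file copy.

import socket
import sys
import time

PORT = 50007             # arbitrary test port (assumption)
CHUNK = 4 * 1024 * 1024  # 4 MiB per send/recv
TOTAL = 20 * 1024**3     # send ~20 GB

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            received = 0
            start = last = time.time()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
                now = time.time()
                if now - last >= 1.0:  # report the running average once per second
                    print(f"{received / (now - start) / 1e6:.0f} MB/s average")
                    last = now

def client(host):
    buf = b"\x00" * CHUNK
    sent = 0
    with socket.create_connection((host, PORT)) as s:
        start = time.time()
        while sent < TOTAL:
            s.sendall(buf)
            sent += len(buf)
    print(f"sent {sent / 1e9:.1f} GB in {time.time() - start:.1f} s")

if __name__ == "__main__":
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])
```

     Run it as "python nettest.py server" on one machine and "python nettest.py client <server-ip>" on the other while watching Task Manager for the same dip pattern.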
  5. Is there any kind of adapter I can attach between a monitor's HDMI input port and an HDMI cable that can trick any computer into thinking it's a different kind of monitor than it actually is? Essentially, I want to copy the EDID information from one monitor, write it to an adapter, and then spoof it on another monitor. For example, I want my Asus VX288 to show up as an LG 43UN700-B (specifically when the LG is in 1080p 60 Hz mode). I don't think this would cause any issues since they both run at 1080p 60 Hz, so there wouldn't be a resolution or framerate mismatch.

     Why? Long story: I have a bunch of computers that I hook up through an HDMI matrix and output the display to an LG 43UN700-B through its four HDMI input ports. Because the LG supports PBP mode, I can view all four in 1080p resolution simultaneously, and the HDMI matrix allows me to switch which quadrant each laptop appears in seamlessly. Because the HDMI matrix (a TESmart) supports EDID emulation, each laptop doesn't treat it as though a monitor is being unplugged and plugged back in each time I switch quadrants. This is desirable because otherwise all of my open programs and windows get shuffled around.

     I want to add a fifth laptop, but I need to add another monitor to be able to see all 5 of them simultaneously. The issue is that because it is a different kind of monitor (an Asus), Windows detects the monitor as being disconnected when I use the HDMI matrix to move one computer from the Asus to the LG, or vice versa. This doesn't happen when I use the HDMI matrix to move quadrants within the LG, because each HDMI port on the LG 43 has the exact same EDID information.

     I need to actively monitor what is happening on ALL of the laptops 24/7, and I need to be able to switch between them at a moment's notice. So there are only two solutions I can think of:

     Buy another LG 43; but I absolutely do not have room for it, so it's not really a solution.

     Somehow trick the computer into thinking the Asus is the LG; but I don't know if this is possible.
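     For background on what EDID emulation is actually copying: an EDID is a 128-byte block, and the identity Windows keys on (manufacturer ID, product code, serial) lives in its first handful of bytes. Here's a rough sketch of decoding those fields from a saved dump; the "edid.bin" filename is hypothetical, and the raw bytes would have to be exported first (e.g. from the registry or a monitor-info tool).

```python
# Rough sketch: decode the identity fields of a 128-byte EDID dump.
# "edid.bin" is a hypothetical file containing the raw EDID bytes.

import struct

with open("edid.bin", "rb") as f:
    edid = f.read(128)

assert edid[:8] == bytes.fromhex("00ffffffffffff00"), "not an EDID block"
assert sum(edid) % 256 == 0, "bad checksum"

# Manufacturer ID: three 5-bit letters packed big-endian into bytes 8-9.
mfg_raw = struct.unpack(">H", edid[8:10])[0]
mfg = "".join(chr(((mfg_raw >> shift) & 0x1F) + ord("A") - 1) for shift in (10, 5, 0))

product_code = struct.unpack("<H", edid[10:12])[0]  # little-endian, bytes 10-11
serial       = struct.unpack("<I", edid[12:16])[0]  # little-endian, bytes 12-15

print(f"manufacturer: {mfg}, product: 0x{product_code:04X}, serial: {serial}")
```

     If an inline spoofer presents the LG's values in these fields (and the same timing descriptors), Windows should treat the Asus as just another input on the LG.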
  6. I have an HP EliteDesk 800 G3, and for some reason, after I wake it from sleep, the mouse briefly stops responding for about 2 to 5 seconds, starting at a random point 1 to 3 seconds after waking; outside of that window it responds normally. This is true for all of the USB ports on the EliteDesk except for one of the front ones; on that one front port, the mouse keeps responding immediately after waking. Why might this be happening?

     The USB Selective Suspend setting in the power plan is set to disabled. I have tested every single port with multiple mice. This does not happen upon booting or restarting; it only happens when waking from sleep, but it does happen consistently.

     I should note this was a used EliteDesk I bought on eBay, and there were a bunch of bent pins in the CPU socket; I bent the pins back into place, and everything else on the system works great, so it seems unlikely to be related to the bent pins considering everything else works flawlessly. The CPU is a Pentium G4560T.
  7. I want to directly connect two computers using two Intel XL710-QDA2 40Gbps PCIe 3.0 x8 cards. However, the only slot I have available on my MSI X670-P Wi-Fi motherboard is a PCIe 4.0 x2 slot (the physical slot size is x16). See "PCI_E4" in the diagram below:

     If the card I would put in the other computer is in a PCIe 3.0 x16 slot, does this mean I will get 10 Gbps speeds (since the Intel card is x8 but would be limited to x2 on the MSI motherboard)? Or would it not function at all, or at a different speed?

     I did a similar test on these computers using Mellanox ConnectX-2 10G SFP+ cards, and I was getting about 500 MB/sec in the PCIe 4.0 x2 slot, which was higher than the 250 I expected given those cards are PCIe 2.0 x8 (I think).
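     The arithmetic behind the question, roughly: the link trains to the lower generation and the lower lane count of the card and the slot, and each generation has a different per-lane rate. A small sketch of that calculation (the per-lane figures are nominal rates after line encoding, before protocol overhead):

```python
# Nominal PCIe link bandwidth: the link trains to the lower generation and
# the lower lane count of the slot and the card. Per-lane figures are the
# usual approximations after line encoding, before protocol overhead.

PER_LANE_GB_S = {2: 0.500, 3: 0.985, 4: 1.969}  # GB/s per lane by PCIe generation

def link_bandwidth(card_gen, card_lanes, slot_gen, slot_lanes):
    gen = min(card_gen, slot_gen)
    lanes = min(card_lanes, slot_lanes)
    return PER_LANE_GB_S[gen] * lanes

# Intel XL710-QDA2 (PCIe 3.0 x8) in the board's PCIe 4.0 x2 slot:
print(f"XL710 in PCI_E4: ~{link_bandwidth(3, 8, 4, 2):.2f} GB/s")       # ~1.97 GB/s

# Mellanox ConnectX-2 (PCIe 2.0 x8) in the same slot:
print(f"ConnectX-2 in PCI_E4: ~{link_bandwidth(2, 8, 4, 2):.2f} GB/s")  # ~1.00 GB/s
```

     By this math the XL710 should train at PCIe 3.0 x2, around 2 GB/s (roughly 15-16 Gbps), so it would be capped well below 40 Gbps but above 10 Gbps, assuming the card is happy running at x2; and the ConnectX-2 trains at 2.0 x2, about 1 GB/s, so the 500 MB/sec I measured wasn't even saturating the slot.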
  8. Thanks, this helps. The UPS is 6 years old now. This is the exact model: https://www.cyberpowersystems.com/product/ups/lx1325g/
  9. I have an i7-5960X (8 cores, 16 threads) and two GTX 970s that are all running at full load 24/7. The total system power usage is typically around 450-550 watts when everything is under load. It has a 1000 watt power supply and is plugged into a UPS with a battery backup that is rated for 810 watts.

     When the electricity in our house goes out or the lights flicker briefly (common in the spring/early summer), the UPS starts emitting a constant tone. However, rather than continuing to power the computer like I would expect, the computer still shuts off anyway, defeating the purpose of the battery backup in the UPS. I looked up what the constant tone means in the manual, and it says this is because the 810 watt capacity was exceeded (so it shut off power to the computer), even though under full load the system only draws 450-550 watts (the UPS tells me this). So how does that happen?

     Whenever the house has some kind of electricity problem (the power going out or lights flickering) because of a storm or whatever, is it possible that the computer can suddenly draw more watts than the 810 watt capacity? I just have no idea why this happens given the machine never gets close to 810 watts even under full GPU + CPU load when the electricity in the house is stable. It is a Thermaltake Toughpower 1000W Gold power supply and a CyberPower 810 watt battery backup UPS.
  10. Keep in mind that was just one example that I linked. They are everywhere on eBay, where the disclosure is more obvious. Do you think they are getting more scalpers or innocent people?
  11. I've been seeing this for months. I do of course feel bad for those people, but I just wonder if it's actually getting any of the people who run the bots.
  12. I have seen tons of listings like these on eBay, where the seller is just selling a printed-out picture of the RTX 3080/3090 in order to trick bots into bidding on it without actually having to sell them a real card. I guess I don't really understand how the bots work; are they actually going through with the transaction and purchasing the printed image for that price? Or does somebody step in later on, realize the listing is fake, and cancel the transaction? Or do they just dispute it afterward? In other words, are people actually making any money selling images of RTX cards?
  13. No, I've never owned thermal paste, so I'd have to go out and buy some. Do you think it's worth it (will it actually keep it cooler)?
  14. I have an Asus ROG G752VY laptop (2015) that is almost always running at 95 degrees Celsius when the CPU is under load. In fact, I think it's throttling somewhat, because Windows never reports it as running at 100% load even when I run Cinebench. Occasionally it even restarts itself (I assume it's hitting Tj max; sometimes it jumps to 100 degrees), but this is uncommon for the most part (it has maybe happened 3 or 4 times total over the past year).

      I disassembled it and found that the copper heat pipe is basically attached directly to the CPU with some extremely thin crust (it's barely visible) that I assume was thermal paste? It looks like it has all worn away, or maybe very little was applied in the first place. Cleaning out the dust didn't help thermals much.

      Is it worth attempting to apply new thermal paste? Is there any way to install stronger fans? I'm not sure how I could better cool this thing without underclocking the CPU (not desirable). I use it as part of my render farm, so I have the CPU running all-core full load basically 24/7. What would you do?
  15. I'm sorry, I actually have a 750, not an 850. I edited my post.
  16. I have a WD SN750 1TB NVMe SSD purchased in late 2019 and a Samsung 970 Evo Plus 2TB purchased in late 2020. I compared their performance in my system using CrystalDiskMark and got these results: WD: Samsung: Is this what you would expect to see? I'm surprised that the random performance on the Samsung 970 Evo Plus is worse than on the WD. Does that sound right to you? If it helps, the WD is my boot drive (in the motherboard's M.2 slot) while the 2TB is in a "Hyper M.2 x4" card in the bottom PCIe slot of an Asus X99 Deluxe motherboard.
  17. Turns out it was not the computer at all; a power strip mounted to the desk directly above the computer had an AC adapter / power brick for a 15-year-old Magnavox monitor plugged into it, and the adapter itself had started leaking some clear, ghastly-smelling fluid. Because of the positioning, the exhaust from the computer was blowing the fumes upward, so I was getting a strong whiff of it. The reason the smell had disappeared, as mentioned in my post above, was that I had unplugged that particular monitor since my first post for an unrelated reason, so the adapter was no longer generating the smell. Moral of the story: AC adapters for old computer equipment can overheat / melt / leak and smell horrendous.
  18. Thanks, but I wasn't able to find the video. Are the steps mentioned in this article what you're referring to (I assume it's not different whether it's gigabit vs. 10 gigabit, right)?
  19. I don't know what is involved in the "configuration" that you've mentioned. Is there specific software required to make this work? If I simply plug-and-play, will I not be transferring files between the editing rig and the server at 10 gigabit?
  20. To make sure I'm understanding, you're referring to this, right? Where the "1Gb" connections are CAT6 Ethernet cables, and the "SFP 10Gb" connection is the one I've linked here? So I would only need to purchase that one cable and two of those PCIe cards?
  21. Really? If they are connected to the main switch with CAT6 cables, won't the fact that the ports on the switch are 1 gigabit limit the speeds? I guess they only need the 1 gigabit switch to see each other over the network, but then use the direct connection to transfer the files? I'm not sure I understand.
  22. I would like to set up 10 gigabit networking between two desktop computers. I have several other laptops that will be on the network, but the fastest they could connect is gigabit, since they all have gigabit Ethernet ports. One desktop is a "render server", which is responsible for allocating work out to "render clients" (the laptops), which perform 3D rendering and write frames to the SSD on the render server. A 1 gigabit connection is sufficient for the render clients. However, I do editing on a desktop which needs to access data on the render server's SSD as fast as possible, ideally at 10 gigabit.

      I'm trying to understand how to set this up. I have created a diagram below showing what I expect this setup would look like: Does this make sense? Will this work as I intend? I want to ensure that the editing rig can access files on the render server at 10 gigabit.

      These are the components I'm looking at, specifically:

      SFP+ PCIe card x2 (one for the editing rig and one for the server): https://www.amazon.com/dp/B01LZRSQM9/ref=twister_B08NX3K51F?_encoding=UTF8&psc=1

      SFP+ switch: https://www.amazon.com/MikroTik-CRS305-1G-4S-Gigabit-Ethernet-RouterOS/dp/B07LFKGP1L/ref=sr_1_3?dchild=1&keywords=sfp+switch&qid=1607806524&s=electronics&sr=1-3

      SFP+ cable x2 (one for the editing rig and one for the server): https://www.amazon.com/Blue-Cable-10G-SFP-SFP-H10GB-CU3M/dp/B01LB06TCK/ref=sr_1_10?dchild=1&keywords=sfp+cable&qid=1607806561&s=electronics&sr=1-10

      Router: any arbitrary router (I don't think it needs any special requirements for 10 gigabit networking, right?)

      1 gigabit switch: any arbitrary 1 gigabit switch.

      CAT6 Ethernet cables for all "1Gb" connections.

      With this setup, will the editing rig be able to access files on the render server at 10 gigabit speeds? Assume the SSD is fast enough. Do you recommend different components than what I have linked above?
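      Once the cards and DAC cable are in, one sanity check is confirming that the 10Gb link actually negotiated at 10,000 Mbps before worrying about anything else in the chain. A rough sketch of that check on either desktop (uses the third-party psutil package; interface names are whatever Windows assigns them):

```python
# Print the negotiated speed of every active network interface (speeds are
# reported in Mbps). Requires psutil: pip install psutil

import psutil

for name, stats in psutil.net_if_stats().items():
    if not stats.isup:
        continue
    print(f"{name}: {stats.speed} Mbps, MTU {stats.mtu}")
    # The SFP+ link between the editing rig and the render server should
    # show 10000 here; the 1Gb links to the switch should show 1000.
```

      Actual file-transfer speed will then come down to the server's SSD and the share itself, but at least the link is verified.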
  23. I woke up this morning and the smell is completely gone. I don't know how or why it would have only lasted one day, but I will keep a close eye (nose?) on it to see if I encounter this issue again.
  24. I have an M.2 2TB NVMe drive that needs to be accessed by six Windows 10 computers simultaneously. Currently, it is in my only desktop computer and is being shared over the network with the other machines. The problem is that I use the desktop for multiple purposes since it has the best hardware, and there are many times that I need to restart it or run tasks that interrupt its function as the render server for the other machines, which can cause problems.

      Ideally I would have a low-power computer acting as the render server with the SSD, so all my machines could access it 24/7. I would prefer it to be as low power and physically small as possible. My Windows 10 editing computer would also need to connect to it to edit the output from the render clients, and ideally, this connection would be as fast as possible (a 10Gb connection if possible). A 1Gb network connection to the render clients is sufficient.

      The render server should be able to run Python code, and the files on the SSD should obviously be accessible by multiple Windows computers simultaneously. I don't think the OS needs to be Windows; I assume any Linux variant would work as long as it can run Python. This diagram sums up what I'm looking for.

      What hardware (motherboard and processor) would best fit these requirements for the render server? I'm OK with adding a 10 gigabit network card if necessary. Cheaper is better, but functionality and power usage are my primary concerns, not price.
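      For a sense of how light the Python side of this is: the allocation logic could be little more than a loop that watches a job folder on the shared SSD and hands each queued job to a client by renaming it. Everything below (paths, folder names, the polling interval) is a hypothetical sketch, not my current setup.

```python
# Hypothetical sketch of the render server's allocation loop: jobs are plain
# files in a "queue" folder on the shared SSD, and claiming one is a rename
# into a per-client folder. Paths and timing are placeholders.

import os
import time

SHARE = "/mnt/render_ssd"          # assumed mount point for the shared NVMe drive
QUEUE = os.path.join(SHARE, "queue")
CLAIMED = os.path.join(SHARE, "claimed")

def assign_next_job(client_name):
    """Hand the oldest queued job to a client by renaming it into their folder."""
    os.makedirs(os.path.join(CLAIMED, client_name), exist_ok=True)
    for job in sorted(os.listdir(QUEUE)):
        src = os.path.join(QUEUE, job)
        dst = os.path.join(CLAIMED, client_name, job)
        try:
            os.rename(src, dst)    # atomic on the same filesystem
            return job
        except OSError:
            continue               # another client grabbed it first
    return None

if __name__ == "__main__":
    os.makedirs(QUEUE, exist_ok=True)
    os.makedirs(CLAIMED, exist_ok=True)
    while True:
        # In the real setup clients would request work over the network;
        # here the server just prints what it would hand out.
        job = assign_next_job("client-1")
        if job:
            print("assigned", job)
        time.sleep(5)
```

      Renaming within the same filesystem is atomic, so two clients can't claim the same job, which is part of why almost any small, low-power box that can share the drive to the Windows machines and run Python 3 should be enough.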
  25. As far as I can tell, everything looks fine, visibly. Could the two top fans above the radiator be the source of the smell? Could the motor in the fans cause a smell? Those fans are still spinning, but maybe the motor is about to burn out or something.