Search the Community
Showing results for tags 'efficiency'.
-
In the WAN Show from 12th April, around the third hour, Linus and Luke talk about gentle acceleration being more efficient than harder acceleration. This is a very common misconception that I've encountered way too often while working in this field. Engines work optimally at their optimal load, which for ICE engines is close to full load; for electric motors it varies greatly by design. For optimal efficiency and best fuel mileage, you'd want to accelerate at near full load in the highest gear, up to the revs where the friction of the piston against the cylinder causes too much drag and the speed where wind resistance causes too much drag. In fuel-mileage competitions, when a driver reaches this speed, they lift off the gas and let the kinetic energy turn the engine, or they switch the engine off and shift into neutral if that saves more fuel than it takes to restart the engine. I'm not saying anybody outside a competition should drive like this, that would be horrible, but don't feather the gas pedal either. Keeping the engine at minimal load all the time won't help it, and you're not winning on efficiency either.
-
Let's take a look at one method we can use to measure the efficiency of a graphics card (GPU) at various Power-Limits. With electricity costs soaring globally and the need to reduce heat emissions, running Folding@Home can be a delicate balancing act between contributing to a worthwhile cause and keeping your electricity bill low and the temperature in your home at a reasonable level. Modern GPUs, like CPUs, have a power-efficiency curve with diminishing returns: at the upper end, each extra watt buys a smaller increase in yield. So our goal is to find the most efficient Power-Limit to run a GPU at. We can define Efficiency as the Yield (PPD) divided by the Power Draw (W); for convenience we will use kPPD/W as the unit of Efficiency.

What you will need:
- Folding@Home Advanced Control
- Harlam's Folding Monitor (HFM.NET) (Windows only, or via Wine on Linux)
- nvidia-smi (bundled with the NVIDIA drivers on Windows and Linux)
- Excel or Google Sheets
- An hour or two per GPU

The most rigorous way of measuring efficiency in Folding@Home, given the variable yields of different Work Units (WUs), is to run a GPU at a target Power-Limit for several days, record the aggregate yield, divide it by the power to obtain the efficiency at that Power-Limit, then adjust the Power-Limit and repeat. However, a quick indication of a GPU's efficiency can be had by observing the changes in Yield (PPD) during a single WU as the Power-Limit is adjusted. Frame Time (TPF) is the time required to complete 1/100th of a WU. In this example we will look at an EVGA RTX 2070 Super XC Hybrid (08G-P4-3178-KR) running project 18202 as the WU.

First we need to configure HFM to calculate its estimate of Yield (PPD) using the last 3 frames as the sampling window. A larger sampling window might provide more accuracy but takes more time to measure. Select Preferences in HFM's Edit menu, choose "Last 3 Frames" under "Calculate PPD based on", and click OK. Note that TPF appears to be calculated across all frames, so PPD is the better measurement. Select a GPU to profile, taking note of which Slot on which Host it is running.

Next we need to determine the minimum and maximum Power-Limits supported by the GPU. Open a Command Prompt (Windows) or a terminal (Linux) and enter nvidia-smi -q to query the capabilities of the GPUs installed in the system:

    Power Readings
        Power Management     : Supported
        Power Draw           : 126.81 W
        Power Limit          : 125.00 W
        Default Power Limit  : 215.00 W
        Enforced Power Limit : 125.00 W
        Min Power Limit      : 125.00 W
        Max Power Limit      : 240.00 W

where:
- Power Limit: the value the Power-Limit is currently set to
- Power Draw: the power currently consumed by the GPU
- Default Power Limit: the factory-default Power-Limit
- Min Power Limit / Max Power Limit: the lowest and highest values the Power-Limit can be set to

Here we see this GPU has a minimum Power-Limit of 125W and a maximum of 240W, so we will want to measure yields between these two limits. We will use 25W as the step size and record yields at 125, 150, 175, 200, 225 and 240 watts.

Next, open the Folding@Home Advanced Control application from the task bar. Select the system with the GPU under test, click the "Log" tab to view the log, check the "Filter" option and select the appropriate "Slot" from the drop-down list. Here we can see that this WU checkpoints every two frames. We want a consistent sampling window with the same number of checkpoints in each, as the checkpoint process adds a slight delay that reduces the yield.
In this case we choose to record the yield after an odd percentage has completed, every 6th percentage: we want a sampling interval (6 frames) wider than that used for the yield estimate (3 frames) but with a consistent number of checkpoints (3). It is important to measure the actual Power Draw rather than the set Power-Limit, as at the lower and upper bounds the GPU may have trouble enforcing the limit. Wait until the WU is 5-10% complete before starting measurements. In our Command Prompt (Windows) or terminal (Linux) enter:

    nvidia-smi -i 0 -l 1 --format=csv,noheader --query-gpu=temperature.gpu,power.draw,clocks.gr,fan.speed

which queries GPU 0 (-i 0) on this system and displays the GPU temperature, power draw, graphics clock speed and fan speed once a second. While the sampling window for the currently set Power-Limit is in progress, we use this to estimate the power draw during the window. In the above example, with a 125W Power-Limit, the GPU appears to average around the set value of 125W.

Next we create a spreadsheet to record our values. The first column is our set Power-Limit; the second the observed Power Draw; the third the percentage measurement point; the fourth the TPF in seconds from HFM; the fifth the Yield (PPD) from HFM; and the sixth the calculated Efficiency (column E / column B / 1000) in kPPD/W.

In a second Administrator Command Prompt (Windows) or terminal (Linux), set the GPU to the lowest Power-Limit at the end of a frame:

    nvidia-smi -i <GPU#> -pl <min. power>

In this instance I used: nvidia-smi -i 0 -pl 125. Watch the nvidia-smi window during the sampling interval and record the estimate of the power draw. Populate the Command Prompt or terminal with the next set-point in preparation for when the current sampling window ends. As soon as the current sampling period finishes (watch the log in Advanced Control), change to the next set-point (nvidia-smi -i <X> -pl <Y>) and record the TPF and PPD estimates from HFM for the previous sampling window. It helps to record the TPF and PPD values a couple of times later in the sampling interval; they should be fairly stable after 3-5 frames have completed, which gives you a good estimate of the final values. Since HFM calculates the yield (PPD) over the last 3 frames and our sampling window is 6 frames, you do not have to be super accurate about how soon after the frame completion you change to the next set-point.

Here are the final values. The values seemed inconsistent after the 175W set-point (completed 15:02), so I took measurements adjusting the Power-Limit down from the maximum for comparison. Perhaps the calculations performed on the WU got more complicated around this point? Here is the smoothed efficiency (5-minute average values for PPD and power) for this GPU over the initial test run, from my Zabbix server, for comparison. I then calculated the average efficiency over the two measurements for each of the set-points. We can then create a scatter graph of the data, including a trend line, and display the confidence or "fit" of the trend line (R² value). For this WU on this GPU, efficiency is highest at the lowest Power-Limit and gets progressively worse as the Power-Limit is increased. To put it another way, dropping from 225W, which is close to the 215W default, to the minimum 125W limit costs a 7.53% decrease in PPD for a 44.4% decrease in power.
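If you'd rather automate the bookkeeping, here is a minimal Python sketch of the same idea: poll nvidia-smi for the power draw during a sampling window, average it, and convert an HFM PPD estimate into kPPD/W. It assumes nvidia-smi is on the PATH and GPU 0 is the card under test; the PPD figure (and the example value below) still has to be read off HFM by hand.

```python
# Minimal sketch: average the GPU's power draw over a sampling window and
# turn an HFM PPD estimate into kPPD/W. Assumes nvidia-smi is on the PATH;
# the PPD value still has to be read off HFM manually.
import subprocess
import time

def average_power_draw(gpu_index: int = 0, seconds: int = 60) -> float:
    """Poll nvidia-smi once a second and return the mean power draw in watts."""
    samples = []
    for _ in range(seconds):
        out = subprocess.run(
            ["nvidia-smi", "-i", str(gpu_index),
             "--query-gpu=power.draw", "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        )
        samples.append(float(out.stdout.strip()))
        time.sleep(1)
    return sum(samples) / len(samples)

def efficiency_kppd_per_watt(ppd: float, watts: float) -> float:
    """Efficiency as defined above: yield (PPD) per watt, in kPPD/W."""
    return ppd / watts / 1000

if __name__ == "__main__":
    draw = average_power_draw(gpu_index=0, seconds=60)
    ppd = 1_700_000  # example: PPD estimate read from HFM, not a measured value
    print(f"Avg draw: {draw:.1f} W -> {efficiency_kppd_per_watt(ppd, draw):.2f} kPPD/W")
```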
- 16 replies
- Tagged with: folding, performance (and 4 more)
-
How to get more mileage (wattage) out of my build
ThankGodItsFriday posted a topic in Power Supplies
Hey guys! New guy here, although I've been casually following your vids for about a year now. If you examine the specs on my profile, I have an 850W PSU, and a wattage calculator (on Newegg) gave me an estimate of 700W max for all my hardware. Yet when benchmarking or using 3D modeling software, I rarely push anywhere above 350w. I don't know if my computer is running as efficiently as it should, or if that's a normal readout. Thought I'd throw this question out to you guys: is this normal? Also, I'm not sure if I posted in the right place, because I'm new here. Be patient with me, LTT!
-
Danish tech retailer Proshop has started showing power-consumption efficiency ratings on their monitor listings: https://www.proshop.dk/Skaerm?o=2052&pn=6 The most efficient monitor there only manages efficiency class C. Is it really that impossible to create an energy-efficient panel? I feel like every monitor is stuck at incandescent-light-bulb efficiency levels. And I'm not only talking about the cheap ones; expensive new models are almost the worst in this respect. Every one of them produces more heat than seems reasonable, has minimal cooling and (correct me if I am wrong) has the most shit-tier power supply inside. Would love to get some insight on why that is, and would love to see if someone makes a video (or has already made one) with a proper monitor cooling solution.
- 12 replies
- Tagged with: monitors, efficiency (and 2 more)
-
Hey, I have a Ryzen 5 5625U laptop and I want to maximize power efficiency. Should I leave all cores enabled with the power plan set to "Best power efficiency", or should I disable 2 of the 6 cores and stick with the same power plan? I am asking because if I disable 2 cores, there is a possibility the other 4 will work harder, resulting in higher power consumption than leaving all of them on. Not sure how effective core parking is in action...
-
Planning a new build, experimenting a bit with looks and putting to use parts just lying around. Here it is. The question is, when looking at PSUs, I've got 2 choices: either a Gold PSU around 700W or one of these Platinum ones at 550-560W. Which should I go for? I'm not planning on upgrading this PC; I'll set it up as it is and forever have a spare PC until a part dies. PCPartPicker predicts a peak usage of 544W, but I wonder what the average idle usage would be. Still, the peak is 544W, which is just short of the Platinum-rated ones. So it's just a matter of long-term electricity costs: what would cost less by being more efficient, a Platinum PSU, or a Gold PSU sitting at lower relative load? I can't seem to find efficiency curves anywhere apart from Seasonic's website. Have I set my expectations right? At the same price, I should just go for the Platinum PSU, which in the long term should save a bit of money thanks to its efficiency, especially around the ~50-60% load where peak efficiency sits, at the cost of future upgradability, which I don't need, so it's not a downside.
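For what it's worth, the long-term cost difference can be estimated on the back of an envelope. The sketch below uses assumed efficiency figures (typical 80 Plus Gold vs Platinum numbers at light load) and placeholder usage hours and electricity price, not measured values for any particular unit:

```python
# Back-of-envelope comparison of the two PSU options. The efficiency figures,
# hours and electricity price below are assumptions, not measured data.
def annual_cost(dc_load_w: float, efficiency: float,
                hours_per_day: float, price_per_kwh: float) -> float:
    wall_draw_w = dc_load_w / efficiency  # the PSU pulls more than it delivers
    kwh_per_year = wall_draw_w * hours_per_day * 365 / 1000
    return kwh_per_year * price_per_kwh

LOAD_W = 150   # assumed average DC load (desktop use, not the 544 W peak)
HOURS  = 8     # assumed daily use
PRICE  = 0.30  # assumed price per kWh

gold     = annual_cost(LOAD_W, 0.88, HOURS, PRICE)  # ~88% at light load (assumed)
platinum = annual_cost(LOAD_W, 0.91, HOURS, PRICE)  # ~91% at light load (assumed)
print(f"Gold: {gold:.2f}/yr, Platinum: {platinum:.2f}/yr, "
      f"saving: {gold - platinum:.2f}/yr")
```

With numbers like these the Platinum unit wins at the same purchase price, but only by a few euros a year at desktop loads, so it mostly comes down to which efficiency curve matches your real usage.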
-
From the album: Images
V1200 de-rating efficiency curve, from Cooler Master's website.
- Tagged with: cooler master, de-rating (and 1 more)
-
Custom house cooling loop
Remington posted a topic in Custom Loop and Exotic Cooling
Linus, just watched your 3090 personal upgrade video, and I had a thought: how about integrating the house plumbing with the home-server cooling? I mean water-cooling EVERYTHING. My idea would likely reduce the need for water pumps and reservoirs, and you'd have a constant source of cold water. Two approaches come to mind. The first is to incorporate the server radiators into a reservoir that the house plumbing flows through before it reaches the water heater. The second would be to route the house plumbing directly through all the water-cooled components, giving a constant cold-water supply through your components before the water again heads to the water heater. Either way the water gets pre-heated a bit before the water heater has to do its job, so I expect it might help with the heating needs. These ideas could also be tied into the floor heating. If you value my feedback, please reach out to me, I'd love to contribute. Edit: I know you tried this in Langley and it was janky as balls, but now you have the resources to do it right. And overdo it, like you do.
- 4 replies
- Tagged with: efficiency, water-cooling (and 1 more)
-
Hello guys, I'm looking for some ideas for a high-efficiency PC build for 24/7 use. It should be able to run a TeamSpeak server, one or two game servers (Ark, Minecraft, Terraria, etc.), maybe some FTP or a small NAS-like data share, and possibly a small website, so I would need at least 8 GB of RAM, I guess. The power target for the whole system should be around 15 watts if possible (ideally less when idling, and not more than 30 under full load). I have an old 500 GB M.2 SSD lying around, but other than that I have no spare hardware. I have thought of the recent Raspberry Pi 4, but I do not know if it would be powerful enough (and the RAM could be a bit too small). For total cost (without the SSD) I would be happy if it did not exceed 200 euros/dollars. If you have any suggestions, I would be more than thankful. I have built several gaming PCs, but I have little experience with low-power builds, let alone server builds.
-
You might think DDR5 is just another iteration on the same memory concept we've had since the beginning, but THIS is unlike any RAM you've ever used. Let's see what makes it tick, and how it can power the next generation of motherboards and CPUs.
-
(This is definitely a bit long, but please bear with me till the end; jump to the question in bigger font near the end of the post if you don't want to read everything.)

What inspired me to create this post are some thoughts that occurred to me recently about Nvidia's and AMD's latest GPU releases. Stock shortages aside, we can't deny that computer parts, especially graphics cards, have become increasingly power hungry since Dennard scaling (related to, though not the same as, Moore's Law) broke down around 2006, owing to current leakage caused by quantum effects (e.g. quantum tunneling) at ever-shrinking nodes. One of the ways this problem has been combated so far is through alternatives to x86-64, the biggest perhaps being the ARM architecture family. This has actually been pretty successful already: a quick glance at https://gs.statcounter.com/os-market-share tells us that, looking at the desktop AND mobile OS markets as a whole (despite a few obvious problems with that approach, such as the difference in nature between desktop and mobile devices), Android has already surpassed Windows in market share.

Given that electricity does not come cheap for some, especially in developing countries, not to mention the increased environmental impact of increasingly power-hungry GPUs and even CPUs (see https://www.tomsguide.com/news/ps5-vs-xbox-series-x-with-great-power-comes-greater-electric-bills), this trend of desktop parts needing ever greater power draw is pretty worrying (at least to me). I'm sure no one wants to buy an 800+ or even 1000+ watt power supply just to make sure their computer doesn't randomly shut down in the middle of a gaming session, and even then still end up tripping a breaker (not all that implausible given how common 10-amp breakers on 120V circuits are, at least in U.S. apartments). And while "technically" a driver update can solve the issue, depending on the TDP of the GPU/CPU it could drastically reduce performance; one need look no further than rumors that laptop RTX 3070-and-up GPUs will have up to a 40% performance deficit due to power constraints, especially the Max-Q variants.

Furthermore, Apple demonstrated this past year that there is a lot to be gained in both performance and power efficiency by switching to a custom ARM architecture (although by how much is still debatable, as Apple throttled the Intel CPUs in their Mac Minis and laptops pretty hard). ARM also has a reputation of being much more efficient than x86-64, helped by its widespread use in high-performance smartphones such as the latest Samsung and iPhone flagships. But the deeper I dove into the x86-64 vs ARM efficiency debate, the more confused I got. For example, this webcodr.io article (https://webcodr.io/2020/11/ryzen-vs-apple-silicon-and-why-zen-3-is-not-so-bad-as-you-may-think/) and the following post on this forum (at least the OP - https://linustechtips.com/topic/1214401-apple-and-arm-a-quasi-insiders-thoughts/?tab=comments#comment-13758213) both emphasize that ARM is more efficient than x86-64.
However, here are another three articles/posts (including another post on this forum) that emphasize the specific microarchitectural design of the chips themselves, rather than ARM vs x86-64 per se, when it comes to efficiency, and even outright state that beyond a certain wattage x86-64 and ARM exhibit very similar efficiency (even the webcodr.io article cited above somewhat acknowledges this):
1. https://community.arm.com/developer/ip-products/processors/b/processors-ip-blog/posts/the-final-isa-showdown-is-arm-x86-or-mips-intrinsically-more-power-efficient
2. https://www.extremetech.com/mobile/312076-what-kind-of-performance-should-we-expect-from-arm-based-macs
3. https://linustechtips.com/topic/1157141-how-come-pcs-use-so-little-power-compared-to-other-machines/?tab=comments#comment-13310148

So, as the title states, my question really is: when it comes to maximizing performance per watt, is it mostly about optimizing every layer of your "ecosystem", from the hardware microarchitecture up through the APIs, system applications, and even user applications (the way Apple has always done it, especially with iPhones), OR does ARM in general truly have a performance-per-watt advantage over x86-64 that can be capitalized on without sacrificing too much performance?

P.S. Honestly, as a computer science student looking to do web development but also planning to graduate with a specialization in computer systems (e.g. networking, computer architecture, etc.), I'm not sure I will like the answer either way. Microsoft has seemingly been dragging their feet on Windows on ARM (which is not the same as Windows 10X; Windows 10X is more about competing with ChromeOS than anything else), and if it isn't an x86-64 vs ARM issue, then I would absolutely hate having to learn all the quirks and features of a dozen-plus different microarchitectures for anything I develop (especially if I switch to systems programming) just to get an idea of how to optimize the user experience of my program (e.g. "oh crap, I forgot that ARM Cortex XYZ can only support 2GB of RAM, better utilize storage more", or "oh no, these custom ARM instructions can't carry over to x86-64, so I'd better use a whole other library", etc.). I know modern compilers and interpreters have made this much less of an issue, but I also don't want to go to the other end of the spectrum and spend the rest of my life coding nothing but iOS apps; plus I see web-dev job ads all the time calling for "full-stack" developers or knowledge of a whole slew of languages/ecosystems, like developing for both iOS and Android. Anyway, hope that wasn't too long.
- 20 replies
-
Dear LTT users, is it advisable to use a pressure-optimized fan as the pusher on an AIO radiator and an airflow-optimized fan as the puller? In my case (pun intended) I'm planning to upgrade my fans, since I'm dissatisfied with their noise emissions. I'm currently using a be quiet! Silent Loop 2 120mm in a push/pull configuration plus two additional case fans (all of them 120mm PWM). I intend to replace my AIO fans with a Noctua NF-F12 (93.4 m³/h; 2.61 mmH2O) as the pusher and an NF-S12A (107.5 m³/h; 1.19 mmH2O) as the puller. My reasoning is that the puller doesn't need high static pressure, since the pusher is already taking over that task, and should instead focus on moving away the hot air being pushed through the radiator. Also, being the actual exhaust of my case and thus the most noticeable fan due to its placement, I would benefit from the puller's low noise emission of 17.8 dB at 1200 rpm. But this is just the reasoning in my head. Am I possibly bottlenecking my airflow with this, and/or even generating oscillations with those differently laid-out fan blades? Would it be better to use an NF-A12x25 (102.1 m³/h; 2.34 mmH2O), or the same NF-F12, as the puller as well? My goal is to get the best cooling/noise efficiency out of my system. Thanks in advance!
-
My current system uses a Ryzen 3700X and a GTX 1660. When I'm playing games it usually pulls only 170 watts from the wall, and around 150 watts when rendering in DaVinci Resolve. Now I'm going to build a new system. I'm set on buying an RTX 4060/3070, but I'm still wondering about the CPU. I have been looking at benchmarks between the 13700K and the 7700X. What I found interesting is that the 13700K gets around 5-10% higher fps in gaming than the 7700X, but at the same time consumes about 20% more power on average (courtesy of the video I put here as a reference). My monitor is only a 1080p ultrawide with a 75Hz refresh rate, and I'm going to keep it for my next build too, so I don't need high fps. So here comes my question: which is the most efficient CPU between the 13700K and the 7700X to run games at 75fps with maxed graphics settings? And can I request that Linus create a video on gaming power consumption locked at 60fps, 75fps, 120fps and 144fps across different CPUs and GPUs?
-
I spent quite a long time trying to find a comprehensive test of CPU power consumption relative to computational power, i.e. energy efficiency. TechPowerUp does efficiency measurements in their more recent CPU reviews, but nothing for generations older than the 7th Intel Core generation or the 1st AMD Ryzen generation. How do Haswell processors compare? How does Sandy Bridge compare? Broadwell? Skylake? AMD Bulldozer? All of them came with significant lithography shrinks, while the newer ones... well, you know how long Intel has been on 14 nm. Craft Computing recently uploaded a video about old Xeon processors, the E5-2667 v4 (Broadwell) to be exact, which showed their gaming performance mostly on par with a Ryzen 7 3700X, but failed to measure their power consumption. It is still great to see that you don't have to spend obscene amounts of money just to run Borderlands 3 or The Witcher 3 at Full HD. Now I know it varies from processor to processor, but big jumps in efficiency are very useful to know about in the current energy crisis (even though it's kinda under control right now) and the ever-hotter summer months, which combined with the energy crisis are not good for your A/C bill (assuming you even have one). Is there any source you know of that could provide such information? I know that cpubenchmark.net has one, but their power-consumption values come straight from TDP, which as we all know is nearly useless these days; although if you think it might still be of use, let me know. Thank you for any and all insight you might have!
- 19 replies
- Tagged with: cpu, generation (and 2 more)
-
I'm looking for a specific, I guess you could call it a genre, of PSU. I've been running a 500W OEM Delta unit in my home server/NAS for a little while now, but it's developed some nasty coil whine, which is driving me crazy. Since it's very low power and I don't want to spend a ton of money, are there any low-wattage supplies (like 250 or 300W) available that I can get for cheap, but that are good enough quality to run in my always-on server?
-
http://blog.strml.net/2017/01/chrome-56-now-aggressively-throttles.html https://groups.google.com/a/chromium.org/forum/#!topic/blink-dev/-dmrNAFHd-4/discussion https://news.ycombinator.com/item?id=13471543

In layman's terms, this can be summarized as:
- Tabs in the foreground run as normal
- Tabs in the background have a computing "budget" for how many seconds of compute time they may use
- Once a background tab's budget is zero or negative, it is suspended and has to wait for more budget to accumulate
- Tabs that play audio are exempt from this

Reactions come in two flavors. Everyone on Hacker News is pretty gung-ho about it, but some developers are concerned about how this will "break the internet". Personally, I'm for it. Background music playback isn't affected, and if I need something running in the background, I can make it a new window.
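For anyone curious what that budget scheme looks like mechanically, here is a toy Python simulation of the behaviour as described in the linked posts. The accrual rate and task costs are made-up numbers, not Chromium's actual parameters:

```python
# Toy simulation of the throttling scheme described above: a background tab
# accrues compute budget over wall-clock time and is suspended whenever the
# budget has run out. Rates and costs are illustrative, not Chromium's.
class BackgroundTab:
    def __init__(self, accrual_per_sec: float = 0.01):
        self.budget = 0.0               # seconds of compute credit
        self.accrual = accrual_per_sec  # budget gained per wall-clock second

    def tick(self, wall_seconds: float) -> None:
        """Time passes while the tab sits in the background."""
        self.budget += self.accrual * wall_seconds

    def run_timer_task(self, cost_seconds: float) -> bool:
        """Run a timer callback if budget allows; otherwise stay suspended."""
        if self.budget <= 0:
            return False                # suspended: wait for budget to accrue
        self.budget -= cost_seconds    # budget may go negative, as described
        return True

tab = BackgroundTab()
tab.tick(10)                            # 10 s in the background -> 0.1 s budget
print(tab.run_timer_task(0.3))          # True: runs, budget drops to -0.2
print(tab.run_timer_task(0.05))         # False: suspended until it recovers
```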
-
Well, I want to upgrade my graphics card to a GTX 1060 6GB, but there are so many options. I don't want an overclocked one, because my motherboard can't do overclocking. I don't need a mini, because I have plenty of space in my tower. Which one is the most efficient, or are they all the same?
-
source: https://www.cybenetics.com/index.php
via: http://www.kitguru.net/channel/generaltech/matthew-wilson/cybernetics-wants-more-accurate-efficiency-certification-for-psus-and-noise-ratings/

So far, not much is known about this new company, Cybenetics (not to be confused with Cybernetics!!!). They are a Cyprus-based company that offers voluntary PSU efficiency and noise-level certifications through which PSUs are promoted. As noted, they offer two certification levels:

ETA - overall power efficiency, based on 4 factors:
- efficiency
- power factor
- 5VSB efficiency
- vampire power

LAMBDA - noise levels

Digging deeper:
ETA: https://www.cybenetics.com/index.php?option=eta
LAMBDA: https://www.cybenetics.com/index.php?option=lambda

This flew under the radar, but as far as is known only Seasonic has embraced this new certification: https://www.techpowerup.com/231534/seasonic-embraces-new-cybenetics-ratings

They have a few test certifications, but they don't specify what models were tested:

Aerocool:
- ACP-650FP7 https://www.cybenetics.com/d/cybenetics_vgH.pdf
- ACP-750FP7 https://www.cybenetics.com/d/cybenetics_dqu.pdf
- ACP-850FP7 https://www.cybenetics.com/d/cybenetics_kyS.pdf

Cooler Master:
- MPZ-C002-AFBAT https://www.cybenetics.com/d/cybenetics_9Xe.pdf

Corsair:
- AX1500i https://www.cybenetics.com/d/cybenetics_s5f.pdf
- HX1000i https://www.cybenetics.com/d/cybenetics_nzx.pdf
- HX1200i https://www.cybenetics.com/d/cybenetics_WKV.pdf
- HX750i https://www.cybenetics.com/d/cybenetics_GGT.pdf
- RM1000i https://www.cybenetics.com/d/cybenetics_8iM.pdf
- RM1000x https://www.cybenetics.com/d/cybenetics_j21.pdf
- RM550x https://www.cybenetics.com/d/cybenetics_JoN.pdf
- RM650i https://www.cybenetics.com/d/cybenetics_DbN.pdf
- RM650x https://www.cybenetics.com/d/cybenetics_HIS.pdf
- RM750i https://www.cybenetics.com/d/cybenetics_gVI.pdf
- RM750x https://www.cybenetics.com/d/cybenetics_jil.pdf
- RM850i https://www.cybenetics.com/d/cybenetics_9tp.pdf
- RM850x https://www.cybenetics.com/d/cybenetics_DW4.pdf
- TX550M https://www.cybenetics.com/d/cybenetics_Qn8.pdf
- TX650M https://www.cybenetics.com/d/cybenetics_ZhZ.pdf
- TX850M https://www.cybenetics.com/d/cybenetics_9gT.pdf

Enermax:
- EPF500AWT https://www.cybenetics.com/d/cybenetics_oCe.pdf

EVGA:
- 850 GQ https://www.cybenetics.com/d/cybenetics_Txb.pdf
- SuperNOVA 1000 G3 https://www.cybenetics.com/d/cybenetics_Hm6.pdf
- SuperNOVA 850 G3 https://www.cybenetics.com/d/cybenetics_HH4.pdf

FSP:
- SDA600 https://www.cybenetics.com/d/cybenetics_mGf.pdf

Riotoro:
- PR-GPD0850-SM https://www.cybenetics.com/d/cybenetics_BhK.pdf

Seasonic:
- SSR-650TD https://www.cybenetics.com/d/cybenetics_ZfJ.pdf
- SSR-750TD https://www.cybenetics.com/d/cybenetics_xGy.pdf
- SSR-850TD https://www.cybenetics.com/d/cybenetics_B7p.pdf
- SSR-650GD https://www.cybenetics.com/d/cybenetics_fXt.pdf
- SSR-750GD https://www.cybenetics.com/d/cybenetics_emw.pdf
- SSR-850GD https://www.cybenetics.com/d/cybenetics_QGN.pdf
- SSR-1000GD https://www.cybenetics.com/d/cybenetics_zU5.pdf
- SSR-1200GD https://www.cybenetics.com/d/cybenetics_vkP.pdf

SilverStone:
- SX800-LTI https://www.cybenetics.com/d/cybenetics_13A.pdf
- NJ520 https://www.cybenetics.com/d/cybenetics_6nv.pdf
- ST75F-PT https://www.cybenetics.com/d/cybenetics_r5f.pdf
- SX700-LPT https://www.cybenetics.com/d/cybenetics_u1Q.pdf

Super Flower:
- SF-750F14EG https://www.cybenetics.com/d/cybenetics_UFp.pdf

Thermaltake:
- TPG-0750F-R https://www.cybenetics.com/d/cybenetics_u4m.pdf
- 49 replies
- Tagged with: cybenetics, psu (and 3 more)
-
I'm currently in a "home network upgrade" / "rack rebuild" / "my wife will kill me for what I've spent money on" situation, and I'm thinking... I have in my rack:
- gigabit switch
- Cisco WAP (outside, but powered from the rack)
- ISP router
- QNAP 2-bay NAS
All of these components run on 12V DC, each from its own power brick, and I'm wondering: why not run them all from one power supply? There are high-efficiency power supplies on the market built for exactly this use. They are usually more efficient than the original network-equipment bricks, and some of them also let you hook up a 12V lead-acid backup battery for a UPS function (no need for an actual UPS, since we don't need 220/110V AC power). For example, a DIN-mountable one: http://www.meanwell.com/webapp/product/search.aspx?prod=DRC-60 The only downside of this setup is the single point of failure, which, for home use, is not that serious, as I can replace the power supply within a day easily. Is there anyone who has done this before? I will post progress, but it would be best if I could gather some thoughts on this before investing a lot of money. Thanks a lot.
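Before buying, it's worth a quick sizing check on the shared 12V rail. A small Python sketch; the per-device currents below are placeholders that you'd replace with the figures printed on each original power brick:

```python
# Quick sizing check for the shared 12 V rail. The per-device currents are
# placeholders -- read the real figures off each original power brick.
devices_amps = {
    "gigabit switch": 1.0,
    "Cisco WAP":      1.5,
    "ISP router":     1.5,
    "QNAP 2-bay NAS": 3.0,
}
total_a = sum(devices_amps.values())
headroom = 1.3  # ~30% margin for drive spin-up and Wi-Fi bursts
required_w = 12 * total_a * headroom
print(f"Total load: {total_a:.1f} A -> pick a supply of at least {required_w:.0f} W")
# e.g. a 7 A (~84 W) load with margin comfortably fits a 120 W DIN-rail unit
```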
-
Hello everyone, so I heard League of Legends only uses two cores, 0 and 1. So my question: assuming you only have a DUAL-CORE chip, the operating system would be forced to use the same cores as League, right? But if the chip had hyperthreading, would the OS just sit on threads 2-3 and the game on 0-1? Also, if you have a quad-core chip, would the OS run on the other cores or the same ones? Because if the cores don't matter, since I don't multitask, then I may as well just get the 7350K once the price drops a fck-ton because of the new lineup; 2 super-fast cores are what I need. Sorry for the poor English, Linus is rubbing off on me ;). Looking forward to the responses. Stay frosty.
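You can actually observe and steer this yourself. Here is a small sketch using psutil (pip install psutil); the process name is just an example, and note that the OS scheduler normally spreads its own work across the remaining cores on its own, so pinning is rarely necessary:

```python
# Sketch: inspect and set which logical CPUs a process may run on, using
# psutil. "League of Legends.exe" is an example process name, not verified;
# substitute whatever the game actually runs as on your machine.
import psutil

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == "League of Legends.exe":
        print("current affinity:", proc.cpu_affinity())  # e.g. [0, 1, 2, 3]
        proc.cpu_affinity([0, 1])                        # pin to cores 0 and 1
        print("new affinity:", proc.cpu_affinity())
```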
- 3 replies
- Tagged with: cores, multitasking (and 2 more)
-
Hello all. Looking all over, you can find countless places to get a rough idea of how much your system draws, and some will even add a few watts of headroom for you. I'm interested in the why. According to this article, https://en.wikipedia.org/wiki/80_Plus, higher efficiency is expected at 50% load than at 20% or 80%, and usually the same efficiency is expected at 20% as at 80%. If we interpolate a bell-shaped curve through those points, a PSU should be most efficient when using about 50% of its rated wattage. What this would mean is that, ideally, if you're willing to spend the money on it, the best power supply is one whose continuous wattage is double what your system draws. I'm wondering out of pure curiosity, as well as for my new rig. PCPartPicker says I'll be using 385W. The current plan is the SuperNOVA G2 750W PSU. It's almost exactly double what I think I'll need. It's also more power than I'll need for a long time, so over the years I'll be able to put it in lots of different systems. Even if I wanted to run 2 high-end graphics cards and an i9, it would still be enough. In the name of future-proofing it also has a ten-year warranty. Since nobody else is doing this to my knowledge, I thought it'd be worthwhile to double-check: would I be better off with the 650 or 550 watt versions? What about a G3? My budget is fairly flexible.
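To put numbers on that interpolation, here is a crude model of the 80 Plus Gold curve from the Wikipedia table referenced above (115V: 87% at 20% load, 90% at 50%, 87% at 80%), fitted with a parabola peaking at 50% load. This is an illustration, not measured data for any specific unit:

```python
# Crude parabolic model of an 80 Plus Gold efficiency curve:
# eta(x) = 0.90 - k * (x - 0.5)^2, with k chosen so that eta(0.2) = 0.87.
def gold_efficiency(load_fraction: float) -> float:
    k = (0.90 - 0.87) / (0.5 - 0.2) ** 2
    return 0.90 - k * (load_fraction - 0.5) ** 2

DRAW_W = 385  # estimated system draw from PCPartPicker
for rating in (550, 650, 750):
    x = DRAW_W / rating          # fraction of rated capacity in use
    eta = gold_efficiency(x)
    wall = DRAW_W / eta          # power actually pulled from the wall
    print(f"{rating} W unit: {x:.0%} load, ~{eta:.1%} efficient, "
          f"~{wall - DRAW_W:.0f} W lost as heat")
```

By this model, the gap between the 550W and 750W units is only around 6W at the wall under full load, so the decision is arguably more about warranty and future headroom than about efficiency.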
- 24 replies
- Tagged with: power supply, wattage (and 1 more)
-
Low power-consumption gaming PC for $600?
TheManWithNoPlan posted a topic in New Builds and Planning
I recently moved into a student housing complex, and power here is rather expensive; being a student, I want to save as much as I can on power. So in an ITX form factor I want to try to squeeze as much performance into 250 watts as possible; anything under that is a huge plus, and 200 watts would be a cool stretch goal, but beyond that I think it just becomes impractical. I mostly play fairly easy stuff like KSP, CS:GO, and indie games, but I do play some demanding titles like GTA V, BF1, and Dishonored 2. That said, I don't mind turning settings down to medium if it saves some money and power. I'm asking here having done fairly minimal research, because I honestly don't know where to start. The system will probably end up being Mini-ITX, partly because those motherboards generally consume little power (and I like small cases), but from there I don't know where to go. I already have an efficient 1080p monitor and peripherals, and I've budgeted for them already. I also have a Samsung 850 Evo 500GB SSD from an old build, so I have some storage, but would probably like something a little larger if possible (totally fine if not). Any recommendations for parts (or how best to go about finding parts) would be awesome and very much appreciated. Thanks for reading.
- 2 replies
- Tagged with: itx, efficiency (and 2 more)
-
I am currently using a Corsair HX1000i PSU with a peak draw of ~460W; it is rated 80 Plus Platinum at 93%. Is it worth upgrading to a Titanium-rated, 94%-efficient PSU of 1000W or more?
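Quick arithmetic says almost certainly not. A sketch with placeholder usage hours and electricity price (plug in your own):

```python
# Quick check on whether a 93% -> 94% PSU swap can pay for itself. Hours of
# use and electricity price are placeholders -- substitute your own figures.
DC_LOAD_W = 460       # your peak draw; the average will be lower
HOURS_PER_DAY = 6
PRICE_PER_KWH = 0.35  # assumed

def yearly_cost(efficiency: float) -> float:
    wall_w = DC_LOAD_W / efficiency
    return wall_w * HOURS_PER_DAY * 365 / 1000 * PRICE_PER_KWH

saving = yearly_cost(0.93) - yearly_cost(0.94)
print(f"~{saving:.2f} saved per year")  # a few currency units at most
```

One percentage point at ~460W works out to roughly 5W at the wall, a few currency units per year, nowhere near the price of a new 1000W+ Titanium unit.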
-
Hi folks, I just got a new notebook from my company and I'm not 100% happy with the device. We get new hardware every three years, and it's totally up to us what we do with the devices, so we can also sell them and/or trade them for other models. The device I got is a Lenovo ThinkPad E485 with:
- AMD Ryzen 5 2500U processor, 4x 2.0 GHz
- Radeon Vega 8 Mobile
- 8GB DDR4
- 256GB M.2 SSD
- 1TB SSD
Pro: What I really like about the notebook is that it has a USB-C charging adapter! As I also have a Huawei Mate 20 Pro, I can travel with just one charging cable and charge everything on the fly. It's also pretty light and a handy size. Con: I really miss a touchscreen and a backlit keyboard, and the look and feel of the device is garbage; you can feel the cheap plastic everywhere. A friend of mine uses a Dell XPS 13" that just looks and feels amazing. It's a little more pricey, I guess, with the i7 and 16GB RAM. My idea now is maybe to sell the Lenovo and use the money to get a Dell instead. Any recommendations on how to get "the most laptop" for as cheap as possible? I'm willing to add something on top of the price I'll get for the Lenovo, but I definitely don't want to double that price. I'm also open to any other recommendations for a great-looking business notebook that lets me use my travel time for some easy gaming.
- 14 replies
-
This is clearly not a subtle way of pointing out that while AMD might have been the first to create a 7nm graphics card, it can't really compete with Nvidia's 12nm Turing. In fairness, even the most ardent Team Red fan would find it difficult to argue with that! These statements are, however, pretty hefty shots fired, and with AMD set to release their new graphics card range sometime this summer, let's hope he doesn't end up eating those words! Source: https://www.eteknix.com/nvidia-fire-shots-at-amds-7nm-graphics-technology/ Intel's entrance into the dGPU space can't come soon enough (2020). It's one thing to smack AMD around with your product stack's performance alone; it's another to be entirely egotistical about it and get too comfortable or content. Either way, I'm wondering if this overconfidence is a bit of foreshadowing of what to expect from another future NVIDIA architecture, like the supposed Ampere.
- 199 replies