Sophia_Borjia

Member
  • Content Count

    114
  • Joined

  • Last visited

Awards


About Sophia_Borjia

  • Title
    Member

Profile Information

  • Location
    United States, Midwest


  1. If you're running multiple programs, you can use multiple cards. AMD and Nvidia drivers interfere with each other though; I had trouble even booting when I tried. If you are working in Blender while playing a game, or using UE4 while also running Substance Painter, then multiple GPUs could be useful. There's no benefit if you only use a single program. Gaming-wise, maybe some streaming encoding benefit, or the option to play two games at once on dual monitors? I know some people do that in MMOs with multiple accounts, but it's very niche.
  2. If you only have two things to connect, just connect the 10G NICs directly without a switch; you can add a switch later if you find you need one. I would wonder whether game load times over 10G from a server like that would be SSD-like (from similar sequential throughput) or HDD-like (from IO bottlenecks). A local HDD for games is faster than gigabit LAN, and 10G LAN with enough disks is faster than a SATA SSD on paper, I'm just not sure it would show up outside of large files. Rough throughput numbers are sketched below.
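A quick back-of-the-envelope comparison of the links mentioned above. The per-device numbers are typical ballpark figures, not measurements from any specific setup:

```python
# Rough sequential-throughput ceilings, in MB/s, for the links discussed above.
# These are typical ballpark figures, not benchmarks of any particular hardware.
links_mb_s = {
    "gigabit LAN (1 Gb/s)": 1_000 / 8,    # ~125 MB/s
    "10G LAN (10 Gb/s)":    10_000 / 8,   # ~1250 MB/s
    "single HDD":           180,          # typical 7200 rpm sequential read
    "SATA SSD":             550,          # SATA 6 Gb/s limit in practice
}

for name, mb_s in links_mb_s.items():
    print(f"{name:24s} ~{mb_s:6.0f} MB/s")

# A 10G link only beats a SATA SSD if the server's disks can actually feed it
# more than ~550 MB/s, which is why small random IO may still feel HDD-like.
```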
  3. https://bit-tech.net/guides/modding/how-to-jump-a-psu/1/ Here is an article showing which pin/wires to ground. Grounding that pin tells the PSU to turn on; normally the motherboard grounds it when the case power button is pushed. The PSU for the GPU needs that pin either permanently grounded or tied to the same signal as the main PSU, so both turn on when the button is pushed. A paperclip is fine for a test bench, but I wouldn't do it day to day.
  4. If Nvidia were to release only one card, it would be the most expensive one; they have to hold onto the crown. So, go from a $1200 2080 Ti to a $2000 3090? The other two cards are the same GPU with cores disabled/defective and less RAM, so a single SKU means not selling a lot of the silicon at all. The perfect chip is sold as the 3090; a lower bin with a few cores disabled/defective is sold at 3/4 the price as the 3080 Ti; chips that can't hit high enough clocks and/or have more defective cores are sold at half price as the 3080. Having only one SKU means either settling for middle-of-the-pack performance and risking AMD taking the crown, or tossing a lot of silicon to have only a halo product. A halo helps sell the lower-end parts, and having no lower end defeats the purpose of holding the performance crown. So they won't sell a lot of 3090s, but people will see it is number one, want it, and settle for the 3080 at half the price, which is still a high-end and overkill part. If AMD gets the crown, more people will buy mid-range AMD from the same product line. If AMD beats the 3090, Nvidia will cut down an A100, call it a Titan-something with a crazy five-thousand-dollar price tag so they can say they have the world's fastest card. They won't sell many Titans, but the Titan existing will help them sell mid/low-end cards. Some people that need a lot of compute might find the Titan a bargain, like the $4000 Threadripper: it is cheap for what it can do, just not many people need to do that.
  5. 400W+ on high-end cards. If downclocked, the overkill cooler/VRM should keep them very cool and efficient. The main reason for the 3090, as I see it, is that Nvidia doesn't know how fast Big Navi will be. They don't want a Titan to not be number one, so the new Titan is called 3090 with a slightly lower price. If it gets beat by Big Navi, they make something faster with the Titan name at a stupidly high price to keep the top gaming card crown. If Big Navi is slower, the 3090 fills the Titan market spot. A Titan has uses; some programs have high minimum RAM requirements, very complex renders or machine learning for example. I had a program refuse to open with an error message saying 8GB of VRAM was required. I had an erosion simulation running on an i3 for a few hours, then got an out-of-memory error; it hit 72 gigs in use by one thread when the crash happened (I upgraded to a Ryzen 2700 with 96GB of RAM to fix that). There are places that need these things and might order dozens at a time; for gaming it is more a bragging-rights thing. The pace of games will lag behind AMD, since it can take years to make a game; once it is clear AMD is catching up, developers will start building around it. About now I think they are starting to target 2000-series-level raytracing, so when games started now come out, both companies and the consoles should have RTX 2000 or better raytracing. The 3000 series will be able to do a lot more with raytracing, but game designers will wait for AMD to be at a similar level before taking it for granted in requirements. So raytracing should be amazing and everywhere in 5 years or so, and 3000 cards should still play those games; I would expect RTX on to be a slideshow on 2000-series cards in games with heavy use of raytracing.
  6. I use mixed RAM, but with that small an amount I would just replace the 8GB; first-gen Ryzen was much fussier about RAM. A 16GB dual-channel kit is probably the best idea; if you use a lot of RAM, a single stick allows a later upgrade. The majority of people should be fine with 16GB, and dual channel gets better performance. This kit was Samsung B-die at a low price: https://www.newegg.com/oloy-32gb-288-pin-ddr4-sdram/p/N82E16820821201 The 3000MHz 2x8GB kit for $60 would probably be about the fastest RAM an R5 1600 could make use of.
  7. A faster GPU limited by power will be faster than a card rated at that power. A 200W GPU set to run at 50W will be more efficient and faster than a 60W card set to run at 50W. Same with CPUs: an i9-9900K limited to 40W will outperform an i3 running at 40W, probably an i3 at 65W for that matter. A few years ago I messed with low power on an i3-8100 and a GTX 1050, limited to about 60W total at the wall under load; performance wasn't that far under stock for a fraction of the power used. No real use case, just tinkering for the sake of tinkering. Another option would be to use two PSUs: a mining riser card has a separate power input, and there are dual-PSU adapters to turn them on together. Or mods can be done to the wire harness of the PSU. A sketch of setting a GPU power cap is below.
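If you want to try the power-limiting side of this, a minimal sketch using the nvidia-smi power-limit option. The 50 W target is just an example; the allowed range depends on the card, and the command needs admin/root rights:

```python
import subprocess

# Query the current and supported power limits first (nvidia-smi must be on PATH).
subprocess.run(["nvidia-smi", "-q", "-d", "POWER"], check=True)

# Cap the board power to 50 W (example value; it must fall within the card's
# reported min/max limits, and the call usually requires administrator rights).
subprocess.run(["nvidia-smi", "-pl", "50"], check=True)
```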
  8. Price-to-performance should be better, but I expect the price for each spot in the stack to be higher. Of the three 3000 cards based on the same die, the slowest should be a double-digit performance improvement ignoring raytracing, and 4x or more for raytracing performance. So if the 3080 is under $1500 it is more performance per dollar. I expect it will be around a thousand, with the Ti at $1500 and the 3090 at maybe $2000. Since all three would be faster than the $2500 RTX Titan, I think that is better performance per dollar (a rough comparison using those guessed prices is below). When the mid-range 3000 series comes out, they should be plenty for most use cases. I wouldn't buy a 2080 Ti now; in a year it could be the 6th-fastest Nvidia gaming card and be behind a few Big Navi cards as well. The 2080 Ti could be mid-range performance once the 3060/3070 are released, and whatever Radeon cards run in the $400-600 range in a year should be close to the 2080 Ti as well. For new tech, the 3000 series will have vastly better raytracing; the 2000 series could run it with a big performance hit, while the 3000 series should run raytracing with little or no frame drop. Games making heavier use of raytracing than current titles could not run well on the 2000 series. Those are not likely to be released for years though; developers don't like sinking resources into things only a fraction of players can see, or that limit the player base. For example, DICE wouldn't make massive use of raytracing that 2000 cards couldn't handle in Battlefield 6 just because the 3000 series can do it. They may in Battlefield 8; hypothetically it could bog a 2080 Ti down to 20 fps but do 60 fps on a 3060, even if the 2080 Ti is faster than the 3060 with raytracing off. The 3000 series should be the first cards able to do more than a token amount of raytracing, once games come out that use more than token raytracing.
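Putting rough numbers on the performance-per-dollar point above. The prices are the guesses from the post, and the relative performance figures are purely illustrative assumptions (the RTX Titan is normalized to 1.0):

```python
# Hypothetical prices (guesses from the post above) and illustrative relative
# performance, normalized to RTX Titan = 1.0. None of these are real benchmarks.
cards = {
    "RTX Titan": {"price": 2500, "perf": 1.00},
    "3080":      {"price": 1000, "perf": 1.10},
    "3080 Ti":   {"price": 1500, "perf": 1.20},
    "3090":      {"price": 2000, "perf": 1.30},
}

for name, c in cards.items():
    perf_per_dollar = c["perf"] / c["price"]
    print(f"{name:10s} {perf_per_dollar * 1000:.2f} perf per $1000")

# Even at $1500, a card slightly faster than the $2500 Titan comes out well
# ahead on performance per dollar, which is the point being made above.
```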
  9. If you have an APU/iGPU you can save a little power with the laptop-style switching features in Windows, using the motherboard ports for video. It's not worth it unless you are trying to save every watt, as in unplugging the microwave when not using it to save the power its clock uses. It is a single-digit-watts difference; it matters if you're on battery or extremely energy-conscious. I used to do both, and unplugging the microwave makes the bigger difference. Rough numbers on what those few watts cost are below.
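To put that single-digit-watt saving in perspective, a rough yearly-cost estimate. The 5 W saving and the $0.12/kWh rate are illustrative assumptions, not figures from the post:

```python
# Rough yearly cost of a small constant power draw.
# The 5 W saving and $0.12/kWh rate are illustrative assumptions.
watts_saved = 5
hours_per_year = 24 * 365
rate_per_kwh = 0.12

kwh_per_year = watts_saved * hours_per_year / 1000    # ~43.8 kWh
cost_per_year = kwh_per_year * rate_per_kwh           # ~$5.26

print(f"{kwh_per_year:.1f} kWh/year, about ${cost_per_year:.2f}/year")
```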
  10. It is possible; you can specify which GPU to use for which program. If you are running multiple programs with heavy GPU use at the same time it could be handy. A GPU idles if not used or under light load, so it's not worth it unless you're running on battery power for energy savings; the difference in GPU lifespan would not be measurable. One way to pin a compute program to a specific GPU is sketched below.
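For GPU compute programs that use CUDA, one common way to pin a program to a specific card is the CUDA_VISIBLE_DEVICES environment variable; for games, Windows instead has a per-app GPU preference in its graphics settings. A minimal sketch, where "my_render_job.py" is just a placeholder name:

```python
import os
import subprocess

# Launch a CUDA-based program so it only sees GPU index 1.
# "my_render_job.py" is a placeholder for whatever program you want to pin.
env = dict(os.environ, CUDA_VISIBLE_DEVICES="1")
subprocess.run(["python", "my_render_job.py"], env=env, check=True)
```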
  11. Not sure what Xeon was in the study; not the latest, since the study was from around 2017 or so, so about what you could get cheap from a used server or AliExpress. Prices for ARM can be all over the place; I picked up 64 cores of ARM for $72 during a great sale. There are things that can skew power cost. For example, electric heat in cool months: 1000W of electronics and a 1000W heater make the same heat at the same cost. Running at a slower clock gives better performance per watt, but a higher initial purchase cost, since you need more hardware for the same throughput. The best plan is probably lowering clocks over time as power cost rises and newer hardware is phased in, and boosting clocks when 'free' electricity is available due to heating needs. It would probably take a machine-learning algorithm to find the ideal schedule. Needing 3 gigs of RAM eliminates most ARM boards, and what remains costs more, so Ryzen or Epyc is probably best at the moment. PowerPC is more expensive and less power efficient from the examples I have seen. The only other thing I can think of is Xeon Phi, lots of Atom cores on a GPU-like PCB; no idea if it is better or worse for mining. A simple cost model for that clock/power trade-off is sketched below.
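A toy version of that purchase-cost vs. running-cost trade-off. All of the numbers are made-up placeholders; the only point is that the cheapest clock speed shifts as the electricity rate changes, and that heating season effectively zeroes the power cost:

```python
# Toy model of total cost for a fixed amount of work: more boxes at a low
# clock (better perf/W, higher purchase cost) vs fewer boxes at a high clock.
# All numbers are made-up placeholders to illustrate the trade-off above.
def total_cost(num_boxes, watts_per_box, price_per_box,
               years, rate_per_kwh, heating_fraction=0.0):
    purchase = num_boxes * price_per_box
    kwh = num_boxes * watts_per_box * 24 * 365 * years / 1000
    # During heating season the electricity would have been spent anyway.
    power = kwh * rate_per_kwh * (1 - heating_fraction)
    return purchase + power

# Same throughput either way: 4 slow boxes vs 2 fast boxes (placeholder figures).
slow = total_cost(num_boxes=4, watts_per_box=60,  price_per_box=150,
                  years=3, rate_per_kwh=0.12)
fast = total_cost(num_boxes=2, watts_per_box=180, price_per_box=150,
                  years=3, rate_per_kwh=0.12)
print(f"slow clocks: ${slow:.0f}   fast clocks: ${fast:.0f}")
```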
  12. A 2080 Ti gets nearly all of its performance at PCIe 3.0 x4; it loses only a little compared to x8 lanes. Unless you are putting NVMe on an expansion card in a PCIe slot it shouldn't affect the GPU lanes, and it may not even then, depending on board layout. Something like running 4x the raytracing won't need more bandwidth; loading higher-resolution textures would. Doing more processing on a smaller texture could look better than a larger texture, requiring more GPU processing and less PCIe bandwidth, so the need could easily vary from more to less from one game to the next. Rough per-link bandwidth figures are below.
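For reference, the usable bandwidth of the PCIe 3.0 link widths mentioned above, computed from the standard per-lane rate and rounded:

```python
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding,
# giving roughly 0.985 GB/s of usable bandwidth per lane.
per_lane_gb_s = 8 * (128 / 130) / 8   # ~0.985 GB/s

for lanes in (4, 8, 16):
    print(f"PCIe 3.0 x{lanes:<2d} ~{lanes * per_lane_gb_s:5.1f} GB/s")
# x4 ~ 3.9 GB/s, x8 ~ 7.9 GB/s, x16 ~ 15.8 GB/s
```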
  13. Maybe check Odroid? They have some low-power ARM SBCs that can mine Monero. I have not checked price/performance or performance per watt vs AMD. I did read a paper showing they were much more efficient than Xeon at scientific calcs of various types, with an 8-core XU4 delivering about the same performance as 2 Xeon cores at less power. The 8 threads share 2GB of RAM, so it's much better at things like crypto than rendering. They cluster easily. Better performance per watt can be had by underclocking; whether that helps overall depends on the ratio of operating cost to purchase cost. They have several different boards with different cores, and I think they have mining benchmarks for many of them on the Odroid/Hardkernel site. Things like instruction-set support can vary greatly and change which chip is best.
  14. That might be a good case for an older GPU; the 500 series is very cheap. Newegg has some under $100 new and an RX 570 8GB for $139.99. A new generation from both companies is due out soon, so a cheap placeholder card may save money in the long run.
  15. Chopsticks work better than a fork for some things. They are also great for grabbing pasta from a boiling pot to check firmness. Stabbing macaroni in a pot of water with a fork isn't happening, and a spoon brings boiling water along with the pasta. Chopsticks used to stir the pot can grab a single piece out of the liquid and cool it by waving it around before testing. The same tool can stir, grab, whip, etc., and most are safe with non-stick pans. As for chopsticks still being around after the fork: that is backwards, the fork was known in China before chopsticks were invented. Use of the fork was mostly abandoned in Asia when Europe was still eating off of knives. Odds are higher the West will abandon the fork before Asia stops using chopsticks.