Search the Community
Showing results for tags 'compute'.
-
Why are CPUs and other chips always square or rectangular, while the wafer they are cut from is circular? Why?
-
Thanks to SFU and Compute Canada for allowing us to visit this INCREDIBLE facility. Learn more about Cedar at http://geni.us/5N3l
-
Hey, I am a 3D artist and gamer, and I'm now looking to upgrade my graphics card because my current one is dying. During my research I ran into the obvious problem of the market being more confusing than math in 5th grade when letters first got introduced. Honestly, I don't know what I should be getting. The budget is fairly liberal, as I can probably still sell my current GPU (5700 XT) for a good chunk of money since it only struggles in DX11 games (which sadly make up the majority on Steam right now). Any advice on what would be best to get is greatly appreciated. Things that I "need":
- an NVIDIA GPU (got some sweet technology for 3D stuff)
- NOT a 3090 or 3080 Ti, as those would be overkill
- a GPU that could be waterblocked in the future, or that even comes with a waterblock like some EVGA cards
By the way, I live in Germany, but don't worry about that; I will figure out how to get my hands on a similar card here. Thank you in advance.
-
Country: United States
Games, programs or workloads that it will be used for: Linux compute node?
Other details: I don't know what the customer wants these for, but I'm building them! As ever, my job has steered me into building something amazing. Something not quite Epic, but truly Ripping... That's so bad, I'm sorry.
We have a customer that has in the past asked for some really beefy rigs, as detailed on the forums last June. In mid-March of this year they came back asking for more. Not more of the same, but simply More. By that I mean they want systems with more cores/threads, RAM, and storage, and all of it in a 2/3U rack mount. Now, I told them point blank that I can't go past 128GB of RAM on anything short of HEDT, and they said yes. After a few quotes we came down to the following specs:
CPU: Threadripper 3970X (32c/64t)
Cooler: Dynatron A26 (small and fits)
RAM: 8x32GB Crucial Ballistix 3,200MHz CL16 (they wanted 512GB but didn't want to pay for Threadripper Pro)
Mobo: ASRock TRX40 Creator (does what they want, plus a 10Gb NIC for inter-node connection)
SSD: 1TB Samsung 970 EVO (because they don't need a 980 PRO)
HDDs: 3 x 8TB Seagate IronWolf in RAID 5 (More Storage, More Better)
GPU: GT 710 (no GPU compute in these rigs)
PSU: be quiet! Pure Power 11 500W (enough power, 80+ Gold, and modular)
Case: iStar D-Storm 300-PFS (3U, full ATX, rack mount) + an extra Noctua fan for airflow
The only thing I'm worried about is the small CPU cooler, but it's rated for the job given enough airflow. Here is the parts picture, less the case; I'll upload the build when it's all done.
-
In an effort to expand my compute capabilities I recently picked up an i9-9900K and a 3900X. The 3900X has proven relatively plug-and-play, installing nicely in my Aorus X570 Pro WiFi, and performance showed a nice uplift after replacing the "adequate" stock cooler with an NH-D15. The 9900K, however, has lived up to its reputation as a thermal monster. Making matters worse, the Gigabyte Z370 Gaming 5 it is installed in suffers from a rather weak (4-phase) VRM that is itself prone to overheating.

A first attempt at cooling this beast using a Hyper 212 Black with a Noctua NF-A12iPPC2000 PWM strapped onto it showed poor results. While perhaps adequate for light gaming, my requirement is for it to remain stable under 24x7 compute, and even with an undervolt, a -300MHz AVX offset, and a 4.6GHz all-core limit it could not pass a longer Prime95 torture test (mprime -t). Replacing the Hyper 212 with a 120mm CLC using the same fan showed similar results.

Next up was an upgrade to an EVGA 280mm CLC and a Fractal Meshify S2 to house the system. A first attempt at a high all-core overclock with an undervolt suggested that 4.6GHz all-core at 1.160V Vcore was the best attainable while passing Prime95 with AVX enabled. Further testing showed, however, that LLC at Turbo and a Vcore of 1.250V were required for stability. Again there were stability issues, and after dropping the all-core clock to 4.5GHz and Vcore to 1.130V the system remained stable for folding with the cores at 64-72C and the VRM at 81C (17C ambient). Stability was still elusive, though, with the system randomly locking up after 8 or 9 hours with nothing in the logs (a hard lock), likely indicating a hardware issue.

After clearing the CMOS settings and running Prime95, the VRM was observed hitting 115C with no end in sight during small-FFT tests with AVX enabled. Removing the plastic I/O shroud on the motherboard, I removed the plastic spacers on the VRM heatsinks to allow better contact between the heatsinks and the MOSFETs, and installed an 80mm fan to provide airflow over the VRM. This improved things, but only slightly. Prime95 torture tests were then run while observing VRM thermals, and a -300MHz AVX offset at stock settings was found to be required to keep VRM temperatures under 100C during small FFTs. A Prime95 torture test was then run overnight (17 hours) with no errors or warnings recorded.

The system has returned to folding, driving 2 RTX 2070 Super Hybrids plus 14 threads of CPU folding (an 8-thread and a 6-thread slot), with no stability issues in almost 48 hours. The UPS shows an idle load of 55W and 225W with just the CPU slots enabled, so the CPU draws about an additional 170W while folding.

Hindsight being 20/20, the better approach would have been to retire the Z370 motherboard and get a new X570 board and another 3900X. Realistically this system would need a Z390 motherboard with a beefier VRM to work well, and the X570 with a 3900X would still crush a 9900K performance-wise. At some point, when I can take the system offline for an extended period, I intend to see how much further I can lower Vcore, as that could both alleviate the strain on the VRM and potentially allow a slight increase in the maximum all-core clock.
-
Source: https://videocardz.com/70162/amd-and-nvidia-preparing-graphics-cards-for-cryptocurrency-mining
It is known that the cryptocurrency mining craze has hit both Nvidia and AMD quite hard: with the launch of the new generation, Chinese coin-mining farms have bought video cards in bulk just to use them for mining, while gamers were desperate to find cards in stores. According to VideoCardz, both Nvidia and AMD are preparing special Pascal and Polaris cards dedicated solely to compute tasks, carrying a 90-day warranty. I think that's not attainable in certain markets where products must have a minimum one-year warranty; it depends on the country.
- 104 replies
- Tagged with: crypto currency, amd (and 2 more)
-
Source: https://www.theverge.com/circuitbreaker/2017/5/30/15713394/intel-compute-card-pocket-pc-computex-2017-lg-dell-lenovo
Looks like there will be 4 variants, all passively cooled, with no word on price. Interesting product; they seem to be aiming it as a reference device for manufacturers. I'd be interested in the modularity concept where these slide into a 'laptop' dock or a 'desktop' dock. Perhaps even a tablet or mobile dock?
-
Google Compute Engine now has machine types with 96 vCPUs and 624GB of memory
Guest posted a topic in Tech News
Just yesterday (or 10/5/17 if you're a latecomer), Google announced that it now allows VM configurations with up to 96 vCPUs and 624GB of memory for its Compute Engine machine types on its cloud platform. This is a pretty sizable increase over what was offered in March, with VM configs of 64 cores and 465GB of RAM. (3) And if the previous options are still not enough and you happen to have workloads compatible with what is mentioned above, you now have even greater options for your compute budget. Well, you have a beta to try it out, and there are even greater offerings coming someday, somewhere...
Update 1: Google Cloud will let you sign up for a one-year trial with $300 of credit (annual, not monthly) if you want to play around and try it out. The downside is that during the trial you can only make VMs with up to 8 cores and 52GB of RAM. That configuration is estimated at $345.73/month, if that gives you a better indicator of how much the full-fat 96-core option will cost (a rough extrapolation follows below). What are your thoughts on this, and will you end up using it?
Sources: (Google) (Tech Crunch 1) (Tech Crunch 2)
This is my first tech news and reviews topic, so if there are mistakes, please do say so. This post is also compliant with night theme.
- 11 replies
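A rough extrapolation, assuming (as a simplification) that price scales linearly with vCPU count: the 96-vCPU machine is 12x the 8-vCPU trial config, so roughly 12 x $345.73 ≈ $4,149/month, before any sustained-use discounts, and real pricing also depends on memory.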
-
Has anyone figured out how to get a Tesla GPU to output video to a different video adapter?
-
Hello! I'm looking to get a double-precision-enabled GPU on the cheap. I run a program called BOINC, where you donate your PC's computation to interesting projects, whether calculating asteroid shapes and spins, creating a model of the Milky Way, or something else. I'm in a computing 'arms race' with a friend and I want to build a lead on him with a double-precision-capable GPU. I've got about $100, and I would like to keep the card in my main PC, which has an Nvidia card, so I believe I am limited to Quadros or the like (for driver reasons, maybe I'm crazy). If anyone could shed some light on how to pick a card I would appreciate the help; I'm a bit out of my depth with the computational cards, but I think it would be fun to learn about them and to mess with the card over my summer vacation.
-
So this fall I will be attending the University of Alberta to study engineering. Some of the programs I'll be using include:
- AutoCAD
- Wolfpack
- Mathematica
- ABAQUS
- VMGSim
- MS Visio
- SolidWorks (CAD package)
- ANSYS
- MathCAD
Needless to say, I'll need a new computer. They sent out an email notifying future students of the general specifications that will be required for our computers. This link has the details. However, they don't specify which "CAD" GPUs are the minimum, only that one will be required. Knowing this, I figured double-precision compute would be very important, so I was thinking of getting a Lenovo P71 workstation laptop. Specs:
- i7-7700HQ
- Win10 Pro
- 17.3" IPS UHD 4K
- 32GB DDR4
- Nvidia Quadro P3000
The price is in the $3000+ range and I plan on using it for at least 4-5 years through all my studies. Then I did some additional research into the double-precision compute of the Quadros. From what I can tell, they are identical to their GeForce counterparts (even slightly slower because of the reduced clock speeds), with FP64 at 1/32 of the FP32 rate (a quick worked example of what that ratio means is below). So what gives? Can a GeForce card be used effectively instead? If so, I can use my current Asus G750 with a GTX 870M. If not, should I build a Ryzen Threadripper-based desktop with a GeForce card to handle heavy compute simulations and modelling? That option has the benefit of serving as a dual-purpose workstation for studies and a gaming rig for relaxation in my free time. Anyway, I have 2 months to decide, but I need input from people who know more about hardware than me (and I do know quite a bit) when it comes to professional applications.
P.S. This might be a great video for Linus to do on Techquickie or LTT: what different types of students should be looking for in a computer depending on their profession.
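To put that 1/32 ratio into numbers (using a round hypothetical figure rather than any specific card's spec): a GPU with 6 TFLOPS of FP32 throughput would deliver only 6/32 ≈ 0.19 TFLOPS (about 190 GFLOPS) of FP64, so the ratio matters far more than Quadro branding for double-precision work.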
-
So I decided to run a folding client on my gaming computer temporarily. I have a server that is my main host, but I wanted to pull in some more points, so I started folding on my gaming rig, GPU only. The GPU is running around 65C to 70C. The question is: will running a GPU at 100% like this for, say, a month have any negative effects? Thermal paste, stability, lifespan?
- Gigabyte RTX 2080 Windforce
- 700W Platinum PSU
- i7-9700K
-
Okay, so I recently got myself a lovely NVIDIA Tesla C2075 at an excellent price. The seller was unsure of the card's operational condition, as it had been in storage for some time. I've swapped it into my gaming rig to verify it's working properly. From what I can tell, it's fine: after loading the driver it does a perfect job displaying the OS. But I want to put it under load, and all the benchmarks I have are for gaming cards; being a compute card, the C2075 will not work with Unigine Heaven or Cinebench R15, for example. I need to stress this baby to make sure there aren't any issues before it goes into real-world use, and I don't have access to any real-world compute workloads, so how can I go about testing it? Are there any GPU stress tests that would work with a compute card? I just need to verify basic functionality, so a compute-specific test isn't necessary, just one that would actually run on it. Could I just play some 3D games with it? All I need to know is whether it artifacts or overheats.
-
From the title, you can see where I'm going with this. And yes, I know Pascal definitely changes a few things. Among them: the SMs drop to 64 CUDA cores each on Pascal versus 128 on Maxwell, and there's an HBM memory controller. It also supports half precision at double the single-precision rate (there could be more changes we don't know about). Oh, and there are more cores in total.

So I was looking at the compute power of the new GP100. It seems pretty powerful, and it has some REALLY high clock speeds. I decided to take the Tesla M40 (the most powerful Maxwell Tesla GPU) and compare it to the Tesla P100. If my theory is right, that Pascal is super similar to Maxwell, then scaling the Maxwell GPU's compute power by everything that changed (CUDA cores, clock speeds) should give us Pascal's, right? I decided to look at single precision, since Pascal has a 1/2 FP64 rate while Maxwell only has 1/32, and Pascal has much better half precision than Maxwell (one of the things Nvidia changed for sure). Note: I'm using boost clocks, since modern Nvidia cards generally hold their clocks around the boost clock if not higher, and when Nvidia calculates their compute numbers they probably use the boost clock too.

So let's look at the M40, from Anandtech: http://www.anandtech.com/show/9776/nvidia-announces-tesla-m40-m4-server-cards-data-center-machine-learning
- 1140 MHz clock speed
- 3072 CUDA cores
- 7 TFLOPS peak single-precision compute

Now the P100, also from Anandtech: http://www.anandtech.com/show/10222/nvidia-announces-tesla-p100-accelerator-pascal-power-for-hpc
- 1480 MHz clock speed
- 3584 CUDA cores
- 10.6 TFLOPS peak single-precision compute

Now let's take out the calculator and find what the P100's compute would theoretically be if it really were just a Maxwell card with more cores and higher clock speeds (I'm not factoring in memory bandwidth, since I assume Nvidia wouldn't incorporate that into their calculations). The formula:

Maxwell compute x (Pascal CUDA cores / Maxwell CUDA cores) x (Pascal clock / Maxwell clock) = Pascal compute

So let's plug the numbers in: 7 TFLOPS x (3584/3072) x (1480/1140) = ... DRUMROLL PLEASE... 10.6 TFLOPS. WAIT, WHAT?!?! That's exactly Pascal's single-precision compute! I guess then... ILLUMINATI CONFIRMED!!! Lol, jk. So does this mean Pascal is a Maxwell design adapted for 16nm FinFET, HBM, more CUDA cores, and higher clocks? Tell me what you think, and if you think I made a mistake, please tell me. If you ask me, though... I don't know what to say. (A quick sanity check of the arithmetic is below.)
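If you want to replay the math yourself, here is a minimal sketch in Java using only the numbers quoted above from the Anandtech specs (the class and variable names are just illustrative):

public class PascalScalingCheck {
    public static void main(String[] args) {
        double maxwellTflops = 7.0;           // Tesla M40 peak FP32 (TFLOPS)
        double coreRatio = 3584.0 / 3072.0;   // P100 CUDA cores / M40 CUDA cores
        double clockRatio = 1480.0 / 1140.0;  // P100 boost clock / M40 boost clock (MHz)
        double pascalEstimate = maxwellTflops * coreRatio * clockRatio;
        // Prints roughly 10.6, matching the P100's quoted peak FP32 figure.
        System.out.printf("Estimated P100 FP32: %.1f TFLOPS%n", pascalEstimate);
    }
}

The core ratio is about 1.167 and the clock ratio about 1.298, so the combined scaling factor is about 1.515, and 7 x 1.515 ≈ 10.6.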
-
Source: http://techfrag.com/2016/02/17/nvidia-pascal-gp100-gpu-is-a-real-monster-with-12-tflops-sp-and-4-tflops-dp/
So it seems Pascal's compute numbers have been leaked. Compared to Kepler, Pascal looks like quite a big improvement; 3 times more than Kepler is definitely impressive. However, I don't think it's enough. Look at the Fury X: it has around 8.5 TFLOPS single precision and around 800 GFLOPS double precision. EDIT: However, some FirePro versions of the 290X have around 2.5 TFLOPS of double precision. The FirePro 9100 already has more than half the double precision of Pascal, and it's a card from 2 years ago. With Polaris, AMD is sure to improve, so they could still be ahead of Nvidia. Nvidia has made a huge improvement, but I don't think it is enough.
-
Source: http://www.dualshockers.com/2014/09/28/check-out-the-fantastic-technology-of-deep-down-on-ps4-from-cedec-2014/
Opinions/flames: It seems Nvidia isn't the only, or even the first, to ship voxel cone tracing in a commercial product.
- 1 reply
- Tagged with: ps4, impressive (and 5 more)
-
Hey guys, I just wrote this Java program that computes pi to an unlimited number of digits, and I thought: let's put it to the test and find its limit. The idea is to see how far your computer can compute pi. I tried to keep the source as lightweight as possible, and since I wrote it on an old, crappy laptop while on the train, anyone should be able to run it with no issues. So here's the source; have fun with it:

import java.io.IOException;
import java.io.PrintWriter;
import java.math.BigInteger;

// Streams decimal digits of pi one at a time (an unbounded spigot algorithm),
// writing each digit to ~/out.pi as soon as it is settled.
public class ComputePi {
    final static BigInteger two = BigInteger.valueOf(2);
    final static BigInteger three = BigInteger.valueOf(3);
    final static BigInteger four = BigInteger.valueOf(4);
    final static BigInteger seven = BigInteger.valueOf(7);

    static BigInteger q = BigInteger.ONE;
    static BigInteger r = BigInteger.ZERO;
    static BigInteger t = BigInteger.ONE;
    static BigInteger k = BigInteger.ONE;
    static BigInteger n = BigInteger.valueOf(3);
    static BigInteger l = BigInteger.valueOf(3);

    static PrintWriter out;

    public static void main(String[] args) {
        BigInteger nn, nr;
        boolean first = true;
        int decimals = 0;
        try {
            out = new PrintWriter(System.getProperty("user.home") + "/out.pi");
            while (true) {
                // If the next digit n is settled, emit it and rescale the state by 10.
                if (four.multiply(q).add(r).subtract(t).compareTo(n.multiply(t)) == -1) {
                    out.print(n);
                    if (first) { out.print("."); first = false; }
                    nr = BigInteger.TEN.multiply(r.subtract(n.multiply(t)));
                    n = BigInteger.TEN.multiply(three.multiply(q).add(r)).divide(t)
                            .subtract(BigInteger.TEN.multiply(n));
                    q = q.multiply(BigInteger.TEN);
                    r = nr;
                    out.flush();
                    decimals++;
                    System.out.println((decimals - 1) + ", Decimals Computed");
                } else {
                    // Otherwise fold the next term of the series into the state.
                    nr = two.multiply(q).add(r).multiply(l);
                    nn = q.multiply(seven.multiply(k)).add(two).add(r.multiply(l))
                            .divide(t.multiply(l));
                    q = q.multiply(k);
                    t = t.multiply(l);
                    l = l.add(two);
                    k = k.add(BigInteger.ONE);
                    n = nn;
                    r = nr;
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (out != null) out.close();
        }
    }
}

The computed digits of pi are written to a file in your user folder (C:\Users\<your user>); people running Linux will know where to find the output in their home directory. Keep in mind that the console reports how many decimal places have been found, while the output file contains all digits from 3.1 up to the latest computed digit. The file is called out.pi, and the results inside it are written in real time, so on Linux you can use watch to see it change live. Also, if your computer crashes, the file should still show your PC's limit, since each digit is written to the file before the next one is computed. Don't forget to post your final results and how long you've been running it!

Keep in mind this isn't the most efficient way of computing pi. If you want to break the world record and reach millions of digits, you're better off using an assembly-based pi calculator that communicates with the hardware directly, unlike Java, which runs background tasks while executing your code.

Edit: Added instructions and a runnable JAR for people who don't know how to program. !Important: read the following if you don't know how to program or have no programming experience!
To run it, follow these steps; otherwise it will run invisibly in the background (not a real problem, since it only uses about 8MB of RAM, but still):
1. Download the file.
2. Shift + right-click the folder (or desktop) where you saved it.
3. Choose "Open command window here".
4. Type "java -jar computePi.jar".
5. And go!
If you accidentally double-click the JAR, just force-close Java through Task Manager when you want to stop it; if you run it with these instructions, simply close the command window.
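As a quick sanity check that a run is healthy, the start of out.pi should match the known expansion of pi: 3.141592653589793238462643383279...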
-
Since I just ordered a 7970, I was wondering about the "GPU compute" power of AMD cards compared to Nvidia cards. In loads of reviews people mention that AMD is drastically better when it comes to compute power, but what does this actually mean in real life? When will the superior compute power of AMD cards come into play, and which applications will use it? Does this mean that a 7970 will speed up Sony Vegas rendering more than a GTX 680 or 780?
- 19 replies
- Tagged with: amd, graphics card (and 1 more)
-
Jean-Christophe Baratault, Nvidia's head of professional graphics and the man behind the success of the Quadro & Tesla lines of graphics accelerators, has joined AMD. This is part of AMD's ongoing effort to restructure the company; according to CEO Rory Read, an aggressive push toward securing more market share in the professional graphics segment is a huge part of it. source