Search the Community
Showing results for tags 'cuda'.
-
Hi, I recently got hold of two GTX Titan Blacks and I'm trying to get CUDA running on them, but so far I haven't been able to get CUDA working on this PC. I'm using driver version 470.223.02 with CUDA version 11.4 on Ubuntu 22.04. When I try to download the CUDA toolkit for that version, the latest Ubuntu release supported is 20.04. Is there any way to get CUDA running on this system without reinstalling Ubuntu (downgrading to 20.04)? Thanks in advance.
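For reference, here's the quick check I've been using to see whether CUDA actually works (just a sketch, and it assumes a PyTorch build compiled against a matching 11.x runtime):

```python
import torch

# If the 470 driver and an 11.x runtime line up, all three of these
# should succeed; a False here means the stack is broken, not the card.
print(torch.cuda.is_available())       # expect True
print(torch.version.cuda)              # CUDA version the build targets, e.g. "11.4"
print(torch.cuda.get_device_name(0))   # should report the Titan Black
```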
-
I'm currently looking into a GPU upgrade as my GTX 1080 is showing its age. I will definitely buy a used card as my kidneys are preoccupied. To get a big, noticeable upgrade from my GTX 1080 I'm looking at something in the performance range of an RTX 3080-3090 or RX 6800 XT-6900 XT. The Radeon cards are 100-200€ cheaper than their Nvidia counterparts, but: my problem is that for university and privately I use CUDA quite a lot, whether for PyTorch, Stable Diffusion or LLMs. I can replace CUDA for PyTorch with ROCm on Linux, and there are optimizations for Stable Diffusion. So there is more work involved in running Radeon, but it would work. TL;DR: What other features and comforts will I lose if I switch to Radeon instead of Nvidia?
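For what it's worth, the PyTorch side at least looks painless on paper: ROCm builds expose AMD GPUs through the same torch.cuda API, so code like this should run unchanged on either vendor (a minimal sketch; I haven't tested it on a Radeon myself):

```python
import torch

# On a ROCm build of PyTorch, AMD GPUs answer to the torch.cuda API,
# so the usual device-selection idiom is vendor-agnostic.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.nn.Linear(512, 512).to(device)
x = torch.randn(32, 512, device=device)
print(model(x).shape)  # torch.Size([32, 512])
```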
-
As of Q1, Nvidia holds roughly 95% market share in the machine-learning accelerator market and 88% of the dedicated GPU market globally. ROCm support continues to be very limited, effectively making CUDA the only option for many industries. Does this prove dangerous for the future of these technologies, and should governments step in to put an end to Nvidia's market dominance? Sources: https://www.cnbc.com/2023/02/23/nvidias-a100-is-the-10000-chip-powering-the-race-for-ai-.html https://wccftech.com/q3-2022-discrete-gpu-market-share-report-nvidia-gains-amd-intel-in-single-digit-figures/
-
Hi everybody, I previously owned a desktop (mainly for 3D rendering and deep learning) with the following specs:
Processor: AMD 5950X
RAM: G.SKILL TridentZ RGB Series 128GB (4 x 32GB) 288-Pin DDR4 SDRAM DDR4 4000
Motherboard: MSI X570 ACE
GPU: 2 x 2080 Ti Founders Edition (did not set up SLI since it isn't needed)
Storage: 1 x Samsung 980 PRO M.2 2280 2TB, 2 x Samsung 860 EVO SSD 4TB
Power: 1300W Seasonic Gold
Case: LIAN LI O11 Dynamic XL ROG
Now I've got one more 3080 Ti and I was wondering whether it is possible to add it to the current setup. The specs of the mobo say:
1x PCIe 4.0/3.0 x16 slot (PCI_E5, supports x4 mode)
2x PCIe 4.0/3.0 x1 slots
2x PCIe 4.0/3.0 x16 slots (PCI_E1, PCI_E3)
3rd Gen AMD Ryzen™ supports PCIe 4.0 x16/x0, x8/x8 modes
2nd Gen AMD Ryzen™ supports PCIe 3.0 x16/x0, x8/x8 modes
Ryzen™ 4000 G-Series supports PCIe 3.0 x16/x0, x8/x8 modes
Ryzen™ with Radeon™ Vega Graphics and 2nd Gen AMD Ryzen™ with Radeon™ Graphics support PCIe 3.0 x8 mode
If I understand it correctly, I could put the 3080 Ti in the top PCIe slot (which should run at x16 since I'm not enabling SLI, but x8 when using CUDA?), and the 2080 Tis on the remaining x8 and x4 slots. Will this work? (If not, I could sell one 2080 Ti.)
- 11 replies
-
Budget (including currency): <$400 US Country: USA Games, programs or workloads that it will be used for: CUDA development, working through Real-Time Rendering by Moller and computer vision by Forsyth (on my own). Want to use headless Xubuntu. Other details (existing parts lists, whether any peripherals are needed, what you're upgrading from, when you're going to buy, what resolution and refresh rate you want to play at, etc): Situation: I was gifted an Nvidia GTX 1080 (10.5″ long) and it won't fit in my prebuilt, so I have to build/buy another tower. $400 is pushing it for me. Path 1: Build a dirt-cheap Linux box. -- Hoping to take advantage of Black Friday deals (and using PCPartPicker day-to-day). -- Watching barebones systems for Black Friday. Path 2: I'm looking for a dirt-cheap prebuilt (older models where refurbs exist preferred) that doesn't have a GPU (except maybe onboard), can support a separate GPU (reasonable PSU, etc.) and has room for a 10.5″+ GPU in the case (without drilling), and is Linux-compatible. -- Removing a cheap separate GPU isn't a problem; I'm just hoping to spend as little as possible. -- e.g. I don't want a non-SFF OptiPlex because I don't want to deal with removing the cage. Parts list: No case, some old tiny HDDs, one 8GB 1Rx8 PC4-3200AA stick of RAM. I know this Black Friday is going to suck, but there's no other option. Thank you!
-
TL;DR: Does LHR affect regular CUDA core performance? I am primarily a programmer who does some gaming on the side. Back at the RTX 3070 launch, I set my mind on getting a 3070 or better for machine learning and VR gaming. I have a relatively low budget, so I can't afford any workstation cards, and I would prefer a 30-series card due to their insane value at MSRP (if I can even find one in the next year lol). With some new LHR cards coming out, I am wondering if CUDA core performance for training machine-learning models will be impacted by the mining limiter. Let me know if I should add any information. Thanks!
-
Hello, I am building a coworker a budget system and was wondering what the best option would be for a workstation. They're getting a Ryzen 2700 & 32GB of RAM. He told me he uses the following software: ZBrush, Unity 3D, Unreal Engine, AliceVision. I have seen some graphics cards on the used market and was wondering which is the best one to get. Of the ones below, which would be the best recommendation for his use case? Potential cards: Nvidia Quadro M4000 (8GB) ~$200 CAD; Nvidia GTX 1070 (8GB) ~$220 CAD; Nvidia GTX 1080 (8GB) ~$400 CAD (seems overpriced); Nvidia RTX 2060 (8GB) ~$400 CAD (seems overpriced). Would the Quadro be a good card since it's designed for workstation applications? I know it's quite old at this point, and the Wiki page says its closest comparable GeForce model would be a GTX 970. Would my friend see better performance for his applications from a workstation card, or would the other cards listed offer more? Thanks!
-
Hey, I just noticed the 3060 Ti is not on Nvidia's list of CUDA-supported GPUs. The last-gen 2060 is on the list, so I am questioning why that is. I don't have any experience with this specific feature set, and it so happens that I bought a 3060 Ti for a friend who does graphics, is still in university and has upcoming projects which I think can also use the GPU. My question is: can the GPU somehow use those features if a program is able to use CUDA? The GPU does have CUDA cores after all. I am feeling really dumb right now for missing something like this. The list: https://developer.nvidia.com/cuda-gpus
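If it helps, here's how I'd check what the card itself reports (a small sketch, assuming a CUDA-enabled PyTorch install):

```python
import torch

# The developer list is really a compute-capability table and lags new
# releases; any GeForce card with CUDA cores runs CUDA programs.
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f"Compute capability: {major}.{minor}")  # a 3060 Ti reports 8.6
```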
-
Hi everyone. I saw Sony's PS5 keynote presentation a while ago and I want to discuss one particular statement that was made there: higher-clocked GPUs are better than lower-clocked GPUs with more cores. They were referring to the then-fresh revelation that the new Xbox will actually have 2 TFLOPS more raw GPU performance. This is thanks to it having around 50% more streaming processors than the otherwise similar PS5 SoC. However, due to the PS5's 400 MHz higher GPU clock, the TFLOPS difference is only 20%. Sony actually claimed that, due to the overall higher clock speeds, other factors beyond raw floating-point calculations should benefit enough to catch up to the on-paper more "powerful" Xbox SoC. Now I understand that older games, designed with less parallelism within the rendering engine, would straight up support Sony's claim, but upcoming game engines might actually benefit more from having 50% more cores and not being 50% slower. The only thing that's annoying me is by how much each philosophy holds true, so I came up with a benchmark idea that I can't pull off myself due to lack of GPUs.

Anyway, this is the idea: you need two similar GPUs, same architecture, ideally one tier apart. However, they have to fulfill some specific requirements in order for this to work: 1. Something that can easily be pushed to constant 100% utilization within random GPU-bound benchmarking; this concerns both GPUs. 2. Both GPUs need to be able to achieve the same theoretical TFLOPS while having a different number of streaming processors / CUDA cores. You can achieve this by over- and underclocking the GPUs. In order to hit the same TFLOPS, you can use this formula for both AMD and Nvidia: one core can do 2 FLOPs each clock, or, easier for the calculator app: coreCount × 2 × frequency in GHz = GFLOPS. In the case of the Xbox, that would be 3328 × 2 × 1.825 = 12,147 GFLOPS. The reason for capping the utilization during benchmarking is to make those other components, like ROPs and TMUs, shine, giving a more quantifiable side to Sony's claims. If anyone has the means to test this out, please do and share the results!

For further information, I use the Android apps CPU-L and GPU-L to check hardware specs. That's also where I noticed the formula for calculating the TFLOPS. This, by the way, works for most of the latest (if not all) AMD and Nvidia GPUs. Intel GPUs actually have a similar formula, but with 4 or 8 FLOPs per clock. Also, I am mainly curious about this "basically the same" console GPU comparison, but this test should also be possible between different generations, architectures or even between AMD/Nvidia/Intel GPUs. As long as the math shows the same theoretical FLOPS, any comparison should bring more light into the "FLOPS does not equal performance" argument.

Edit: I almost forgot a rather major requirement for this test: constant GPU clock. Even if you set both GPUs to clock speeds that would give them the same theoretical FLOPS, any kind of thermal throttling, or even automatic downclocking due to lack of utilization, would render the comparison pointless. I also thought of a way to calculate the actual FLOPS achieved per frame, by multiplying the core count by the percentage of utilization, but that would generate more data to crawl through.
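In code form, the calculator-app version of the formula is tiny; a throwaway sketch (the function name is mine, the numbers are the Xbox example above):

```python
def theoretical_gflops(cores: int, clock_ghz: float, flops_per_clock: int = 2) -> float:
    """Theoretical throughput: cores x FLOPs-per-clock x clock (GHz) = GFLOPS."""
    return cores * flops_per_clock * clock_ghz

# Xbox example from the post: 3328 shaders at 1.825 GHz
print(theoretical_gflops(3328, 1.825))  # 12147.2 -> ~12.1 TFLOPS
# For Intel GPUs, use flops_per_clock=4 or 8 instead of 2
```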
-
So, I know that CUDA cores can be more or less powerful, but why? What causes a CUDA core to be weaker or stronger? Why did Nvidia use weaker CUDA cores for the Ampere architecture?
-
Hi guys! I'm upgrading my system (or, let's be honest, nearly building a new PC). The only things I'll keep from my old system are the PSU and the drives. I already made a list and would like your comments on it. The list is below the requirements. 1. Budget & Location: Max 1500 CHF (Swiss francs). If we remove the Swiss VAT and look at the exchange rate, it comes down to approximately 1460 USD. As you guessed, I live in Switzerland. 2. Aim: Mainly gaming. I want to be able to run AAA games at 1440p with at least High settings, for instance Elden Ring and Hitman 3 (I know it's old but I've been without a GPU for a while). On the side I might use my rig to run ML assignments for school, so I will need a GPU with CUDA support. 3. Monitors: I will play on a 1440p monitor, and use an old 1080p monitor for Discord and/or web browsing. I will purchase the 1440p monitor in 3 months; because of that it isn't included in the budget and I'm not asking for recommendations on it. 4. Peripherals: I've got all I need. 5. OS: I'll make do with unactivated Windows 10. 6. Why am I upgrading? My graphics card died in early 2020 (the games that still ran looked like vaporwave). I made the foolish mistake of waiting for the RTX 30 series to launch, and you can probably guess what happened next. Since prices are nearly sane right now, I want to jump on the occasion. 7. The list: So I arrived at this list: https://pcpartpicker.com/list/QnJbKp. A few notes: The PSU is only there for reference; it is what I have at home. I couldn't find the memory module I saw in my store of choice (Digitec), so I used a parametric filter. I was thinking of using this module (this links to Kingston's website). 8. A few questions I wanted your opinion on: Is 2x8GB enough or should I go for 2x16GB? I cannot see a single 8GB module validated for dual channel on the motherboard's QVL. Is it a bad idea to use this board and/or this memory kit? I already have a 512GB SATA SSD and a 1TB HDD. Am I missing out by not upgrading to an NVMe SSD? This post is way longer than I anticipated; sorry about that. I hope I've given you all the info you need! Have a nice day!
-
Hi there, I'm new to this community so I may have missed some details. A month ago I got a used Galax GTX 1060 from a friend and started mining on the HiveOS Linux system. Everything seemed normal until a few days ago, when it crashed due to a CUDA error: "The launch timed out and was terminated" (code: 6) from NBMiner. I tried rebooting, but the same error happened every 3 minutes after boot; then I removed the GPU and the problem was solved for the rig. Later, I installed the GPU in my normal Windows rig; my specs are as follows:
Processor: AMD Ryzen 7 5800X 8-Core Processor, 3800 MHz, 8 core(s), 16 logical processor(s)
Motherboard: MSI B550 GAMING EDGE ATX
GPU: Nvidia GTX 1060 6GB
RAM: Corsair 16GB 3200MHz DDR4 (x2)
PSU: ASUS ROG STRIX 750W GOLD
It booted up surprisingly without major issues, until I tried starting Dead by Daylight: the game ran for a minute, then the screen froze while the audio kept playing, and shortly after, the whole game crashed. I then tried some other games like Minecraft, Roblox and Euro Truck Simulator 2. Minecraft and Euro Truck Simulator 2 booted without problems, but Roblox gave the same result. I have since found multiple forum threads and tried the following fixes: uninstalling and reinstalling the Nvidia GPU driver; installing the Nvidia CUDA driver; underclocking my GPU in MSI Afterburner; checking in GPU-Z whether the GPU was detected (it appeared to be normal). When none of that worked, I reinstalled Windows. However, the problem remained after reinstalling (or even worsened); Wallpaper Engine used to work properly on my device, and now it just won't run properly. Then I shifted my focus to hardware and tried several fixes: replacing the two Corsair RAM sticks with two spares (also underclocking, which didn't work); replacing my GPU with my old RX 580 (it worked properly, but after swapping back, the problem remained). When I thought all hope was lost and tried benchmarking, the funny part is the GPU ran different benchmarks without graphical glitches or freezes. Now I am really confused. I can only figure it must be something wrong with CUDA, but it seems to be a rare case for both the gaming and mining communities, even reaching into the professional server-level field. As an ordinary miner and gamer, solving it seems out of my reach. For now the GPU is used for basic tasks. If anyone has adequate knowledge about this problem, please help; I'm willing to post anything needed. Thanks, T
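In case it helps anyone diagnosing something similar: a repeatable way to stress just the CUDA side is a loop of identical large matrix multiplies with a checksum. The same inputs should give the same result every iteration, so any drift (or another launch-timeout error) points at the card itself rather than any one game or miner. A rough sketch, assuming a CUDA-enabled PyTorch install:

```python
import torch

# Repeat an identical matmul; the same inputs on the same card should
# give the same checksum every time, so any drift suggests faulty
# VRAM or shader hardware.
a = torch.randn(4096, 4096, device="cuda")
ref = (a @ a).sum().item()
for i in range(100):
    if abs((a @ a).sum().item() - ref) > 1e-2:
        print(f"Mismatch on iteration {i}: hardware looks suspect")
        break
else:
    print("100 iterations, no mismatch")
```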
-
Good evening everyone! I hope you are doing well! These past couple of weeks have been so frustrating trying to render a 4K video using CUDA in Adobe Premiere Pro 2022. Whenever I render a video, both of my monitors instantly turn off and go to sleep (meanwhile I can hear the Windows device-disconnect sound). Occasionally, when the monitors turn back on, the resolution has been changed automatically to 480p with vertical/horizontal lines (and the display even shifts to a warm color). Both monitors are not faulty, the GPU handles gaming, all drivers have been updated, and I have removed and reinstalled Adobe Premiere Pro. None of these options worked. I'm starting to believe I don't have enough wattage from my PSU? Here are the specs for my build below. Any help would be much appreciated.
Motherboard: ASUS Maximus V Formula
CPU: Intel Core i5-3570K
CPU Cooler: Corsair Hydro Series H60 AIO Liquid CPU Cooler
Graphics Card: NVIDIA GeForce RTX 2070
RAM: G.Skill Ripjaws X Series 32 GB (4 x 8 GB)
Current PSU: Corsair CX500M
Storage: Samsung SSD 860 EVO 1TB; WDC 1TB Blue 2.5in 5400RPM; WD Purple Surveillance Hard Drive 3TB 3.5in 5400RPM; HP 22x Internal SATA SuperMulti LightScribe DVD Burner
Fans: 1 x Corsair 120mm SP Series PWM fan; 1 x Thermaltake Ring 12 High Static Pressure 120mm Circular LED Case Radiator Cooling Fan CL-F038-PL12OR-A Orange; 1 x be quiet! Pure Wings 2 120mm PWM high-speed, BL083; 2 x be quiet! Pure Wings 2 140mm
Bluetooth/WiFi: TP-Link WiFi 6 AX3000 PCIe WiFi Card (Archer TX3000E)
-
Getting NVENC or similar on Thunderbolt, without a whole GPU
RedyAu posted a topic in Graphics Cards
Hi! This may be a stupid question, but I hope you can point me in the right direction. I have a ThinkPad T15 laptop with a 10th-gen i7 in it. It's great, compact, fast; I love it. It has no discrete graphics, however. I've been thinking of buying a Thunderbolt eGPU enclosure and some low-power GPU (for example a GTX 1650), but I was shocked to find out how much eGPU enclosures cost. More than a GPU, even in today's market! I really only need NVENC, or some other hardware-accelerated video encode/decode, plus a bit of processing power. I'm not planning on gaming (and certainly not recent AAA titles), but occasionally I need to edit video or livestream an event. This CPU alone can't handle that, as OBS uses 30-40% just for encode/decode. Does a device like this exist? Or can you recommend some extremely simple eGPU case/cable/anything that doesn't cost a fortune?
-
As the title says... I cannot enable hardware rendering in Premiere Pro. I used to be able to, but now I cannot. See attached JPG. GPU is a GTX 980 with the latest drivers installed. Premiere Pro is obviously whatever the latest version is... 14-something-or-other... Windows is fully up to date. Everything is taking an age to encode. Help!
-
Tagged with: premiere pro, premierepro (and 3 more)
-
Budget (including currency): 50,000 HKD Country: Hong Kong Games, programs or workloads that it will be used for: machine learning, CUDA Other details (existing parts lists, whether any peripherals are needed, what you're upgrading from, when you're going to buy, what resolution and refresh rate you want to play at, etc): Hey fellas, I am a student in a computer science lab and we want to build a PC for machine-learning tasks. We don't have much experience setting up a heavy-processing PC, hence I would like your help. A shop recommended a build to us (attached to the post). The shopkeeper also recommended that we go with a higher-wattage power supply (mentioned at the end as optional) so that we may upgrade to 4 GPUs in the future. However, we are skeptical whether the current setup can fit 4 GPUs. Are we right? We would like the option to add more GPUs in the future, provided it doesn't exceed the budget by too much. Can you please suggest any changes required to achieve that?
- 11 replies
-
Tagged with: gpu, machine learning (and 1 more)
-
Budget (including currency): $800-900 Country: USA Games, programs or workloads that it will be used for: 1080p or 1440p gaming, CUDA software, video-processing software Other details (existing parts lists, whether any peripherals are needed, what you're upgrading from, when you're going to buy, what resolution and refresh rate you want to play at, etc): I was thinking of the AMD Ryzen 5 3600 over Intel's i7-7700, for the budget and the extra clock speed + number of cores. Either way, I'm open to newer, better options than these.
-
Tagged with: gaming, workstation (and 2 more)
-
Hello all, I am looking at buying a 1060 6GB GPU and I am wondering which one has the best quality and performance. Now I know the RX 480 performs better, but I need the CUDA cores for machine-learning support. I will also play games and such, but that's not the primary objective. I want the EVGA SC Gaming one with only one GPU fan, but I'm afraid cooling will be a problem. Here is a link to it: http://www.ncix.com/detail/evga-geforce-gtx-1060-gaming-28-133579.htm http://www.ncix.com/detail/evga-geforce-gtx-1060-sc-b9-133580.htm?affiliateid=7474144&promoid=1637 (sorry, the first link was the wrong one) Which ones do you guys suggest? Thanks!
-
I am planning to start a new YouTube channel and am therefore upgrading my current PC's GPU. I would have gone with the RX 480, because its price/performance ratio is about as good as it gets in my country and because I wanna play some games at 1080p. But after doing some research online, I saw that some programs (like Blender) benefit from Nvidia's CUDA acceleration, though I couldn't find any current numbers. As of right now I might pick up a GTX 1060 instead, but I'm not too sure because its price/performance ratio is not as good as AMD's RX 480. Does anyone have experience with editing/rendering on AMD/Nvidia GPUs and can help me out? I'm also not too sure how important the amount of VRAM is for editing/rendering: 1060 (3GB vs 6GB), 480 (4GB vs 8GB).
-
I plan on building my first computer. I have scrambled together something that represents a computer on pcpartpicker.com. I live in the Netherlands and not all parts on the site are easily obtainable here. I have a decent 4-year-old computer right now, but I use it extensively: I use 3D modeling and rendering software, I run virtual machines, I run heavy artificial neural networks with CUDA, I write programs and I play games (GTA V, heavily modded Minecraft and some other stuff). It has started to fail to meet my needs, so I decided to build a new computer. These are the components that I selected: https://pcpartpicker.com/list/fWbFnn
CPU: Intel Core i7-7700K 4.2GHz Quad-Core
CPU Cooler: Cooler Master Hyper TX3 Evo 43.1 CFM
Motherboard: MSI Z270 SLI PLUS ATX LGA1151
Memory: G.Skill Ripjaws V Series 16GB (2 x 8GB) DDR4-3000
SSD: Samsung 960 PRO 512GB M.2-2280
HDD: Seagate Barracuda 3TB 3.5" 7200RPM
Video Card: Gigabyte GeForce GTX 1080 Ti 11GB Gaming OC 11G
Power Supply: Cooler Master 750W 80+ Bronze Certified Semi-Modular ATX Power Supply
Optical Drive: LG GH24NSD1 DVD/CD Writer
I also plan on getting 2 monitors that weren't in PCPartPicker's database; they can be found here: http://aoc-europe.com/en/products/i2475pxqu I'm keeping my current mouse and keyboard. I can get the computer components for a total of €2003.37 (US$2242.91) and the 2 monitors for €338.00 (US$378.41), which makes a total of €2341.37 (US$2621.32). As I said earlier, I am completely new to building computers and I am not really certain of what I actually need. I know I need a good graphics card, a good processor and a lot of memory, so I picked the components in such a way that in a few years I can add another graphics card and double my memory. Could some people give me some feedback on this PC? Does this build make any sense? Are there better configurations for about the same price or cheaper? Any tips and comments are welcome! Edit: My budget is somewhere around €2250; not a hard limit, but I don't want to go too far over.
- 14 replies
-
Tagged with: gaming, 3d rendering (and 2 more)
-
Tensor cores by Nvidia. Dear Linus, would you please give us a "Linused" video on this topic? - Do they improve the gaming experience? - Do they improve video rendering? - Would I benefit from these cores if I deal mainly with 3D modeling production? (what I really wanna know) - Or are these "tensors" just for "AI" and deep learning? - What's Nvidia's target market for this tech?
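For context on the deep-learning angle: on the software side, tensor cores get picked up more or less automatically when matrix math runs in reduced precision. A minimal PyTorch sketch (assuming a Volta-or-newer GPU; the layer sizes are arbitrary):

```python
import torch

# Tensor cores accelerate matrix multiplies in half/mixed precision;
# autocast routes eligible ops to them without other code changes.
model = torch.nn.Linear(1024, 1024).cuda()
x = torch.randn(64, 1024, device="cuda")

with torch.autocast(device_type="cuda", dtype=torch.float16):
    y = model(x)  # this matmul is eligible for tensor-core dispatch
print(y.dtype)    # torch.float16 inside the autocast region
```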
-
Hi there, I am considering buying a GTX 1080 Ti for my existing machine. Currently I have a 2GB GTX 680, and it has been really good at keeping up with most tasks. Question: I am looking online and there are cards with different base clock speeds, and as I understand it, the higher the clock, the faster the performance. However, how much of a difference does it make when modeling heavy 3D scenes (dense in geometry/shading)? Also, does it have any effect on CUDA rendering speed? As always, thanks to everyone.
- 3 replies
-
Tagged with: rendering, graphics card (and 2 more)
-
I am curious whether the CPU used affects the performance of the GPU as it pertains to carrying out CUDA operations. Does Intel or AMD perform better when working with the GPU? My understanding was that Intel had a partnership with Nvidia to develop products that were well integrated; is this true, and does it mean that Intel has an advantage? I ask because I'm in a position to upgrade my workstation, and the only requirement I really have is to optimize it for working with CUDA, along with it being a beast at solving hefty numerical programs. Thanks for the answers. P.S. I run Ubuntu, if that makes a difference for this. Edit: Since people are asking, most of my work consists of simulating electromagnetic systems. So finite element, Runge-Kutta for 10^6 or more interacting cells (scales as N log N in my experience). I use some open-source code like Mumax3 (mumax.github.io) along with writing my own.
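One concrete place the CPU/platform choice shows up is host-to-device transfer speed (PCIe lanes, memory bandwidth), since the GPU does the math either way. A rough probe I'd run on any candidate box; just a sketch, assuming a CUDA-enabled PyTorch install, with arbitrary sizes:

```python
import time
import torch

# Time moving ~256 MB of float32 to the GPU; this mostly measures the
# PCIe link and host memory, which is where the CPU platform matters.
x = torch.randn(64 * 1024 * 1024)  # 64M floats ~ 256 MB
torch.cuda.synchronize()
t0 = time.perf_counter()
y = x.cuda()
torch.cuda.synchronize()
gb = x.element_size() * x.nelement() / 1e9
print(f"H2D bandwidth: {gb / (time.perf_counter() - t0):.1f} GB/s")
```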