Search the Community
Showing results for tags 'deep learning'.
-
Is an RTX 3080 10 GB good for AI/ML and deep learning purposes, or is an RTX 3060 12 GB the better choice, price point aside?
- 8 replies
-
- gpu
- deep learning
-
(and 2 more)
Tagged with:
-
Budget (including currency): 2500-3000 € (excluding GPUs)
Country: Italy
Games, programs or workloads that it will be used for: Workstation for deep learning, mostly for training, but also as an inference platform to serve the trained models.
Other details (existing parts lists, whether any peripherals are needed, what you're upgrading from, when you're going to buy, what resolution and refresh rate you want to play at, etc):
NOTE: I already have the two GPUs. I have based this build on the one I found here: https://www.mifcom.de/workstation-ryzen-9-7950x-rtx-4090-dual-id18524 But I have also considered a few more options to replace the CPU, motherboard or case. I have a few questions as well:
- Should I change the cooler to an AIO? I have seen the performance reviews of the NH-D15 and I know it's a very good cooler; I just wonder whether it is a good choice for this huge build with two 4090s in it.
- As far as I understand, the 7950X3D shouldn't be very useful for my use case, but I'd like to get confirmation of this before picking the 7950X. Also, is the 7950X even a good choice, or should I go with a Threadripper instead of a Ryzen?
- Is the Crosshair X670E Hero overkill here? Maybe a good alternative would be Gigabyte's X670E Aorus Master AM5?
CPU: AMD Ryzen 9 7950X (16x 4.5 GHz, 170 W), qty 1, €575 each (€575 total), https://www.trovaprezzi.it/processori/prezzi-scheda-prodotto/amd_ryzen_9_7950x
CPU (alternative): AMD Ryzen 9 7950X3D, qty 0, €705 each (€0 total), https://www.trovaprezzi.it/processori/prezzi-scheda-prodotto/amd_ryzen_9_7950x3d
Motherboard: ASUS ROG Crosshair X670E Hero, qty 1, €625 each (€625 total), https://www.trovaprezzi.it/schede-madri/prezzi-scheda-prodotto/asus_rog_crosshair_x670e_hero-v
Case: Fractal Design Meshify 2 XL, qty 1, €230 each (€230 total), https://www.trovaprezzi.it/prezzo_case-alimentatori_fractal_design_meshify_2_xl.aspx
Case (alternative): Lian Li O11 Dynamic EVO, qty 0, €210 each (€0 total), https://www.trovaprezzi.it/case-alimentatori/prezzi-scheda-prodotto/lian_li_o11_dynamic_evo
RAM: Corsair Vengeance DDR5-5200 64 GB (2x32 GB), qty 1, €200 each (€200 total), https://www.trovaprezzi.it/prezzo_ram_corsair_vengeance_ddr5-5200_64gb.aspx
SSD: Samsung 980 PRO 2 TB, qty 1, €185 each (€185 total), https://www.trovaprezzi.it/hard-disk/prezzi-scheda-prodotto/samsung_980_pro_m_2_nvme_w_heatsink_2_tb-v
PSU: Corsair Professional HX1500i 1500 W, qty 1, €300 each (€300 total), https://www.trovaprezzi.it/prezzo_case-alimentatori_hx1500i.aspx
CPU cooler: Noctua NH-D15 chromax.black, qty 1, €120 each (€120 total), https://www.amazon.it/dp/B07Y87YHRH/
Case fans: Noctua NF-A12x25, qty 5, €30 each (€150 total), https://www.trovaprezzi.it/prezzo_dissipatori-e-ventole_nf-a12x25.aspx
Thank you all in advance!
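Whether the HX1500i has enough headroom for two 4090s can be sanity-checked with a rough power budget; the wattages below are my own assumed worst-case figures, not measured values:

```python
# Rough worst-case power budget; all wattages are assumptions, check your own parts.
parts_w = {
    "2x RTX 4090 (450 W board power each)": 2 * 450,
    "Ryzen 9 7950X (stock PPT)": 230,
    "Motherboard, RAM, SSD, fans (estimate)": 100,
}
total = sum(parts_w.values())
psu = 1500
print(f"{total} W estimated draw on a {psu} W PSU -> {total / psu:.0%} load")
```

Transient spikes on high-end GPUs can exceed board power, which is why a margin like this (well under 100% sustained load) is usually what people aim for.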
-
I am planning a new build for deep learning / AI workloads. I need a stable system with >80 GB of memory. After some research I found these 3 options:
1. 2 x 48 GB DDR5-6000, CL30
2. 4 x 32 GB DDR5-4800, CL30 (can this go any faster, like 5200 / 5600?)
3. 4 x 32 GB DDR4-3600, CL16
Which of the above memory options would offer me the highest bandwidth while being stable at the same time? I want to set the XMP / EXPO profile once and let it be. No OC, no tinkering with timings. It would be best if someone could tell me the effective memory bandwidth for all 3 options above. Does having 4 sticks (quad channel?) mean double the effective bandwidth? Does Intel have any advantage over AMD? I am looking at the 7800X3D, 5800X3D or 13600K. I listed the 3rd option (DDR4) above because I am already running a 5600X + ROG Strix X570-E mobo + 2 x 16 GB 3000 MHz Crucial, so upgrading to a 5800X3D and new memory is the cheapest option for me. Also, the RAM timings are considerably better with DDR4 vs DDR5, and I won't be getting >6000 DDR5 speeds anyway with such a high capacity requirement. But if the bandwidth gains are high, then I am willing to spend on a new AM5 / Intel platform.
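For reference, theoretical peak bandwidth can be estimated as transfer rate x bus width x channel count. A minimal sketch (note the assumption baked in here: 4 DIMMs on a mainstream AM4/AM5/LGA1700 board still runs in dual channel, not quad):

```python
def peak_bandwidth_gbs(mts, channels, bus_bytes=8):
    """Theoretical peak = transfers/s * 64-bit bus width * channel count."""
    return mts * 1e6 * bus_bytes * channels / 1e9

# All three options are dual channel on mainstream platforms.
options = {
    "2 x 48 GB DDR5-6000": peak_bandwidth_gbs(6000, 2),
    "4 x 32 GB DDR5-4800": peak_bandwidth_gbs(4800, 2),
    "4 x 32 GB DDR4-3600": peak_bandwidth_gbs(3600, 2),
}
for name, bw in options.items():
    print(f"{name}: {bw:.1f} GB/s theoretical peak")
```

These are ceilings, not effective numbers; real-world effective bandwidth also depends on timings and the memory controller, which is exactly what the question above asks about.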
-
Budget (including currency): 6-7k €
Country: Europe
Games, programs or workloads that it will be used for: Deep learning
Other details (existing parts lists, whether any peripherals are needed, what you're upgrading from, when you're going to buy, what resolution and refresh rate you want to play at, etc):
Hello, I would like to build a new setup including 2 x 4090 GPUs. I started looking at some possible configurations, but I'm not sure whether both GPUs would fit and be cooled adequately. Here is what I have in mind for now:
Power Supply: https://pcpartpicker.com/product/sdRgXL/be-quiet-dark-power-pro-12-1500-w-80-titanium-certified-fully-modular-atx-power-supply-bn647
Case: https://pcpartpicker.com/product/f2hmP6/fractal-design-meshify-2-xl-atx-full-tower-case-fd-c-mes2x-02
Video Card: https://pcpartpicker.com/product/TgkWGX/gigabyte-gaming-oc-geforce-rtx-4090-24-gb-video-card-gv-n4090gaming-oc-24gd
Memory: https://pcpartpicker.com/product/tdCZxr/corsair-vengeance-64-gb-2-x-32-gb-ddr5-6600-cl32-memory-cmk64gx5m2b6600c32
Motherboard: https://pcpartpicker.com/product/tPYmP6/asus-proart-z690-creator-wifi-atx-lga1700-motherboard-proart-z690-creator-wifi
CPU: https://pcpartpicker.com/product/CgkWGX/intel-core-i9-13900kf-3-ghz-24-core-processor-bx8071513900kf
CPU Cooler: https://pcpartpicker.com/product/84MTwP/noctua-nh-d15-chromaxblack-8252-cfm-cpu-cooler-nh-d15-chromaxblack
Also, I'm not sure which SSD or NVMe drive to choose, if you have any ideas?
- 60 replies
-
- gpu
- deep learning
-
(and 1 more)
Tagged with:
-
Hi everybody, I have a desktop (mainly for 3D rendering and deep learning) with the following specs:
Processor: AMD 5950X
RAM: G.SKILL TridentZ RGB Series 128GB (4 x 32GB) 288-pin DDR4 4000
Motherboard: MSI X570 ACE
GPU: 2 x 2080 Ti Founders Edition (did not set up SLI since I don't need it)
Storage: 1 x SAMSUNG 980 PRO M.2 2280 2TB, 2 x Samsung 860 EVO SSD 4TB
Power: 1300W Seasonic Gold
Case: LIAN LI O11 Dynamic XL ROG
Now I got one more 3080 Ti, and I was wondering whether it is possible to add it to the current setup. The specs of the mobo say:
1x PCIe 4.0/3.0 x16 slot (PCI_E5, supports x4 mode)
2x PCIe 4.0/3.0 x1 slots
2x PCIe 4.0/3.0 x16 slots (PCI_E1, PCI_E3)
3rd Gen AMD Ryzen™ supports PCIe 4.0 x16/x0, x8/x8 modes
2nd Gen AMD Ryzen™ supports PCIe 3.0 x16/x0, x8/x8 modes
Ryzen™ 4000 G-Series supports PCIe 3.0 x16/x0, x8/x8 modes
Ryzen™ with Radeon™ Vega Graphics and 2nd Gen AMD Ryzen™ with Radeon™ Graphics support PCIe 3.0 x8 mode
If I understand it correctly, I could put the 3080 Ti in the top PCIe slot (which should run at x16 since SLI is not enabled, but x8 when using CUDA?), and the 2080 Tis in one x8 slot and the x4 slot. Will this work? (If not, I could sell one 2080 Ti.)
- 11 replies
-
Budget (including currency): 3500 €
Country: France
Games, programs or workloads that it will be used for: Deep learning, gaming, maybe rendering at some point, and mining when it is cold
Other details: Hi, I've already done some work and I got this for now: https://www.ldlc.com/s/DLMMEG
I am trying to build a tower that will last me a while and that I will use for deep learning (and for gaming, but that's a plus). My requirements for the moment are (feel free to tell me if you think I am wrong):
- Room for a second RTX 3090 at some point in the future (I really need the 48GB of VRAM for future work)
- SLI-compatible motherboard (for the RTX 3090)
- AMD processor
- Easily upgradable
The points I don't know:
- Is 3200MHz RAM overkill?
- Do I really need an SSD? (I need a lot of space for datasets, but I could do SSD + HDD)
- Do I need to go with water cooling? (I am pretty afraid to do water cooling with such expensive stuff)
- Is the Ryzen 9 9500X overkill? (I am hoping for the CPU to last at least 5-6 years)
I have also chosen a case and a motherboard that seem nice and durable, but I would be happy to save some money on them. This will be my first PC build; I hope you can give me some hints. Thank you!!!
- 1 reply
-
- deep learning
- gaming
-
(and 1 more)
Tagged with:
-
Hi, I found a basic notebook online for writing code to implement a CNN from scratch. Could a machine learning specialist help fill in the notebook with the relevant bits and pieces? cnn.ipynb
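For anyone picking up that notebook: the core operation of a CNN is a sliding dot product. A minimal, dependency-free sketch of a "valid" 2D convolution (strictly, cross-correlation, which is what deep learning frameworks actually compute) might look like this:

```python
def conv2d(image, kernel):
    """Naive 'valid' 2D cross-correlation: slide the kernel over the image
    and take the elementwise product-sum at each position."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# 3x3 input, 2x2 kernel -> 2x2 output
print(conv2d([[1, 2, 3], [4, 5, 6], [7, 8, 9]], [[1, 0], [0, 1]]))
```

A real notebook would vectorize this with NumPy and add padding, stride, and learned kernels, but the loop above is the operation everything else builds on.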
- 3 replies
-
- cnn
- deeplearning
-
(and 3 more)
Tagged with:
-
Tired of heat, noise and don't know how to explain the power bill to your waifu? Folding in the cloud could be the solution. During the COVID-19 folding event I looked into cloud GPU servers and tested Hostkey alongside Cherry Servers and the big boy (and expensive af tbh) Google Cloud. Contact via email was uncomplicated, and I introduced what LTT, this forum and this event are all about. The owner of the company showed great interest and kindly agreed to lend me a Windows 10 VM with dual GTX 1080 Ti to test out. While it worked right off the bat, and having a GUI and convenient RDP access is nice, the performance on the second GPU suffered even after deleting the CPU folding slot. I suspect Windows virtualization was the bottleneck in this case. After switching to a dedicated quad GTX 1080 + beefy Xeon Ubuntu server and some configuring, all issues were gone: performance as expected, even with an active CPU slot. Don't be afraid of the terminal, it's not that complicated. What's great about Hostkey is the availability of GTX servers at reasonable prices, and they explicitly allow folding and mining workloads. RTX servers (Turing) exist too, but you need to watch closely as they are gone FAST. You can pay per month / 3 months / 6 months or annually; longer contracts give you discounts. Why fold in the cloud? Zero noise or heat in your room and high flexibility. Say you want to boost COVID-19 research (and your PPD) for a couple of weeks but don't have a rig with a powerful GPU? Rent a GPU server for a month. You can also use GPUs in the cloud for rendering or mining. As stated, folding on Hostkey is legal and even endorsed by the company. (Some companies will ban you if you actually run such heavy workloads.) They were already running FAH on some of their own servers when I contacted them.
<referral link removed by moderators>
Hostkey GPU servers: https://www.hostkey.com/gpu-servers#/
10% promo code (apply at checkout): UDFN4967UJ
IMO, the 4x GTX 1080 server for 270 € / month is the best deal currently available (0.375 € / hour) as of 16 April 2020, but there may also be RTX 2080 Ti servers available.
Screenshots and configuration guide:
Request Ubuntu 18.04 LTS. If you don't have an SSH client yet, install https://www.bitvise.com/ssh-client-download
Use the SSH client and your credentials (host, username and password) to log into the server, then run:
sudo apt install nvidia-driver-435
sudo apt install ocl-icd-opencl-dev
sudo reboot
wget https://download.foldingathome.org/releases/public/release/fahclient/debian-stable-64bit/v7.5/fahclient_7.5.1_amd64.deb
sudo dpkg -i --force-depends fahclient_7.5.1_amd64.deb
sudo nano /etc/fahclient/config.xml
Edit the config file as follows:
<config>
  <!-- Client Control -->
  <fold-anon v='false'/>
  <!-- Folding Slot Configuration -->
  <gpu v='true'/>
  <client-type v='advanced'/>
  <!-- Slot Control -->
  <power v='full'/>
  <!-- Work Unit Control -->
  <next-unit-percentage v='90'/>
  <!-- User Information -->
  <passkey v='yourpasskey_should_already_be_here'/>
  <team v='223518'/>
  <user v='YourUsername'/>
  <!-- Folding Slots -->
  <slot id='0' type='CPU'/>
  <slot id='1' type='GPU'/>
</config>
Save the config file under the same name and location (confirm the changes with Y), then:
sudo reboot
Allow some time for the reboot, reconnect, and run:
watch -n 0.5 nvidia-smi
to check whether GPU folding is active. You should see all GPUs listed and something like "FAH core 22" under processes for each active GPU slot. You can exit with Ctrl+C.
Other useful commands:
Show the last lines of the log file, updating in real time: tail -F /var/lib/fahclient/log.txt
Check PPD: FAHClient --send-command ppd
Restart the FAH client: sudo /etc/init.d/FAHClient restart
Have fun!
- 10 replies
-
- folding@home
- server
-
(and 4 more)
Tagged with:
-
I will be studying AI at university, and I was wondering: for machine learning purposes, can I use a consumer-grade AMD GPU instead of the Instinct ones? I want to save money because I am a poor cheapskate university student. I mean, wouldn't 4 RX Vegas be better than one Instinct card?
- 12 replies
-
- deep learning
- deeeeeeeeeeeeeeeeeeeeeep
- (and 4 more)
-
My budget is $2200 at PCPartPicker prices for the US; parts are priced higher in my country, so this build will cost $2550.
Aims:
1. The main purpose of this build is to do machine learning using TensorFlow on a GPU, so it will need CUDA and cuDNN. I'm going to train different types of networks (RNNs like LSTMs, convolutional nets, DQNs) for NLP, computer vision, object recognition, etc. I'm not fixed on a single type of learning problem yet, so I can't say that I will only be doing computer vision or only NLP or only something else. It also needs to be able to run 24/7 at 100% GPU load for a few days minimum, 1-2 weeks ideally, though I'll probably just use AWS when it's in the weeks range.
2. I also have an evolution simulation that makes use of all cores; I'm not sure if I will be able to delegate this work to a GPU.
3. In a year or two I want to buy a VR headset, so it needs to be OK for that (with RAM or GPU additions if needed), barring some crazy developments in headsets that would render my PC useless for them.
4. Be as quiet as possible for this price, at least when idle (browsing, YouTube, movies), and not too loud under load: I'm sleeping 3 meters away from it.
5. Upgradable to 2-way GPU SLI and >=64GB RAM. Are 28 PCIe lanes from the CPU enough for 2-way SLI? What about 16?
6. Support for up to 3 monitors. Nothing fancy: 23.8-inch IPS 1080p at 60 Hz for now.
Parts I've picked so far: http://pcpartpicker.com/list/TK8mxY
This is my first try at building a PC. What do you think? Is it OK for these goals? What should I change? Should I drop the 6800K and get a 6700 with a cheaper RAM and mobo (the 6700 itself being cheaper too) and spend the surplus on something else? I have no idea.
My reasoning for the parts:
Video Card ($664): MSI GeForce GTX 1080 8GB GAMING X 8G. I could get the EVGA Founders Edition as well, but it costs a little more. Also, would the MSI GTX 1080 SEA HAWK X with AIO water cooling be quieter than the one I picked?
It costs more though, and I'm not sure what's better for ML at this price. Maybe some other GPU?
CPU ($428): Intel Core i7-6800K 3.4GHz 6-Core. I don't plan to overclock anything, but I wasn't able to find a 6800 without the K, and the 6850K is too pricey for me right now. My choice was between the 6700 and the 6800K. The 6700 is cheaper, and that's great: I could have bought an SSD or more RAM. Plus it has a different socket, so I could have picked a cheaper mobo as well. But it has 16 PCIe lanes instead of the 6800K's 28; will that be enough for 2-way 1080 SLI? It also has two fewer cores and only dual-channel memory (not sure if that's significant), and it only supports 64GB of RAM, although I'm not sure I will ever upgrade beyond 64GB. Basically, I'm overpaying a lot in total for just 4 additional threads, and I'm not sure about the PCIe lanes.
CPU Cooler ($84): Noctua NH-D15 82.5 CFM. I want this build to be quiet. I'm also not sure whether this big heatsink with its fans will allow all memory slots to be occupied; how do I check? Or should I consider a Kraken X61 or H115i AIO instead? Would they be quieter? And not leak?
Motherboard ($394): Asus X99-DELUXE II ATX LGA2011-3. I wanted to go with the Asus X99-E WS, but it's from 2014 and needs a BIOS update to work with the 6800K(?). The Deluxe II has plenty of slots, lanes and everything else, so no problems there.
Memory ($165): Corsair Vengeance LPX 32GB (4 x 8GB) DDR4-2400. It's quad-channel memory, right? And 2400 is a little more than 2133. I'm not sure about CL timings and all those things at my budget.
Storage ($109): Western Digital Red 3TB 3.5" 5400RPM. Will this drive be noisy? I don't have money for an SSD just for fast OS boots right now.
Case ($95): Fractal Design Define R5 w/Window (Black) ATX Mid Tower. The Noctua and the 1080 fit in it, and it has some sound-dampening material. Should I choose it with a window or without? I like the window, but I don't want to lose sound dampening because of it. Or will there be no difference?
Power Supply ($237): Corsair AX760i 760W 80+ Platinum Certified Fully Modular. It's modular, it has an "i" at the end of its name, a similar (more powerful) one was in one of the Linus Tech Tips videos I watched after subscribing a few weeks ago, and I hope 760W is enough for 2-way SLI if I decide to add a second GPU. Right now I'm running this setup, lol, so I'm not particularly picky: a 2011 HP notebook with Arch Linux, an i3-2310M (2 cores / 4 threads @ 2.10GHz), 8GB RAM, a 500GB HDD and Intel HD 3000 graphics. It has a discrete GPU, but I'm not using it.
-
I've just spent the last few hours looking for information on neural networks and how to get one up and running. Right now, literally nothing makes sense. Is there anyone kind and knowledgeable enough that I might be able to talk to? Just to run through some basic concepts and ask for clarification. The language I'm using is Java, but this knowledge should be relatively transferable across languages.
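For the basic concepts, the smallest useful starting point is a single perceptron: a weighted sum, a threshold, and a rule that nudges the weights toward each misclassified example. A minimal sketch (in Python for brevity; the ideas carry over to Java directly), learning logical OR:

```python
def step(x):
    """Heaviside activation: fire (1) if the weighted sum is non-negative."""
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=10, lr=0.1):
    """Classic perceptron rule: for each sample, move the weights
    in the direction of (target - prediction)."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = step(w[0] * x1 + w[1] * x2 + b)
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn logical OR from its truth table.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data])
```

A "deep" network is essentially many of these units stacked in layers, with a smooth activation instead of the hard step so that errors can be propagated backwards through the layers.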
- 2 replies
-
- oh god no
- deep learning
-
(and 2 more)
Tagged with:
-
Intel is planning to ship its first Intel Nervana NNP by the end of this year. NNP stands for Neural Network Processor, and the Intel Nervana chip was specifically designed for deep learning. Code-named Lake Crest, it will be sent to selected customers first, one of them being Facebook, which actually collaborated with Intel on designing the Neural Network Processor. It will compete against Nvidia's deep learning hardware, the Volta-based Tesla cards. https://www.anandtech.com/show/11942/intel-shipping-nervana-neural-network-processor-first-silicon-before-year-end
- 11 replies
-
- deep learning
- intel
-
(and 2 more)
Tagged with:
-
NEC has recently announced the SX-Aurora TSUBASA, a PCIe-based Vector Engine card. Designed for use in engineering and science, it is now aimed at artificial intelligence and machine learning as well. The SX-Aurora TSUBASA uses CoWoS (Chip-on-Wafer-on-Substrate) packaging to connect six HBM2 modules, giving each core 150GB/s of bandwidth; with 8 cores in total, the combined bandwidth comes out to 1.2TB/s, alongside 2.45TFLOPS of performance. The card runs on Linux and needs specialized software to take advantage of it, and NEC provides its own tools to help those who want to use it. It can be deployed in a wide range of configurations, from a single card in a desktop tower up to 64 of them in a rack-mount server, where the total bandwidth reaches 76.8TB/s with 156TFLOPS of performance. http://www.nec.com/en/press/201710/global_20171025_01.html
- 12 replies
-
1. Budget & Location: $1.5K - $2.5K. Can receive parts in Austin, TX or at my residence in Mexico.
2. Aim: Deep learning, scientific number crunching. I want a top-end GPU like the Titan Xp and 64GB+ of RAM to do data science projects. I just finished waiting 17 hours while my current desktop trained a language classifier; that kind of sluggish turnaround makes development very difficult. I can rent GPUs on AWS, but in addition to costing money it is an awkward way to work, and I prefer to have my own hardware. But I need something portable, because I may be traveling back and forth between Texas and Mexico and want to take my GPU with me.
3. Monitors: don't care. Big.
4. Peripherals: I need USB 3 ports for external drives and an Ethernet port.
I viewed the marvelous overviews Linus did of the Alienware Graphics Amplifier and the Razer Core, but the external GPU approach has big downsides:
- Laptop weight: I don't want a heavy laptop.
- Rapid battery depletion: the battery has to be recharged after 45 minutes of use.
- Drivers and software: for machine learning I need Linux installed, not Windows 10 Home. Ideally my system would dual-boot Linux and Windows 10 Professional. I fear that I might plop down lots of $$$ for these proprietary systems only to discover that they are not adaptable to machine learning applications.
Linus has persuaded me that a compact computer in a DAN case might be a great alternative. However, the parts Linus put into the system he built with the DAN case are just too expensive for me, especially the Intel Xeon E5-2699 processor! I need a high-end GPU and *lots* of memory, but I don't require a top-end CPU. I am not a gamer and care nothing whatsoever about graphics; I just need to crunch huge amounts of data.
QUESTION: Where can I get guidance on building a system in a DAN case similar to the one Linus described in the video, but maybe with a better GPU (Nvidia Titan Xp) and a much less expensive CPU?
Parts Linus used:
Intel Skull Canyon NUC: https://linustechtips.com/main/topic/...
GTX 1080 on Amazon: http://geni.us/ilxNCb0
Xeon E5-2699 v4 on Amazon: http://geni.us/bpWoPYA
DAN Cases A4-SFX: http://bit.ly/2aNOyBX
Silverstone SFX-L 700W: http://geni.us/cABATzn
- 2 replies
-
- deep learning
- machine learning
-
(and 3 more)
Tagged with:
-
Hello, I'm building a little supercomputer for deep learning (AI), and I have put together this parts list: https://pcpartpicker.com/user/AhmedAdly/saved/BzKzyc May I have your advice on compatibility? I'm getting the following warning from the website: "Some Intel X99 chipset motherboards may need a BIOS update prior to using Broadwell-E CPUs. Upgrading the BIOS may require a different CPU that is supported by older BIOS revisions." Also, do you advise building it myself? I have never done it before, so are there any risks involved? Thank you very much. Best, Ahmed
-
I'm planning to build a new desktop PC mainly focused on my research in deep learning. I'm not much into gaming and only game occasionally. I'm planning to build the PC with an i7 7700 processor, a Z270 / H270 / B250 motherboard and 16GB of RAM. I still don't have a clear idea of which of those chipsets to choose either. The main thing I need to know is which GPU I should choose: GTX 1080 Ti or GTX 1080? Is it worth spending on the 1080 Ti over the 1080? I'd also be glad if someone could give an explanation of which motherboard to choose.
-
AMD has announced its Radeon Instinct accelerators for deep learning, consisting of 3 models: the MI6, MI8, and MI25. The MI6 has a compute capability of up to 5.7TFLOPS, with 224GB/s of memory bandwidth and a TDP of less than 150W. The MI8 has 8.2TFLOPS of compute, 512GB/s of memory bandwidth and a power consumption of less than 175W. For the MI25, on the other hand, AMD did not provide any numbers, except for a few notes like 2x packed math and a high-bandwidth cache and controller, with a TDP of <300W. Of these 3 accelerators, the MI6 is based on the current Polaris, the MI8 on the older Fiji, and the MI25 on the upcoming Vega. And for those who are excited to get one just to see how fast Crysis will run: don't bother, these are deep learning accelerators, not gaming cards. http://www.anandtech.com/show/10905/amd-announces-radeon-instinct-deep-learning-2017 http://videocardz.com/64677/amd-announces-first-vega-accelerator-radeon-instinct-mi25-for-deep-learning
-
At Supercomputing last year, AMD announced its Boltzmann Initiative and HIP. To my understanding, these are attempts to close the software gap between AMD and Nvidia: Boltzmann is the name of AMD's ambitious plan to overhaul its HPC stack, while HIP is a set of tools designed to port CUDA code over to a portable interface. Boltzmann has since been renamed the Radeon Open Compute Platform, or ROCm. Version 1.0 was released earlier this year in April, while version 1.3 has just been released. AMD plans to update this software stack aggressively, and now that the core fundamentals are in place, major investment is going into maths libraries such as BLAS and FFT, and frameworks such as AMBER and Caffe. The way things are looking, AMD should be in a position to fully tackle Nvidia by this time next year. (If it interests you, there's much, MUCH more detail in the source.) http://www.anandtech.com/show/10831/amd-sc16-rocm-13-released-boltzmann-realized It also seems the first fruits of this are starting to appear: it was earlier announced that Alibaba would use AMD for its cloud servers, and Google has now jumped on board as well, with AMD GPUs in its cloud-based machine learning and compute engines starting in 2017. The GPU in question is the FirePro™ S9300 X2. These deals are paramount for AMD to re-establish a presence in the Nvidia-dominated and fast-growing HPC segment, where it has previously had the GPU power but inferior software compared to Nvidia. http://www.forbes.com/sites/aarontilley/2016/11/15/google-taps-amd-for-accelerating-machine-learning-in-the-cloud/
- 7 replies
-
- amd
- deep learning
-
(and 1 more)
Tagged with:
-
Hi all, I want to install TensorFlow GPU on my PC (Windows 10, Ryzen 1700, GTX 1660 Ti), and for this I need to know the compute capability of my GPU. It should be listed here, but it seems like they forgot to list the GTX 1660 Ti. Does anyone know what its compute capability might be? Also, I tried this a couple of years ago (on a different GPU) and I think that by installing cuDNN, I broke my graphics drivers and could not game anymore until I deleted cuDNN again. Does anyone have experience with this, or can anyone confirm whether it's possible to install cuDNN while still being able to game on the same PC? Thanks a lot!!
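For context, compute capability follows the GPU architecture, and the GTX 1660 Ti is a Turing (TU116) part. A small lookup sketch over the common consumer architectures (values as published in Nvidia's CUDA documentation):

```python
# CUDA compute capability by Nvidia consumer GPU architecture.
# The GTX 1660 Ti is Turing (TU116), so it reports 7.5 like the RTX 20 series.
COMPUTE_CAPABILITY = {
    "Maxwell (GTX 900 series)": (5, 2),
    "Pascal (GTX 10 series)": (6, 1),
    "Turing (GTX 16 / RTX 20 series)": (7, 5),
    "Ampere (RTX 30 series)": (8, 6),
}

major, minor = COMPUTE_CAPABILITY["Turing (GTX 16 / RTX 20 series)"]
print(f"Compute capability: {major}.{minor}")
```

On an installed system you can also just query the card itself, e.g. with nvidia-smi or the framework's own device listing, rather than relying on a table.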
- 2 replies
-
- cuda
- gtx 1660ti
-
(and 2 more)
Tagged with:
-
Please give me a clear idea of how to compare PCI Express bandwidth with the GPU side. Is the comparison PCIe bandwidth vs. GPU memory bandwidth, or something else? And what's the impact of putting an RTX 2060 Super in a PCIe 2.0 x16 motherboard, especially for heavy tasks like deep learning? Thanks!
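They are two different links: the PCIe slot carries traffic between the GPU and the CPU/system RAM, while GPU memory bandwidth is between the GPU chip and the VRAM on the card itself. A rough sketch of per-generation PCIe link bandwidth (per-lane rates and encodings are from the PCIe specs; the GPU memory figure in the comment is for illustration):

```python
def pcie_gbs(gt_per_s, lanes, efficiency):
    """Usable one-direction link bandwidth:
    raw transfer rate * lane count * encoding efficiency, bits -> bytes."""
    return gt_per_s * lanes * efficiency / 8

gen2_x16 = pcie_gbs(5.0, 16, 8 / 10)     # PCIe 2.0: 5 GT/s/lane, 8b/10b encoding
gen3_x16 = pcie_gbs(8.0, 16, 128 / 130)  # PCIe 3.0: 8 GT/s/lane, 128b/130b encoding
print(f"PCIe 2.0 x16: {gen2_x16:.1f} GB/s, PCIe 3.0 x16: {gen3_x16:.1f} GB/s")
# An RTX 2060 Super's on-card GDDR6 bandwidth is about 448 GB/s, far above either
# link speed; the slot mainly matters for host<->GPU transfers, such as feeding
# training batches, so the hit from PCIe 2.0 depends on how transfer-heavy the job is.
```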
-
Hi guys, I am looking to purchase a new laptop to take to university. My use cases:
1. A laptop on which I can do light to moderate gaming and play AAA titles. My current laptop is 5 years old and doesn't have a GPU, so I have a hard time running any modern titles.
2. I will mostly be starting with deep learning, so I want a laptop where I can run Linux on a partition or in a VM, or use the new Linux kernel support in Windows 10. I need the laptop to get started on running neural nets, training deep learning models and doing computer vision work, as well as for prototyping, projects and Kaggle challenges. For much heavier workloads, such as programming for my master's thesis or Kaggle challenges with very large datasets, I can use the university labs or rent cloud compute.
3. I have a limited budget of 1 lakh Rs (~1500 US$), and my purchase is restricted to local availability.
4. I want to use the laptop for at least 4-5 years before I get the next one, or until I build a PC myself.
I am torn between the laptops I shortlisted below. Please help me out with the decision, and if you have any better alternatives, please suggest them.
A. HP Gaming Pavilion 15-dk0051tx
https://store.hp.com/in-en/default/laptops-tablets/hp-gaming-pavilion-15-dk0051tx-7lg82pa.html
Price: 1,05,989 Rs
Intel® Core™ i7-9750H (2.6 GHz base frequency, up to 4.5 GHz with Intel® Turbo Boost Technology, 12 MB cache, 6 cores)
Intel® HM370
12 GB DDR4-2666 SDRAM (1 x 4 GB, 1 x 8 GB)
512 GB PCIe® NVMe™ M.2 SSD + 1 TB 7200 rpm SATA
GTX 1650 4 GB GDDR5
150 W power adapter, 3-cell 52.5 Wh Li-ion battery
I was set on this one. The only things that bother me are the build quality (the screen has a single hinge in the center, and I fear it may break from rough handling down the line) and the ultraviolet backlit keyboard, which is not a deal breaker, but I don't like it. I would also have preferred a GTX 1660 card, but it is out of my budget.
B.
OMEN by HP 15-dc1093tx
https://store.hp.com/in-en/default/laptops-tablets/omen-by-hp-15-dc1093tx-7nm78pa.html
Price: 92,195 Rs
Intel® Core™ i7-9750H (2.6 GHz base frequency, up to 4.5 GHz with Intel® Turbo Boost Technology, 12 MB cache, 6 cores)
Intel® HM370
8 GB DDR4-2666 SDRAM (1 x 8 GB)
256 GB PCIe® NVMe™ M.2 SSD + 1TB 7200 rpm SATA HDD
NVIDIA® GeForce® GTX 1650 (4 GB GDDR5 dedicated)
150 W AC power adapter, 3-cell 52 Wh Li-ion prismatic
PROS: This one's cheaper by ~13,000 Rs (the price of a budget smartphone); OMEN is a much more premium brand than Pavilion; the build quality looks better; I like this backlight configuration better.
CONS: Less RAM (it can be upgraded after purchase, but I have no idea whether that's economical or whether it will affect performance); less SSD storage; no idea whether the display is 144Hz like the Pavilion's, since it is not mentioned; no 1660 either.
- 1 reply
-
- gaming
- deep learning
-
(and 3 more)
Tagged with:
-
I am building a deep learning computer and am choosing the parts. Is it possible to have a computer with an RTX 2080 Ti for under 1500?
Minimum specs:
16-32 GB RAM
Intel 8th-gen CPU
Any case
Any PSU that's sufficient is OK
RGB not needed
Quiet system
If that is not possible, would it be possible to build a PC with an RTX 2080 instead, but on a 1200 budget? Thank you
-
Hi all, I want to create a cheaper version of the LambdaLabs GPU server. I want to build one with:
Quadro RTX 8000 x 8
784-1024 GB DDR4 RAM
8TB HDD storage
2TB SSD storage
Intel Xeon Gold 6130 x 2
I want to know how to buy the power supply, the motherboard and the cooling for this beast at an efficient price in the US, with a budget of < $60,000. What do you guys think? Can I do it?
-
- deep learning
- gpu
-
(and 1 more)
Tagged with: