Search the Community
Showing results for tags 'ml'.
-
Is an RTX 3080 10GB good for AI/ML and deep learning purposes, or is the RTX 3060 12GB the better one? (Price point aside.)
- 8 replies
-
- gpu
- deep learning
-
(and 2 more)
Tagged with:
-
When: Within the next 3 months. I have work projects that will require it in roughly 4-5 months' time.
Budget (including currency): £3000 (willing to start with 1 GPU if I need to stay in this budget, as I will be able to add the second later)
Country: United Kingdom
Games, programs or workloads that it will be used for: I am a self-employed software engineer in need of a powerful new machine. In my job I do a lot of ML/AI, and I am trying to build a PC with that in mind. I've done some research and it seems that two 3090s are the best price-to-performance for the amount of VRAM I need. A lot of the software I write is heavily multithreaded, and much of the ML I do requires significant preprocessing that can only be done on the CPU. This means I need a CPU with around 12-16 cores to keep up with the GPUs. The main struggle I am having is working out the correct CPU + motherboard combos that meet these requirements and have enough PCIe lanes to support them (with at least one NVMe SSD). I also have very little idea about appropriate cooling. I do play some games (Dota 2, Factorio, Civ), but nothing too demanding.
Other details: I currently have 2 monitors, 2560 x 1440 and 1920 x 1080; nothing fancy, but it works well for my workflow. They both support HDMI and VGA inputs. The 1440p one has built-in speakers that I use. I would also like to have a Wi-Fi card, as currently I have to use one of my external USB ports for one and it gets in the way. USB-C ports are also a must. Ideally it would also look nice if possible; a case with glass would be my preference. I'm a fan of all-white or all-black builds, but aesthetics are secondary.
-
Does anyone have any info about benchmarks on ML workloads, such as inference speed of widely used models (I'm mostly interested in LLMs) for different configurations? What would CPU inference speed be on something like a Xeon W-2400? Or a Threadripper? Assuming any of those has a comparable amount of memory to a possible maxed-out M2 Ultra config. How does that compare against 2x or 4x 3090s/4090s? How does all of the above compare against a maxed-out M1/M2 Mac Studio? How will all of the above cope with loading 30 (or even 60-70) billion parameter models? How will they work with int-4/int-8/float-16 models? And different inference engines (Hugging Face Transformers / vLLM / AutoGPTQ / llama.cpp / GGML)?

Aren't M1/M2 Macs actual secret ML monsters with their huge unified memory? Taking into account the price of a 3090/4090, M1/M2 Macs could actually be a cost-effective (!) option for this task. Here are my calculations: if I get 4x 4090s, assuming each costs 1600 bucks, that's 6400, and I get 96 gigs of VRAM. And that's the GPUs alone, not even talking about the rest of the system. Inference speed might also suffer a bit if I have to run the model in distributed mode (which I'd have to do even to load Llama 2 70B in int-4, for instance). If I configure an M2 Ultra Mac with 76 GPU cores and 192 gigs of unified memory, it will cost me... 6600 dollars. For a bit more than 2 times the GPU-accessible RAM. I need benchmarks. Does anyone have any? A Mac looking more cost-effective than a PC is peak clown-world. And the funniest part: it might be a reality.
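The VRAM arithmetic above generalizes. As a back-of-envelope sanity check (my own sketch, not a benchmark: weights only, ignoring the KV cache and activation overhead that add several more GB in practice):

```python
def weight_memory_gib(params_billion: float, bits_per_weight: int) -> float:
    """Approximate weight-only memory footprint of a model in GiB.

    Ignores KV cache, activations, and framework overhead, so real
    usage will be noticeably higher.
    """
    total_bytes = params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / 2**30

# Llama-2 70B at different quantization levels:
for bits in (16, 8, 4):
    print(f"70B @ {bits}-bit: {weight_memory_gib(70, bits):.1f} GiB")
```

By this estimate a 70B model in int-4 needs roughly 33 GiB for weights alone, so it spills past a single 24 GB 3090/4090 but fits comfortably in 96 GB split across four cards, or in a 192 GB unified-memory config.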
-
I've been training my own GAN-based TTS and now diffusion-based TTS models for quite a bit (the eventual goal is to have a 'teacher' model teach a cloned voice how to sing and rap fwiw). I've seen the guys try a couple different models zero-shot to try and clone their voices - hasn't quite hit ever, so trying to fix that. Here's a super early preliminary attempt on a ~500m parameter TTS model fine-tuned with Linus' voice from the most recent WAN show. Only fine-tuned it for ~15 minutes on a single 3080, very undertrained obviously, can probably get much better with more time. Just making a thread to track progress until it sings. Novel text from a random The Verge review to test Linus' voice against: Generated Audio: linus test.wav The model is autoregressive a la GPT-2 and Tortoise. So based on speech and words it's seen before - it may choose to change the emotional tone, add pauses, different words etc based on the training data and preceding text while generating - for example it added "i.e." and uhhs and umms near the end of the clip on its own - I straight copy pasted the highlighted text above. Rate on a scale of 1-10 in its current state? --- If you're interested in TTS btw - I post some experiments on my Twitter - (Anuj Saharan (@theAnujSaharan) / Twitter).
-
Hi there! I'm looking for some eyes to help me verify a build I'm planning to execute twice: one for me, one for my SO. It's taken me months to find one GPU, but recently I got a lead on a second, and that triggered me to start planning the rest of the builds.

Aim: Light gaming, mainly e-sports/indie titles; we're not heavy AAA-title gamers. My SO is a data scientist and actively works with ML and AI data modelling with libraries like TensorFlow. I freelance locally as both videographer and editor next to a boring day job that is 90% boring Teams meetings. We both share "maker" hobbies and are novice 3D modellers.

Budget and Location: Less relevant here. Mainland Europe, so we use €. I'm hoping to stay under 3k, but I think the CPUs will be the most influential there. I can go over if need be.

PCPartPicker Part List:
CPU: AMD Ryzen 9 5900X 3.7 GHz 12-Core Processor
CPU Cooler: Corsair iCUE H115i ELITE CAPELLIX 97 CFM Liquid CPU Cooler (€163.85 @ Megekko)
Motherboard: MSI MAG B550M MORTAR Micro ATX AM4 Motherboard (€137.00 @ Azerty)
Memory: G.Skill Trident Z RGB 64 GB (2 x 32 GB) DDR4-3200 CL16 Memory (€382.95 @ Megekko)
Storage: Samsung 980 Pro 1 TB M.2-2280 NVMe Solid State Drive (€189.00 @ Paradigit)
Video Card: MSI GeForce RTX 3080 10 GB GAMING X TRIO Video Card
Case: Fractal Design Meshify 2 Compact TG Light Tint ATX Mid Tower Case (€108.95 @ Megekko)
Power Supply: Corsair HXi 850 W 80+ Platinum Certified Fully Modular ATX Power Supply (€265.00 @ Azerty)
Total: €1246.75 (prices include shipping, taxes, and discounts when available; generated by PCPartPicker 2021-04-04 23:08 CEST+0200)

Other stuff: These machines will sit on our desks in the living room of our house and will be in view. We spend 10-15 hours a day at our desks, either working from home for the day job (thanks, pandemic) or working on pet projects / freelancing.
We tried setting up an office space but quickly realized that spending the majority of our day cooped up in a converted spare bedroom, when we have an entire living room to live in, was dumb.

Why the upgrade? Currently I work on a 10-year-old 3770K + HD 7970 build... I think that says enough. Working with proxies in Adobe is a godsend; I literally would not be able to edit some of my footage without proxies, but DaVinci Resolve long since refused to even start on my machine, flatly producing a GPU error. My SO has a slightly more recent build, sporting a 6600K + GTX 1070, but the performance is nowhere near where she believes it could be on the RTX platform, and she's hoping she can rid herself of some CUDA incompatibilities between driver versions and hardware.

I think my main insecurity is the motherboard and whether I did a halfway decent job of finding that sweet spot where it'll support fancy stuff without the fluff of being too "gamer" or offering overclocking features which neither of us will use, so I'm hoping someone can point at the mobo and offer words of encouragement or advice. If there are any pointers on the rest, I'd be happy to take more advice into consideration! We're good on peripherals and storage. Thanks in advance!
- 4 replies
-
- video editing
- ai
-
(and 1 more)
Tagged with:
-
So I have an i7-10700K, RTX 2070 Super, and 32 GB of DDR4-3200 RAM in a Corsair 4000D Airflow, cooled by an iCUE H100i Capellix AIO (240mm). Luckily, Corsair released their 5000D Airflow right on time, and I still had time to return my cooler/case. Now I'm getting the new case and a 360mm variant of the same AIO, since the larger case can fit it both in front and on top. So my question is: what would be the best arrangement for me? Front-mounted AIO as intake, tubes down, with 1 exhaust on the back and 2 exhaust on top? Or do I put my AIO on top as all exhaust, another exhaust in the back (4 exhaust), and then 3 intake fans (Corsair ML fans, to be specific)? Or is there another arrangement that would be better? Having more exhaust than intake makes me wonder if this would cause an unwanted type of air pressure. I honestly have no idea.
-
Bfloat16 is a half-precision floating point format that has the same 8-bit exponent as single precision, but only 7 (plus 1 implied) bits of significand. Surprisingly, this turns out to be adequate precision for many machine learning applications, so a lot of resources are being put into making arithmetic in this format run fast. Given that, it would seem to make sense to also try to use it for graphics. Using it for RGB components during calculation, for example, would allow a much wider dynamic range of light sources to be rendered compared to just trying to calculate with 8-bit integers. At the same time, it could potentially be faster than using single-precision floating point for RGB components. Are any existing graphics rendering systems actually using it for such purposes?
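For illustration: converting float32 to bfloat16 is (modulo rounding mode) just dropping the low 16 bits, which is part of why hardware support is cheap. A minimal sketch in Python, using plain truncation rather than the round-to-nearest-even that real hardware typically uses:

```python
import struct

def f32_to_bf16_bits(x: float) -> int:
    """Truncate a float32 to bfloat16: keep the sign bit, the full
    8-bit exponent, and only the top 7 mantissa bits."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bf16_bits_to_f32(b: int) -> float:
    """Widen bfloat16 back to float32 by zero-filling the low mantissa bits."""
    (x,) = struct.unpack("<f", struct.pack("<I", (b & 0xFFFF) << 16))
    return x

# The exponent (dynamic range) survives intact; only precision is lost:
print(bf16_bits_to_f32(f32_to_bf16_bits(3.14159)))  # prints 3.140625
print(bf16_bits_to_f32(f32_to_bf16_bits(1.0e30)))   # order of magnitude preserved
```

This is exactly the trade-off the post describes for RGB: 2-3 significant digits per channel, but the full float32 exponent range for very bright or very dim light values.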
-
I am currently quite confused, as I am having a difficult time choosing between the MacBook Air M1 and the MacBook Air M2. The main problem is that the base MacBook Air M2 only comes with one 256GB NAND chip, and I don't have the money to go for the 512GB model. I am planning to use my MacBook mainly for coding purposes, including running some ML models. Although the M2 is faster than the M1, I suspect the single NAND chip would hinder performance on a memory-heavy task like an ML project, and hence I'd pay for the same, if not worse, performance. The new MacBook Air has got a new chassis and such, but that might not be enough of an incentive to pay an additional 300-ish dollars (as the MacBook Air M1 is discounted). Can anyone give any suggestions, and some graphs pointing out what the real-world difference would be when it comes to coding? I couldn't find relevant data.
-
Games, programs or workloads that it will be used for: basic games only (free ones), coding, ML. Other details: I am finalising this build. I can get it for $580. Should I change anything?
-
Hi guys, I'm in serious need of help. I had installed Magic Lantern on an 8GB card for my Canon 600D. On another 32GB card I was running the normal firmware. Recently, my cousin accidentally erased the ML install and rendered the card useless. I carried on using the 32GB card in the camera, and it worked fine with the normal firmware. I'm going to sell the camera to another person now. I read that if I formatted the ML card and inserted it back, it would work, but it doesn't. Now the camera doesn't switch on even if I use the 32GB card. Please help; I am selling this today to upgrade to another model, and I need to figure out what the problem is fast.
- 6 replies
-
- canon 600d
- ml
-
(and 3 more)
Tagged with:
-
Various cities in California have already banned the use of facial recognition by law enforcement, but the California State Senate has passed a new bill that will ban the use of this technology by local law enforcement across the state. This bill also targets other forms of biometric technology that have been abused by police. Technologies such as: This bill bans California law enforcement agencies from deploying these technologies and prevents out-of-state actors from setting them up on their behalf. The bill still has to be signed into law by Gavin Newsom, the Governor of California. If the bill is signed into law (it most likely will be), California will be the first state in the Union to ban the use of these technologies by law enforcement. The authors of the bill state their intent as follows: Source: https://www.techdirt.com/articles/20190913/12354542986/california-senate-passes-statewide-ban-facial-recognition-tech-use-law-enforcement.shtml My comments are in the spoilers below:
-
Hey, looking for the right CPU for a computer (meant for machine learning) that will use 4x RTX 2080s. Thanks for helping.
-
If you are looking to purchase Noctua fans, I am sure you are well aware of their reputation, and specifically their beloved NF-F12 fans, with good reason. You get a premium packaging and unboxing experience and, most importantly, their extremely high-quality fan. All around I really love these fans, but subjectively there is one reason I can't give them a 5-star review. And that reason is the Corsair ML series fan. I will begin with the subjectiveness of my review: to say the ML is better is not a fact, simply my opinion. However, I will lay out some facts as to why I prefer the ML over the NF, but every person can interpret facts a little differently, and respectfully I have no issues with the NF-F12. Anyways, for those of us that care about aesthetics, the chromax edition of this fan is really the only option; I honestly could never bring myself to buy the standard puke-brown version (for those of you that can look past that, I applaud thee). But for your $29 (Canadian pesos) hard-earned dollars you get a nice assortment of colour-swappable red, black, white, blue, yellow and green anti-vibration pads, along with the 4-pin PWM cable extension, all wrapped up in a nice large package. Out of the box, performance is great: these bad boys move a lot of air and have performed excellently for my use. I have three of these as intakes against the mesh front panel of my Phanteks Enthoo Pro M TG, pulling air through my four 3.5" hard drive cages. This of course restricts some airflow and causes some more fan noise (to be expected); despite this, my hard drive and internal case temps are respectable. I decided on these fans as they move a lot of air despite being very well static-pressure optimized, perfect for being mounted in front of hard drive cages. But now I reason with the value of these fans. At $29 (at time of purchase) they represent a 'premium' 120mm option.
At the time of this review the ML120 2-pack is $37.50, which subjectively is just as quiet a fan at around 800-ish RPM (my opinion) and looks pretty much the same; you just don't get colour-swappable anti-vibration pads. Blind math prices them at $18.75 each, which is over a $10 saving per fan, so if you need four fans for your system that comes to a $40 difference. Now some of you would argue that you get what you pay for, but to that I would argue, "do you really?" and "is the price premium necessary?". Let me throw some raw numbers into the equation:

Corsair ML120 2-pack (ML120 Pro series have the EXACT SAME specs):
Price: $37.50 CAD (effectively $18.75 each)
Speed: 400 - 2400 RPM
Sound level: 37 dBA @ max RPM
Static pressure: 0.2 - 4.2 mmH2O
Airflow: 75 CFM @ max RPM
Warranty: 5 years

Noctua NF-F12 PWM chromax:
Price: $29 CAD
Speed: 300 - 1500 RPM
Sound level: 22.4 dBA @ max RPM
Static pressure: 2.61 mmH2O @ max RPM
Airflow: 55 CFM @ max RPM
Warranty: 6 years

I will let the numbers argue for themselves, but to say one is better than the other is not entirely objective. However, I PERSONALLY find the ML fans quieter at the same RPM, and although I keep my systems as quiet as I can, the higher max RPM of the ML fans is something I am sure appeals to some users. To be fair, there are higher-RPM models of the NF-F12, the industrial 2000 and 3000 RPM models, but these come in at $33 each and are not available in a chromax version. So to wrap this all up: if you still want/prefer the NFs, great! You are making a good purchase and will be very happy. But having personally bought three NF-F12s and one NF-A14, I will stick to my MLs in the future. I own the 2-pack variant, Pro LEDs (in 120mm and 140mm), and the Pro RGBs. When I consider the three variables of price, performance, and aesthetics, the MLs are my favourite in all 3 categories.
That said, the NF chromax fans are a very close second, and I genuinely hope to see some new colours and possibly an RGB version in the future; I think that would get Noctua more customers. Anyways, to those interested, I can highly recommend Noctua products, and specifically the NF-F12 chromax.
-
Looking to build a 4 (or 3) GPU Nvidia RTX 2080 Ti CUDA machine learning system for PyTorch and TensorFlow. Should I choose a Ryzen or Intel CPU? Based out of Vancouver, Canada. The plan: select components and build on camera, set up off camera, run computer vision on camera, train the ML off camera, then run the trained computer vision model on camera.
-
Guys, I need help. I am planning to build a multipurpose system which I can use to train models, host as a server (because I stay away from home a lot), run multiple virtual systems, and do part-time gaming. I was looking into: Ryzen 7 2700X, ASUS ROG Hero VII, 32 GB RAM, 2x 512 GB M.2 for storage, a 2TB HDD for storing junk collected over the years, and 2x GeForce RTX 2080 Ti GAMING X TRIO. I have a bundle of questions. Should I look to team blue (by that I mean Intel)? Should I wait for Ryzen 3rd gen? Would performance drop if the two graphics cards are connected with SLI? I have a dedicated IP; would I need an extra NIC for better performance? In a hypothetical situation, under which conditions would I ever need to overclock my CPU, GPU or RAM?
-
I am setting up a deep learning system with an Nvidia RTX 2080 Founders Edition [non-Ti] on Linux (most probably Ubuntu). The purpose is obviously to train deep learning models; I don't plan to install Windows or games as of now. I am looking for advice on the best components that will work well without bottlenecking the GPU. In future I may add one more GPU, so I would prefer a motherboard with 2x 16-lane PCIe slots, even if I also add an NVMe M.2 SSD later. Below are my thoughts; please advise me if there is a better idea:
1. CPU selection: I am thinking of the Ryzen 7 2700X 8-core.
2. Motherboard: As it will depend on the CPU, assuming a Ryzen 7 2700X, which is the best motherboard? Is the Gigabyte X470 Aorus Ultra a good choice? What are the other best candidates?
3. RAM: I heard that some RAM is optimised for the Ryzen 7 2700X. Is this true? What are the best candidates here, and what factors should I consider when evaluating this?
4. Cabinet/case: What are the best candidates here? Which cases have a good airflow design? What factors should I consider?
5. Cooling: As I am not planning to overclock my CPU or GPU as of now, can I just go with the stock fans of the CPU and case? Is water-based cooling necessary at stock settings for this GPU/CPU?
-
I have trained a neural network in Keras, but when running an sklearn confusion matrix on it after training, I found that it always predicts 0:
[[32849     0]
 [ 7215     0]]
What have I done wrong? Code here
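A confusion matrix with an empty second column usually means the model has learned to predict the majority class: with ~82% of samples in class 0, always answering 0 already scores ~82% accuracy. One common fix is to weight the loss inversely to class frequency; the sketch below assumes the standard Keras `fit()` API (the output activation and decision threshold are also worth checking):

```python
# Class counts taken from the confusion matrix row sums in the post.
counts = {0: 32849, 1: 7215}
total = sum(counts.values())

# Weight each class inversely to its frequency so the rare class
# contributes as much to the total loss as the common one.
class_weight = {c: total / (len(counts) * n) for c, n in counts.items()}
print(class_weight)  # class 1 gets roughly 4.5x the weight of class 0

# Then pass it to training, e.g.:
# model.fit(X_train, y_train, class_weight=class_weight, epochs=10)
```

`sklearn.utils.class_weight.compute_class_weight("balanced", ...)` computes the same weights directly from the label array, if you prefer not to hand-roll it.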
-
- ai
- machinelearning
-
(and 4 more)
Tagged with:
-
OK, so I am putting the final touches on the new build I'm going to be doing shortly. The theme is going to be a black/red ROG-style build. I want some kind of fans where I can change the look to a red colour, but not RGB, as I don't want RGB. I like the look of the Corsair AF/SP fans, but the new ML fans have better specs from what I've seen and still allow slight colour modification. My question is: which one is better, and are they quiet? The PC will be on my desk about 2 feet away from my ears. If you have other suggestions, please share opinions.
-
With the old SP fans you could hold the fan blade and pull it out of its housing to make cleaning easier. Can this be done with the MLs? I would try, but I don't want to damage them.
-
Corsair ML fans' magnetic levitation technology and custom-engineered rotors provide unrivalled performance as well as low noise. They deliver both high static pressure and airflow; check them out at the link below! Corsair ML Fans Product Page Link http://bit.ly/2bYSh5B
-
Hey guys, some news! Corsair has released their new ML series fans, which feature replaceable coloured corners and full PWM control. These fans will be available in red, blue and white LED versions, as well as an all-black or a grey-and-black design. Corsair's new ML Pro fans use magnetic levitation (ML) technology to minimise motor noise and provide both great static pressure and high airflow with a lower noise output, effectively combining their previous AF and SP series fans into a single quieter product. These new fans are PWM-controlled to speeds of up to 2,000 RPM, allowing them to deliver high performance and low noise levels. Corsair's ML fans will also be available in 120mm and 140mm models and feature colour-coordinated corners on the LED models. The replaceable corners will be available to purchase separately, though the LED versions come with matching coloured corners. Right now these fans are available to order at select retailers for £18.95 for a single 120mm LED fan and £21.95 for a single 140mm LED fan. Dual packs of the plain non-LED 120mm and 140mm fans are available for £21.95 and £24.95 respectively. ML = Magnetic Levitation, hehe. REQUEST TO LMG/LTT FOR A VIDEO! ML fans vs SP and AF; a full case with two in front, one in back, 2 on top, and also a push-pull config; sound level and temp tests, please! Is this even worth it, and how much will we gain? Thanks! Source: http://www.overclock3d.net/articles/cases_cooling/corsair_release_their_new_ml_series_fans/1
- 76 replies
-
- ml series fans
- fan
- (and 4 more)
-
Summary
OpenAI released the public beta of their new chat tool powered by their GPT technology. GPT is a series of increasingly impressive deep learning models which, since GPT-3, are closed source. At their simplest, they try to continue a phrase that's given to them, according to which words are statistically more likely to come next given the words before them. Embedded inside the parameters of the model is the statistical correlation between words, and, as the model gets bigger and bigger, with a wider and wider "memory", trained on more and more data, this statistical model can represent increasingly complex knowledge. Luke and Linus tried it on stage during the WAN Show, with impressive results. The model is always very confident, even when it's inventing facts out of thin air. ChatGPT is now banned from Stack Overflow: people have been using it to answer questions en masse without checking whether the answers are correct.
Quotes
My thoughts
The model is mighty impressive. Very fast to respond. It's confident both when it's right and when it's wrong, and the chat/thread functionality allows you to refine the output incrementally. One of the conversations that I tried: I would have very much liked to have such a useful chat bot when I was at university. I'm doubtful it is very accurate on ambiguous/niche tasks, but on common tasks that are likely to be covered by a lot of text in the training data, the performance is nothing short of incredible.
Sources
News: https://openai.com/blog/chatgpt/ https://meta.stackoverflow.com/questions/421831/temporary-policy-chatgpt-is-banned
Talk with the actual bot: https://chat.openai.com/chat
Ars Technica article: https://arstechnica.com/information-technology/2022/12/openai-invites-everyone-to-test-new-ai-powered-chatbot-with-amusing-results/
Yannic (a creator who covers ML news)
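The "continue the phrase with statistically likely words" idea can be sketched with a toy bigram model. This is a drastic simplification of what GPT actually does (no neural network, a one-word context instead of thousands of tokens), but the core loop is the same: learn continuation statistics from text, then pick a likely next word.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count how often each word follows each other word in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Greedy decoding: pick the statistically most frequent continuation."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # prints "cat": it follows "the" most often here
```

Note that the model has no notion of truth, only of frequency, which is exactly why ChatGPT stays confident even while inventing facts: a statistically plausible continuation need not be a correct one.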
- 71 replies
-
- ml
- machinelearning
-
(and 3 more)
Tagged with: