Search the Community
Showing results for tags 'ai'.
-
Hello there. This is my last resort, because I wasn't able to find much useful, working info on this on the web. I have a gaming laptop with the following specs: HP 15-ec2xx; CPU: AMD Ryzen 7 5800H; Graphics 1: RTX 3050 Mobile, 4 GB GDDR6; Graphics 2: integrated AMD Cezanne iGPU, 512 MB DDR4; RAM: 32 GB DDR4. My issues are: Whenever I start a game (GTA 5 or RDR), either the RTX card or the AMD card should be used, but it rarely engages the RTX card, which makes the CPU go nuts: it takes most of the game load and overheats. To make sure I wasn't making any other cooling mistakes, I cleaned the laptop and did a repaste; the temps improved quite a bit, but the GPU issue persists. The second issue is that I am not able to train any AI/ML models. I know 4 GB is a pretty low amount of VRAM for AI/ML training, but it's more than enough to train a few small models for fairly minor tasks. I use PyTorch for basic training, and it won't acknowledge the RTX GPU or the AMD GPU; it tries to build/train the model on the CPU. What I did to tackle this: I tried a multitude of driver options and also installed the developer support packages for my GPU. I tried installing Ubuntu in dual boot and running my training there, but the same issue persists. I tried other libraries to train the models, and they also failed to detect the GPUs. I tried installing every driver I found and ran all the tests, but none worked. I used nvidia-utils to confirm the GPU is present, and that worked. What should I do? What can I do? How can I determine the issue?
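A quick way to narrow this down is to check what PyTorch itself reports. This is a minimal diagnostic sketch, assuming the NVIDIA driver is installed and it is run from the same environment you train in; the most common culprit for this exact symptom is having installed the CPU-only torch wheel:

```python
# Minimal PyTorch CUDA diagnostic. Run in the environment used for training.
import torch

print(torch.__version__)          # a "+cpu" suffix means a CPU-only build
print(torch.version.cuda)         # None on CPU-only builds
print(torch.cuda.is_available())  # must be True before GPU training works
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # should report the RTX 3050
```

If is_available() prints False even though nvidia-smi sees the card, reinstalling torch from a CUDA wheel index (e.g. pip install torch --index-url https://download.pytorch.org/whl/cu121) is usually the fix on both Windows and Ubuntu; the CUDA version of the wheel has to be one the installed driver supports.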
- 2 replies
-
- graphicscard
- troubleshoot
-
(and 2 more)
Tagged with:
-
Is an RTX 3080 10GB good for AI/ML and deep learning purposes, or is an RTX 3060 12GB the better choice, apart from the price point?
- 2 replies
-
- gpu
- deep learning
-
(and 2 more)
Tagged with:
-
Summary OpenInterpreter just announced a portable, open source, language model-powered voice interface for home computers. Like OpenInterpreter in a terminal on PC, the 01 can be run either via OI's servers or 100% locally using a downloaded LLM and local host (as OpenInterpreter does currently). The 01 developer preview is live on GitHub. Quotes My thoughts This is one of the more exciting bits of AI news I've seen in a while. I love the open source ethos, and I believe it can offset some of the more worrying potential consequences of allowing corporate profiteering to be the primary driver of this endeavour. I've already been playing with a local instance of OpenInterpreter in a Windows terminal using local inference, testing it out as a code-building tool and general helper. It's slightly fiddly, tech nerdy stuff, which is fine for me, but it would never reach mass uptake in its current form. The 01 is super exciting, not only because it's like having your own personal Majel Barrett Enterprise computer interface, but because it could break through that fiddly tech nerdy barrier and make a genuinely useful open source tool which everyone needs - the natural language computer interface - accessible to huge numbers of home computer users. Sources https://www.openinterpreter.com/01 https://github.com/OpenInterpreter/01 https://twitter.com/OpenInterpreter/status/1770821439458840846
-
I'm currently in university studying Computer Science. My region allows those with bachelor's degrees in technical fields to teach in public education after a two-year post-university licensing program. My plan is to become an educator in my community, which is a bit on the rougher side but still lovely. I've lately been using Microsoft Copilot to aid in my studies. It will create large problem breakdowns for me in subjects I'm struggling with, without my needing to nag and wait for the professor or search forums across several web pages that may include contradictory or misleading information. While not perfect, it does an overall great job, and while I use it purely as a productivity aid 9 times out of 10, sometimes when tired I've caught myself slipping and just tossing the answer in (I know I'm cheating myself when I do this). This got me thinking: this is going to be a HUGE problem soon, isn't it? It's already made some splashes, but if these language models become so sophisticated that we can't tell genuine work from generated work, how will the teachers of the future be expected to teach students? In an ironic twist, the same silicon that enabled online schooling seems to be making it obsolete, due to the need to guarantee, or at least attempt to guarantee, no cheating. Students will need to be actively observed while completing ALL assignments. On the bright side, this could spell the end of homework, as there will be no point in sending assignments home with students if you can't determine what on a student's assignment is genuine or generated.
-
Summary CNET, once considered one of the major technology-focused news outlets, has been heavily criticized for falling editorial standards since its acquisition by Red Ventures in 2020. After years of AI-generated articles and other "advertiser-driven" decisions, Wikipedia's editors have had enough: they are removing CNET from their list of reliable publishers and categorizing any article published since November 2022 as unreliable. Some editors are going further and pushing to have content from any outlet owned by Red Ventures automatically marked as unreliable. https://en.wikipedia.org/wiki/Wikipedia:Reliable_sources/Perennial_sources Quotes My thoughts Enshittification claims another one. Journalism is the absolute last place AI should be deployed. Sources https://futurism.com/wikipedia-cnet-unreliable-ai
-
I wanted to kick off a discussion about an increasingly popular topic in the AI and tech community: running large language models (LLMs) on different hardware. One of the remarkable aspects of this trend is how accessible it has become, and the real game-changer is running LLMs not just on pricey GPUs but on CPUs. I'd love to see LLM benchmarks covering GPUs, CPUs, and RAM kits; basic models like Llama 2 could serve as excellent candidates for measuring generation and processing speeds across these different hardware configurations. As someone looking to upgrade, this would be a high priority for me.

1. Moving on to the details, I'd like to ask some questions about the performance of running LLMs on specific CPUs, RAM, and motherboards. Does RAM frequency play a key role in generation speed? And what about RAM timings: do they impact generation speed significantly? As far as my understanding goes, the difference between CL40 and CL32 might be minimal or negligible. And motherboard chipsets: is there any reason to get a current-generation one to avoid bandwidth issues (B760 vs. Z790, for example)? And then there is the standard holy war of Intel vs. AMD for CPU processing, but more about that later.

2. I'm also intrigued by the idea of optimizing RAM setups for LLMs. For instance, is it more beneficial to have four sticks (4x24 GB or 4x16 GB) rather than two 48 GB sticks across two channels (at the maximum frequencies available on the market)? Could those arrangements improve bandwidth for LLM processing? (See the rough bandwidth sketch below.)

3. Looking ahead, it's exciting to consider the upcoming 14th-gen Intel and 8000-series AMD CPUs. Rumors suggest these processors will feature integrated GPUs. It would be really interesting to explore how productive they are for LLM processing without requiring any additional GPU, at least for a low-budget enthusiast like me =). This could potentially be a game-changer.

I haven't found a similar thread searching for 'llm' or 'llama', nor a better place to ask these questions. I did find an opinion on Reddit that these things are for basement trolls. It's certainly not as widespread as Blender (which is also based mostly on CPUs and RAM, especially for simulations, high-poly work, etc.). Also, English is not my native language, sorry in advance.
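Since several of these questions come down to memory bandwidth, here is a rough back-of-the-envelope sketch in Python, with illustrative numbers. To a first approximation, CPU token generation streams the entire set of model weights through memory once per generated token, so peak RAM bandwidth sets a hard ceiling on tokens per second; this is why frequency and channel count dominate timings:

```python
# Back-of-the-envelope model: every generated token streams all model
# weights through RAM, so peak bandwidth caps tokens/sec. Numbers are
# illustrative assumptions, not measurements.

def peak_bandwidth_gb_s(mt_per_s: int, channels: int = 2) -> float:
    """Peak theoretical bandwidth: channels x 8 bytes per 64-bit transfer x MT/s."""
    return channels * 8 * mt_per_s / 1000

def tokens_per_sec_ceiling(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper bound on generation speed if weight streaming were the only cost."""
    return bandwidth_gb_s / model_size_gb

model_gb = 4.0  # Llama-2-7B quantized to ~4 bits is roughly 4 GB of weights

for name, speed in [("DDR4-3200", 3200), ("DDR5-6000", 6000)]:
    bw = peak_bandwidth_gb_s(speed)
    print(f"{name} dual channel: ~{bw:.0f} GB/s -> "
          f"~{tokens_per_sec_ceiling(bw, model_gb):.0f} tok/s ceiling")
```

By this estimate, timings barely move the needle, while transfer rate and channel count scale the ceiling linearly. It also suggests why four DIMMs on a dual-channel consumer board add capacity rather than bandwidth: the channel count stays at two.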
-
Billy Joel released a new song for the first time in a long time, and his music video features AI versions of himself singing the song across the three distinct eras of his career. The AI work was done by a company called Deep Voodoo and was pretty impressive, all things considered. Video Link: Article: https://people.com/billy-joel-turn-the-lights-back-on-music-video-exclusive-8583952
-
Chat with RTX Now Free to Download | NVIDIA Blog So far, it's been mostly right, though I've been able to get it to give me some wrong information and information that conflicted with previous responses. The datasets it comes with are limited, but the 'default' option at least can scour the internet. I imagine more robust datasets will come out in the future, but so far, it's quite promising. With my RTX 4090, I'll see it cause a jump in GPU utilization anywhere from 50-100%, which didn't impact gaming performance in a DLSS-enabled game (Warframe) while still being instant in its responses. Outside of the installation process, CPU and RAM don't seem to matter; the installation itself took a solid hour with sustained 100% CPU and RAM utilization. Anyone else playing with it, and with what hardware?
-
This was something I did on January 19, 2024, where I used the ElevenLabs AI audio tools to attempt to translate excerpts of the May 18, 2018 episode of The WAN Show into Canadian French and Japanese, or close approximations of it. I was curious if Linus, James, Luke, producer Dan or other LMG and/or Floatplane alumni had any thoughts on these. wan gigs.mp4 French Canadian WAN Gigs.mp4 Japanese WAN Gigs.mp4
-
- ai
- elevenlabs
-
(and 2 more)
Tagged with:
-
Budget (including currency): $1,500 Country: USA Games, programs or workloads that it will be used for: AI/ML model training, general computer science related tasks, obviously gaming Other details (existing parts lists, whether any peripherals are needed, what you're upgrading from, when you're going to buy, what resolution and refresh rate you want to play at, etc): I've never built a PC before, even though I've watched countless videos on how to, and I'm looking for something that will set me up for a while. For some context, I'm currently a second-year comp-sci student and will be specializing in Machine Learning and AI. I haven't had too many courses specifically related to those fields yet, so I'm really unsure what kind of hardware works best for them. The only PC I have right now is a Dell XPS 13 with an i5-1135G7 and 16 GB of RAM. I have no complaints about this laptop, but I haven't really used it for demanding tasks yet, and when compiling C++ code or even trying to play Minecraft, it definitely struggles. As a little side note, I am very deep in the Apple ecosystem (iPhone, iPad, Apple Watch, Apple TV, AirPods, and HomePod Mini) and I'm not opposed to getting something from Apple, with the acknowledgement that gaming performance will suffer. I have a PS5, and as much as I'd love a PC that plays games much better, that is not my biggest worry. Any advice would be much appreciated. I'm looking to buy this system at some point in the next year. Thanks!
- 4 replies
-
- ai
- machine learning
-
(and 2 more)
Tagged with:
-
Been using RTX Super Resolution and RTX Video HDR since release. So far, the technology has been working flawlessly, and the difference is noticeable when toggling the features on and off, which leads me to believe it would be worth adding to my home theater PC. Currently, playing 4K video from Amazon Prime (Invincible; so far just got to season 2) uses 10-13% of my RTX 4090. This gives me concern that the absolute lowest-end cards might struggle to maintain the highest quality settings for RTX Super Resolution while also doing RTX Video HDR, though I imagine an RTX 3050 can do it perfectly fine (at like 80-90% usage). Anyone else use these features on low-to-mid-tier RTX hardware who can observe GPU usage?
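For anyone who wants to log utilization while video plays, here is a small sketch using the NVML Python bindings (assumption: the nvidia-ml-py package is installed; polling nvidia-smi in a shell loop works just as well):

```python
# Sample GPU and VRAM-controller utilization once a second for 30 seconds.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

try:
    for _ in range(30):
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        print(f"GPU {util.gpu}% | memory controller {util.memory}%")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```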
- 12 replies
-
- super resolution
- ai
-
(and 3 more)
Tagged with:
-
Hi there LTT Forum! I'm looking for some help on the easiest way to tackle a work-related task. I work for an event venue and we're trying to build a searchable database of events over 60 years. We have scanned in (most of) the lists that were kept as records from 1959 - 1990, and then from 1990 onward, we have some form of digital / PDF, etc. for the more recent events. They are generally laid out like "XYZ Event - December 1, 2000 - Venue Space", mercifully. What would be the best way to tackle getting all of this into a database entry format? I saw the UPDF ad in WAN last night and started wondering if there was an AI / smarter way to handle this than just going through the lists and typing it out by hand. Looking for recommendations on: 1) How to tackle this issue (what software, etc.) 2) How to store this data (best database solution to share with 20+ employees). Happy to answer any questions, just let me know what you want to know! Thanks, WX
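Given how consistent that "Event - Date - Venue" layout sounds, the parsing half of this may not need AI at all once the scans are OCR'd. Here is a minimal sketch of that step, assuming the pages have already been extracted to plain-text lines; the file names and the SQLite schema are made up for illustration:

```python
# Parse "XYZ Event - December 1, 2000 - Venue Space" lines into SQLite.
import re
import sqlite3
from datetime import datetime

LINE = re.compile(
    r"^(?P<event>.+?)\s*-\s*(?P<date>[A-Za-z]+ \d{1,2}, \d{4})\s*-\s*(?P<venue>.+)$"
)

def parse_line(line: str):
    m = LINE.match(line.strip())
    if not m:
        return None  # flag for manual review instead of guessing
    date = datetime.strptime(m["date"], "%B %d, %Y").date()
    return m["event"], date.isoformat(), m["venue"]

conn = sqlite3.connect("events.db")
conn.execute("CREATE TABLE IF NOT EXISTS events (name TEXT, date TEXT, venue TEXT)")

with open("ocr_output.txt", encoding="utf-8") as f:
    for line in f:
        row = parse_line(line)
        if row:
            conn.execute("INSERT INTO events VALUES (?, ?, ?)", row)
conn.commit()
```

Anything the pattern doesn't match gets set aside for a human rather than guessed at, which matters a lot with OCR'd records. SQLite is just the simplest starting point; for 20+ employees searching concurrently, something server-based like Postgres, or a hosted option such as Airtable, would likely be the better shared home for the data.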
-
Original article from the Baltimore Banner (alleged remarks may be triggering!): https://www.thebaltimorebanner.com/education/k-12-schools/pikesville-high-principal-eric-eiswert-NT7K7N4K6RDEJNL5Z7ULTEG7VY/ How the heck do we tell what's real and what isn't when public figures at smaller scales don't have larger platforms to compare against?
-
Summary The Rabbit R1 is a new type of personal assistant that might one day replace the use of a smartphone. It runs a new kind of AI they call a Large Action Model, because it was trained on people using apps rather than being a pure language model. Quotes My thoughts This looks super promising and really cool, but it is really early in development to know whether it will be as good as they make it sound. What are everyone's thoughts on this? Sources https://www.theverge.com/2024/1/9/24030667/rabbit-r1-ai-action-model-price-release-date Update: let me preface this with the caveat that I don't know how accurate it is, but here's a report that the company sold out its first run of the Rabbit R1, 10k units, in one day: https://www.theverge.com/2024/1/10/24033498/rabbit-r1-sold-out-ces-ai
-
I have a Ryzen 5600X right now and an RTX 3070. In the future, I want to upgrade my PC to a Ryzen 8000 CPU (Zen 5, aka Granite Ridge) + RTX 5070. There's a 99.99999999% chance that the new CPUs from AMD will have an NPU, because the current Ryzen 8000G series (Zen 4) already has one. The question is: do you think NPUs will be a gimmick? For now, most apps aren't ready for this; I think they're waiting for Windows 12, because Microsoft said the next generation of Windows will be focused on AI.
-
Note: This topic is pure speculation and should be taken with a grain of salt. It's just a theory. A GAMEEE THEORY (I'm sorry). As a minor, I've always been worried about, and on alert for, the voice-cloning scams wherein a scammer calls a parent and uses a clone of that parent's child's voice to make it seem like the child has been kidnapped. (Instance of this happening) The one thing nobody really knows is how the scammer is able to clone the child's voice in the first place. For a scammer to accomplish such a task, they would have to have multiple recordings of the child's voice, and therefore be in contact with the child! I didn't have any ideas as to how a scammer would get voice recordings like this. Well, until I started getting these strange calls. For the last couple of months, I've been getting calls from numbers with my area code. When you answer, normally nobody speaks; it's just 15 or so seconds of silence until the caller on the other end hangs up. It's the same for every single call (except for a couple of rare cases where a robot on the other end said 'hello' three times back-to-back after a couple seconds of silence and then hung up). What's weirder is that every single call comes from a different number; not one of them repeats, yet they all have my area code. I've tried calling these numbers back, but every single time, no matter how soon after the original call, it goes to voicemail (and the voicemail box is never set up, might I add). I get these calls multiple times a day. Once, I did get a voicemail box, but the number the robot listed was different from the one on the caller ID. My theory is that they're calling to try to get me to speak into the mic so they can record and clone my voice. Again, this is all speculation; I don't know if any of this is true. I just thought I'd share this information in case someone else had a lead.
- 12 replies
-
With all the excitement around AI and doing stuff with AI, I think it deserves its own category on the forum.
-
SAG-AFTRA, the union responsible for protections of actors, voice actors, and radio personalities, has made a deal with Replica Studios to allow voice actors to license AI versions of their voices to speak their lines. According to the union, this "will enable Replica to engage SAG-AFTRA members under a fair, ethical agreement to safely create and license a digital replica of their voice." As of now, this has only been confirmed for interactive media, namely video games. Source: SAG-AFTRA themselves https://www.sagaftra.org/sag-aftra-and-replica-studios-introduce-groundbreaking-ai-voice-agreement-ces In my opinion, this is a mixed bag. Licensing out your voice still earns the voice actor money, yet it seems to run counter to the stance SAG-AFTRA appeared to hold on AI.
-
Summary The RTX 4090D released today; binning puts it below the RTX 3080 10GB in terms of silicon quality. What I find interesting is how cut down the AD102 GPU is while still maintaining its 384-bit bus and 24GB of VRAM. 14592 CUDA cores is a significant reduction compared to the 16384 of the normal RTX 4090. My thoughts Updated RTX 3000 vs 4000 binning scheme chart: What I'll be curious to see is, if we get an RTX 4080 Ti, whether it'll be a 320-bit 20GB card. It's quite possible it'll fall below the 14592 CUDA core count, in the 13,000-14,000 core region, potentially at the bottom of what the AD102 GPU can reach. The rest of the 4090D story isn't what interests me; it's how this affects the RTX 4080 Ti (if we ever see one), since there's a huge gap in AD102 GPUs between the RTX 5000 Ada's 12800 cores (256-bit bus) and the now second-lowest bin, the 4090D, at 14592 (384-bit bus). NVIDIA GeForce RTX 4080 Ti Specs | TechPowerUp GPU Database This is speculative, since TechPowerUp's projected die configurations have been wrong before (looking at you, RTX 4060, shown above lined out). I'm still hopeful we get an RTX 4080 Ti with a 20GB or higher memory buffer. Since Nvidia hasn't put out an equivalent die on the workstation side, and I doubt their binning scheme magically lacks a million or so dies in the 13,000-14,000 core count region, they're probably waiting for competition in the space. NVIDIA AD102 GPU Specs | TechPowerUp GPU Database Sources NVIDIA GeForce RTX 4090 D GPU Launched In China: Reduced Cores, Similar Gaming Performance For $1599 US (wccftech.com) NVIDIA launches GeForce RTX 4090D with 14592 CUDA cores, 24GB G6X memory and 425W TDP - VideoCardz.com
-
I am planning a new build for deep learning / AI workloads. I need a stable system with more than 80 GB of memory. After some research I found these three options: 1. 2x48 GB DDR5-6000 CL30. 2. 4x32 GB DDR5-4800 CL30 (can this go any faster, like 5200/5600?). 3. 4x32 GB DDR4-3600 CL16. Which of these memory options would offer the highest bandwidth while being stable at the same time? I want to set the XMP/EXPO profile once and let it be; no OC, no tinkering with timings. It would be best if someone could tell me the effective memory bandwidth for all three options (see the sketch below). Does having four sticks (quad channel?) mean double the effective bandwidth? Does Intel have any advantage over AMD? I am looking at the 7800X3D, 5800X3D, or 13600K. I listed the third option (DDR4) because I am already running a 5600X + ROG Strix X570-E motherboard + 2x16 GB 3000 MHz Crucial, so upgrading to a 5800X3D and new memory is the cheapest option for me. Also, the timings are quite a bit better with DDR4 than DDR5, and I won't be getting >6000 DDR5 speeds anyway with such a high capacity requirement. But if the bandwidth gains are large, I am willing to spend on a new AM5 / Intel platform.
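For the effective-bandwidth question, here is a quick sketch of the peak theoretical numbers. One assumption baked in: this is a standard dual-channel consumer platform, so four DIMMs still run as two channels with two DIMMs per channel, not quad channel, and therefore don't double bandwidth:

```python
# Peak theoretical bandwidth for the three candidate kits.
# Assumes a dual-channel consumer platform; real throughput is lower.

def peak_bandwidth_gb_s(mt_per_s: int, channels: int = 2) -> float:
    """channels x 8 bytes per 64-bit transfer x transfer rate (MT/s)."""
    return channels * 8 * mt_per_s / 1000

options = {
    "2 x 48 GB DDR5-6000 CL30": 6000,
    "4 x 32 GB DDR5-4800 CL30": 4800,
    "4 x 32 GB DDR4-3600 CL16": 3600,
}
for kit, speed in options.items():
    print(f"{kit}: ~{peak_bandwidth_gb_s(speed):.0f} GB/s peak")
```

Sustained bandwidth will land below these peaks, and four high-capacity DDR5 DIMMs often can't hold their rated XMP/EXPO speeds, which tends to argue for the 2x48 GB kit if set-and-forget stability is the goal.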
-
Budget (including currency): 40-60k ₽ (roughly $450-650) Country: Russia Games, programs or workloads that it will be used for: OpenAI Whisper, Stable Diffusion, some modern games (not really after graphics) Good day from the country that chose the villain path! I am a student at HSE, and I need your help! I have a system running a 3800X on an ASUS Prime B450-Plus board with an ASUS Dual OC 2070, along with 32 GB of HyperX Fury 3200 MHz memory, all powered by a Corsair 750 W PSU. I've run into a problem with the OpenAI Whisper large model, which uses up to 10 GB of VRAM. My first thought was a direct upgrade from the 2070 to a 4070; however, seeing how prices for the 2070 have plummeted on the aftermarket, I've started thinking that maybe higher-tier models would be more worthwhile when I make my next upgrade. Am I on the right track, or are there other paths? Thanks in advance! Sincerely, okfb P.S. Sorry for any misspellings or grammatical errors.
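Before any hardware change, it may be worth checking whether a smaller Whisper model is acceptable. A minimal sketch with the openai-whisper package (sizes are approximate: "large" needs about 10 GB of VRAM, "medium" roughly half that; the audio file name is a placeholder):

```python
# Sketch: run a smaller Whisper model in fp16 so it fits in 8 GB of VRAM.
import whisper

model = whisper.load_model("medium", device="cuda")   # ~5 GB in fp16
result = model.transcribe("audio.mp3", fp16=True)     # fp16 halves memory vs fp32
print(result["text"])
```

If large-model quality turns out to be non-negotiable, then the VRAM requirement stands, and a 12 GB-class card is the deciding spec more than raw compute.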
-
Summary As AI gains popularity, so does the need for compute performance, and Nvidia wants to transition to that marketplace. Quotes My thoughts I saw this coming from a mile away; ever since the 10 series they have stopped being price-competitive for normal users. And to be honest, why should they be? The money is still there without trying. Businesses will pay way more than even the consumers buying a 4090, and AI is eating GPUs faster than the cryptocurrency hype train did, this time with deep-pocketed businesses forking over the cash. Sources https://www.digitaltrends.com/computing/nvidia-said-no-longer-graphics-company/
-
Summary After a review by the Board of Directors, Sam Altman has been fired as the CEO of OpenAI. Quotes My thoughts Well, I was certainly not expecting this to happen. I'm extremely curious about the specifics, but we'll probably never know. Also, what do you guys think the impact of this will be? One could argue that it might not be significant, but sometimes the CEOs of bleeding edge companies (my way of saying companies on the bleeding edge of tech development) have strong visions, and when they go, it all comes tumbling down. OpenAI seems to have a strong structure in place, but you never know. Curious as well how the new CEO will do. This is a developing story, so I'll keep adding details and sources as they come. Sources The Verge OpenAI Blog
- 29 replies
-
- sam altman
- ai
-
(and 1 more)
Tagged with: