Search the Community
Showing results for tags 'artificial intelligence'.
-
So I have to build an HPC cluster with a 10-gigabit network, but the roadblock is that all the motherboards of the PCs that will be in the network only support 1-gigabit Ethernet. I could install 10G NICs, but all the PCIe slots are filled with GPUs (since the GPUs are chonky bois, there is no space on the mobo to fit a NIC). Can anyone suggest how to solve this problem without sacrificing GPU performance? The nodes are being used for machine learning training. Details on the nodes: node1 -> Gigabyte X299 WU8 mobo with 4x 2080 Ti; node2 -> ASRock X399 Taichi mobo with 4x 2080S; node3 -> Aorus TRX40 mobo with 3x 3090.
- 12 replies
-
- 10g network
- hpc
-
(and 2 more)
Tagged with:
-
Hello! I am an interior designer & CG creator (I use Blender & Houdini a lot). Here's my actual build (important for later): CPU: AMD 3950X; CPU cooler: Dark Rock Pro 4; Mobo: X570 TUF Gaming (no WiFi); GPU: RTX 3090 Aorus Master; RAM: 64 GB (2x32) Corsair Vengeance LPX; Storage: 2 TB NVMe (system) + 2 TB HDD + 4 TB HDD + 8 TB HDD (3 hard drives in total); PSU: Cooler Master MWE 750 W (not modular). Let's call this "My PC". Here's the story: I need to build a new PC dedicated to AI-generated visuals (with tools like Stable Diffusion) run locally. Let's call this the "AI PC". I also want to buy an RTX 4090 MSI Suprim X (for Blender and Houdini work) for My PC. This is what I have in mind: remove the RTX 3090 and the PSU from My PC, then buy an RTX 4090 + 1000 W PSU and upgrade My PC. Now I have a spare RTX 3090 and 750 W PSU that I want to use for the AI PC. The parts that are missing: CPU + motherboard + RAM (I will use the spare GPU + PSU). Here is my question: what is the most efficient way to build this PC without losing performance? Can you please help me select the missing parts? This AI PC will only be used to generate AI visuals with Stable Diffusion and VQGAN+CLIP locally. Will a cheap CPU like the Intel Core i3-12100F bottleneck the 3090 for AI visuals? Is 8 GB of DDR4 RAM enough? Is the Gigabyte H610M S2H motherboard enough? (I think it is.) All my research about bottlenecks is gaming-related, and since I'm not going to game on it, I want to be sure about what to buy for this AI PC. I have a limited budget, which is why it needs to be as cheap as possible (without neglecting performance). This is a pretty long post; thanks to the people who read it all!
-
Yesterday I watched the whole Terminator series, and according to the series, an A.I. named Skynet takes over the world by upgrading and making changes to itself up to a point where we humans are not able to stop it. I did a little bit of research and found this ( https://futureoflife.org/ai-open-letter/ ), a letter signed by thousands of people including Stephen Hawking, Elon Musk, Bill Gates, Steve Wozniak and many more. In this letter they warn us about how artificial intelligence can ruin our future. So, my question is: what do you think about the future of A.I. and us?
- 9 replies
-
- a.i.
- artificial intelligence
-
(and 3 more)
Tagged with:
-
http://www.zdnet.com/article/google-is-bringing-ai-to-your-raspberry-pi/ I just took the survey, and I don't get any impression of what tools they will bring to the R-Pi. It was pretty simple; no email address was required to participate. The questions were how old you are, what area your projects are in, and how frequently you work on your projects. I'm sure, given the processing power of the R-Pi, we'll see some sort of web API, which wouldn't really need to be specific to the R-Pi. It will be interesting to watch what they come up with, so if you have maker projects, take the survey.
-
Source: NVIDIA via CNET. I can't upload the actual paper because of LTT's 20 MB attachment limit, so just click on the NVIDIA link above to read the actual paper, which is 28.7 MB. I think the likes of TMZ or other famous celebrity gossip websites would love this. From the paper: Just so you know, those pictures above aren't pictures of real people; they're computer generated. Although in Figure 5, one of the pictures kinda looks like Beyoncé. The researchers concluded that it still needs improvement. To be honest, I only understood parts of it, but from the looks of it, it could be used by the likes of Apple to train Face ID in their neural networks to spot impostors, or fake news websites could use it to create mass hysteria.
- 19 replies
-
- hollywood
- neural networks
- (and 3 more)
-
Source: PLOS One via Phys.org journal.pone.0185123.pdf Nasir, M., Baucom, B. R., Georgiou, P., & Narayanan, S. (2017, September 21). Predicting couple therapy outcomes based on speech acoustic features. (I. McLoughin, Ed.) PLOS One, 23. doi: https://doi.org/10.1371/journal.pone.0185123 I think Taylor Swift needs this, as she makes songs based on the long list of her ex-boyfriends, and maybe she'll stop playing the victim. I can see two things happening if this AI takes off: rising divorce rates because an AI says a couple is incompatible, or marriage counselors out of their jobs because couples put their trust in an AI, kind of like how a lot of people self-diagnose and treat their diseases on the internet. I am not a relationship or dating expert, and I have no plans of becoming like Taylor Swift, who dated at least a dozen people. But therapists could use this as a tool during couples counseling. I can see app developers like Tinder, OkCupid and others jumping in to make self-help apps: place the phone in the middle of the table while talking, and the app will analyze whether the other person is honest or truly in love. But I'm not yet convinced that analyzing the pitch and tone of someone's voice while talking to their partner is the best way to determine whether a marriage is doomed to fail or still worth saving. I still think that a relationship, especially a marriage, is based on communication and trust. I think a lot of relationships fail either because the other person is a philandering piece of shit, or both people involved didn't explore and test the waters enough to determine if they're actually compatible, or one or both people in the relationship change and one of them is not a fan of it.
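For context on what "speech acoustic features" means here: the most basic one is fundamental frequency (pitch). As a rough illustration only (this is not the paper's actual feature pipeline, and the 220 Hz tone and sample rate are made up for the demo), here is a minimal sketch of estimating pitch from a waveform by autocorrelation:

```python
import math

def estimate_pitch(samples, sample_rate, fmin=50.0, fmax=400.0):
    """Estimate fundamental frequency (Hz) by picking the lag
    where the signal best correlates with a shifted copy of itself."""
    lag_min = int(sample_rate / fmax)  # shortest period considered
    lag_max = int(sample_rate / fmin)  # longest period considered
    best_lag, best_corr = lag_min, float("-inf")
    for lag in range(lag_min, lag_max + 1):
        # Correlation of the signal with itself shifted by `lag` samples.
        corr = sum(samples[i] * samples[i + lag]
                   for i in range(len(samples) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag

# Demo: a pure 220 Hz tone sampled at 8 kHz.
sr = 8000
tone = [math.sin(2 * math.pi * 220 * n / sr) for n in range(2048)]
print(estimate_pitch(tone, sr))  # close to 220
```

The real study tracks how features like this vary over a whole therapy session for both partners, which is far beyond this toy, but it shows the kind of signal the model starts from.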
- 23 replies
-
- artificial intelligence
- psychology
-
(and 2 more)
Tagged with:
-
Sources: Cornell University Library, Science Mag, and Threatpost. The article, however, didn't mention password managers, so I guess it's safe to use a reliable password manager with two-factor authentication. I have a feeling that as AI, machine learning and neural engines become more powerful, we might see much more serious cyberattacks. At the moment, it predicts which passwords are the easiest to guess, to give companies a chance to change their weak passwords into more secure ones. But as far as I'm concerned, most websites don't store passwords as plain text like "I<3myhotboss"; they store a hash like "eed4b508e6f5acda3178c880bc490546", and I think there are already online databases of precomputed password hashes that hackers use to brute force. But then, I can see this being used by legit password managers: they could notify the user if the password they're using is easy to guess or has been used somewhere else, so the user can change to a more secure one. So I'm all for this, and I hope it gets implemented in current password managers.
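To illustrate the hashing the post describes: a site stores a digest of the password rather than the text, and because an unsalted hash of the same input is always identical, leaked hash databases can reverse common passwords by lookup. A minimal sketch with Python's standard library (the passwords and the iteration count are made-up examples; real sites should use a salted, deliberately slow scheme, as the second function shows with PBKDF2):

```python
import hashlib
import os

def md5_hex(password):
    """Unsalted MD5, shown only to illustrate why it's weak:
    identical passwords always hash identically, so precomputed
    lookup tables can reverse common ones."""
    return hashlib.md5(password.encode("utf-8")).hexdigest()

def salted_hash(password, salt=None):
    """PBKDF2-HMAC-SHA256 with a random salt: the same password
    yields a different digest per user, defeating lookup tables."""
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode("utf-8"), salt, 100_000)
    return salt, digest

print(md5_hex("hunter2"))        # same 32-char string every run
s1, d1 = salted_hash("hunter2")
s2, d2 = salted_hash("hunter2")
print(d1 != d2)                  # different salts -> different digests
```

A password manager doing the "is this password weak or reused" check the post imagines would compare hashes like these against a breach corpus rather than storing anything in plain text.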
-
Hi everyone, I'm the founder of Good AI Lab, and we have just launched our new product, Cluster One. It's a very big project that heavily depends on a community of people being involved, and I would love to get your feedback on it. At Cluster One we are trying to help advance science by building the world's largest AI supercomputer. We understand how much computing power is wasted every day (around 10 billion hours!), and we feel that with our expertise, if we all join together, we could really make a difference in advancing scientific research. The product launched just this week, so I would love your feedback on the site: does everything make sense, would it be something you would want to try, and if not, what would stop you?
-
Hello there. My name is Josh. I am going to be starting a new project. The project is as follows: "I would like to develop a self-learning artificial intelligence which I can remotely teach things by speaking to it. Not something where I have to manually code responses, but something that can teach itself things." I hope this is possible, and if so, I would really appreciate someone pointing me to where I need to start. I currently can't code, so I would like to know what language I would need. Yes, I know this seems like an impossible task to some eyes, but I am dedicated to getting this working. I am a very lonely person who is socially awkward and can't talk to real people. Maybe an AI can relieve my loneliness. Please help. -Josh. If any of you know the series "Person of Interest", this idea may have grown out of "The Machine", but not something that intelligent.
- 2 replies
-
- artificial intelligence
- ai
-
(and 4 more)
Tagged with:
-
Sources: New York Times, Author's Note (via Google Docs), OSF (pre-print) wang_kosinski.pdf Comments could get nasty here. I need your moderating powers @iamdarkyoshi for anyone posting comments against the CS (political/religious rants) or derailing the topic. It closely resembles a topic I posted before about an AI predicting the outcome of a relationship. I won't lie that it got me concerned for a bit, but reading the actual paper and the FAQ (Google Docs) the authors provided made me think it's indeed a way to warn people about the repercussions of artificial intelligence and machine learning. It reminds me of when Elon Musk said that Mark Zuckerberg doesn't know the dangers of AI if left unregulated. I'm with Elon Musk on this one. On the lighter side of things, this technology could be used by dating apps targeted at gay individuals, but then, I'd rather see the person face to face than let an AI do it for me. Looking at the comments of other experts, it's obvious that the algorithm is very limited in its sample size and subjects, since they only used white gay men and women. The thesis authors said in their Google document that: Just imagine you're a tourist in a country and a facial scanner at the airport identifies someone as gay or lesbian; they can immediately put you in a database. Just imagine oppressive countries with staunch religious doctrines using this to identify closeted gay people and have them thrown from the tops of buildings, or simply shamed by society into committing suicide. Or better yet, just imagine North Korea developing an AI that determines whether someone has doubts about the regime or is planning to defect to South Korea. Knowledge is power, and just like a knife that can be used to stab someone or to cut a delicious medium-rare steak, this algorithm is a double-edged sword. At the moment, it's not ready for prime time as it has limited sample diversity, and the authors admit that.
In fact, as stated in their Google Doc, they want to be wrong, and they are terrified of the results. More refinement is definitely needed, and just like any scientific finding, it needs to be replicated. At the moment, everything humanity knows about the potential and risks of AI and machine learning is just the tip of the iceberg. So there's no need to be concerned or alarmed just yet, but we also shouldn't dismiss a scientific finding just because we disagree with it or it could pose harm to us; in fact, I would like this paper to be published soon in the Journal of Personality and Social Psychology. I'm not a Psychology major, but I encourage people with that degree to replicate this thesis with much more diversified subjects. Also, it should be kept in mind that "peer review" is not all it's cracked up to be as a method of validating or dismissing a theory. Peer review is important in academia, but most of the time it's just validating research methodology, checking grammar, and looking for suspicious or interpolated/forged results. Peer review doesn't determine whether a thesis' conclusion is right or wrong. In fact, there are a lot of bogus theses that are "peer-reviewed". The only way to determine whether a thesis' conclusion is right or wrong is to replicate it and check whether the results match up. (See NCBI, Nature, CBC, WSJ.) Again, I highly encourage everyone reading this thread to read this Google Doc, as it clarifies a lot of concerns about privacy, human rights, etc. https://docs.google.com/document/d/11oGZ1Ke3wK9E3BtOFfGfUQuuaSMR8AO2WfWH3aVke6U/edit?usp=sharing
- 64 replies
-
- neural networks
- artificial intelligence
- (and 3 more)
-
Hi, I currently have a GTX 1070 Founders Edition. It works great and overclocks higher than 2 GHz with temps below the 80 °C mark. But I am thinking of updating my setup. These are my current specs: Motherboard: Gigabyte Gaming 7 Z170; CPU: i7 6700K @ 4.6 GHz; CPU cooling: Cooler Master 212 Evo; 7 120 mm case fans around the case (3 front, one bottom, 2 top, one behind); PSU: EVGA Bronze 750 W; Memory: 4x8 GB DDR4; Storage (I know it's overkill...): 120 GB BPX NVMe SSD, 500 GB WD M.2 SSD (not NVMe), 2x 240 GB Kingston A400 SSD, 3 TB 7200 RPM HDD, 1 TB 7200 RPM HDD, 500 GB 7200 RPM HDD; Monitor: 2x 1080p 60 Hz Dell. While I mostly use my PC for gaming, I also use it for data analysis, deep learning and development. That is why I also plan to upgrade the CPU to the 1st-gen 16-core Threadripper (I know this means a new motherboard and cooler). So, having this in mind, these are my questions: Should I upgrade to a newer GPU, or should I buy another GTX 1070 and use SLI? Should I do the CPU or the GPU upgrade first? Would my PSU hold up for an SLI solution WITH the TR (180 W)? Why does Linus Sebastian always wear socks with sandals? I know it's a lot to answer and consider, thanks.
-
https://www.cnet.com/news/meet-neon-the-artificial-human-startup-funded-by-samsung/ https://www.neon.life Neon is developed by Samsung Technology and Advanced Research Labs (STAR Labs) and is funded by Samsung. Very interesting. If what they say is true, then this could open a whole new chapter in the history of artificial intelligence. So far a lot of the information is hidden, but it's still intriguing nonetheless.
- 9 replies
-
- samsung
- samsung neon
-
(and 4 more)
Tagged with:
-
Hi. Thanks to a few recommendations and some research from different places (especially here (thanks!)), I managed to squeeze the $1400 budget and get a much better build from it. Keep in mind that I intend to upgrade this to 2x 1070 Ti within the next year and a half, when trying to figure out why I went with this motherboard and PSU. Any recommendations/comments are welcome. What modifications would you recommend if you had an additional $100 to spare on this build? I will wait for the Ryzen 3000 series to come out before buying the CPU. PCPartPicker part list / price breakdown by merchant:
CPU: AMD Ryzen 5 2600 3.4 GHz 6-Core Processor, $164.89 @ OutletPC
Thermal Compound: Arctic Silver 5 High-Density Polysynthetic Silver 3.5 g Thermal Paste, $6.58 @ OutletPC
Motherboard: Gigabyte X470 AORUS ULTRA GAMING ATX AM4, $143.88 @ OutletPC
Memory: G.Skill Ripjaws V Series 32 GB (2x16 GB) DDR4-3200, $234.99 @ Newegg
Storage: Silicon Power S55 480 GB 2.5" SSD, $54.89 @ OutletPC
Storage: Seagate Barracuda 2 TB 3.5" 7200 RPM HDD, $59.89 @ OutletPC
Video Card: EVGA GeForce GTX 1070 Ti 8 GB GAMING, $364.98 @ Newegg
Case: NZXT Phantom 530 (Red) ATX Full Tower, $99.99
Power Supply: EVGA BQ 750 W 80+ Bronze Semi-Modular ATX, $49.99 @ B&H
Case Fan: Thermaltake CL-F011-PL12BL-A 40.99 CFM 120 mm, $5.49 @ SuperBiiz
Case Fan: Thermaltake CL-F011-PL12BL-A 40.99 CFM 120 mm, $5.49 @ SuperBiiz
Prices include shipping, taxes, rebates, and discounts. Total (before mail-in rebates): $1241.06; mail-in rebates: -$50.00; Total: $1191.06. Generated by PCPartPicker 2019-01-06 02:12 EST-0500. Can't believe I didn't end up pasting the link last time.
- 11 replies
-
- machine learning
- artificial intelligence
- (and 3 more)
-
Hi. I would like your opinion on the performance this build I put together on pcpartpicker.com can achieve on different types of DL algorithms. PCPartPicker part list / price breakdown by merchant:
CPU: AMD Ryzen 5 2600 3.4 GHz 6-Core Processor, $164.99 @ Amazon
Motherboard: Gigabyte X470 AORUS ULTRA GAMING ATX AM4, $143.88 @ OutletPC
Memory: G.Skill Aegis 16 GB (1x16 GB) DDR4-2400, $94.99 @ Newegg
Memory: Corsair Vengeance LPX 8 GB (1x8 GB) DDR4-2400, $54.99 @ Amazon
Storage: Kingston A400 240 GB 2.5" SSD, $36.00 @ Amazon
Storage: Seagate Barracuda 2 TB 3.5" 7200 RPM HDD, $59.89 @ OutletPC
Video Card: NVIDIA GeForce GTX 1070 Ti 8 GB Founders Edition, $599.00 @ Amazon
Power Supply: EVGA BQ 750 W 80+ Bronze Semi-Modular ATX, $49.99 @ B&H
Prices include shipping, taxes, rebates, and discounts. Total (before mail-in rebates): $1243.73; mail-in rebates: -$40.00; Total: $1203.73. Generated by PCPartPicker 2019-01-05 08:07 EST-0500.
I know the 750 W PSU looks like overkill, but I intend to upgrade the CPU in the future (if I believe I will get enough of a percentage boost in performance) and to add either another 1070 Ti (the board supports SLI) or maybe even eventually upgrade to two 1080 Tis. How would you improve upon this the most, given that the budget is $1400* for the build and case? The case is missing because I haven't found one that 1) is under $100, 2) offers good airflow, and 3) has decent aesthetics; I welcome suggestions in that regard as well. About the GPU: what is the difference in performance between the NVIDIA 1070 Ti ( https://pcpartpicker.com/product/4ZrmP6/nvidia-geforce-gtx-1070-ti-8gb-founders-edition-video-card-900-1g411-2510-000 ) for $599 and the Zotac 1070 Ti ( https://pcpartpicker.com/…/zotac-geforce-gtx-1070-ti-8gb-am… ) for $429.99? The price difference is considerable, and they both seem to have the same number of CUDA cores (2432). How great would the difference in performance be? So many questions... you need not answer them all. Just speak your mind on whichever question(s) you would like to answer. Assume I will buy all parts new when considering ways to add more bang to the $1400 budget. Thanks.
- 8 replies
-
- deep learning
- artificial intelligence
-
(and 2 more)
Tagged with:
-
Remember not to get political; this topic is not about the American military or any political parties. It is about Google, the employees protesting/resigning, and the implications of the use of AI for this purpose. Original source: https://gizmodo.com/google-employees-resign-in-protest-against-pentagon-con-1825729300 First seen by me: https://arstechnica.com/gadgets/2018/05/google-employees-resign-in-protest-of-googlepentagon-drone-program/ Pre-article summary: Google and the American military are working on "Project Maven", a military project to apply Google's image-AI know-how to drones and drone footage to help the military identify persons and/or places of interest, presumably to later capture or strike (wasn't sure how to word this without being graphic). A previous letter of protest garnered over 3100 signatures (Ars reports up to 4000 at time of writing). Given how large Google is as a company, this is probably a drop in the ocean, but it appears to be gaining traction in the media, with stories from Gizmodo, Ars Technica, and The New York Times as well. A previous Ars article on the letter and issue pointed out how the letter invoked Google's "Don't Be Evil" motto. The military claims that footage identified by "Project Maven" will not be solely acted upon but further examined. I do remember hearing (can't remember where) that the American military is seeing high turnover among drone operators due to what they have to see and do in the role, so it may be looking at this project to ease staffing issues. Article summary: Google is still forging ahead, even after the letter of protest, so now employees have begun to quit. While "about a dozen" seems like an even smaller drop in the ocean against Google's 70,000 employees, it is quite a drastic action to take, and I don't know if those employees will be able to get a reference from Google.
Here are a few quotes from resigning employees in the article: The ICRAC (International Committee for Robot Arms Control) has also released an open letter supporting the previous letter from Google employees on the matter: https://www.icrac.net/open-letter-in-support-of-google-employees-and-tech-workers/ I recommend you read the whole letter; it has some powerful language and wise words of caution. The ICRAC claims to advocate solely against the use of robotics/AI in target selection and the use of lethal force, not against robotics in general. Personal thoughts: Not sure what to add without getting political. I'm personally interested in AI and robotics but agree that it's a field that should not be used lethally in war. Saving people on the battlefield is different from helping with lethal tasks, and in the rare cases where the two overlap, the task and decision are still best left to humans. I had never heard of the ICRAC until today and am sceptical about them, but the letter was well written. I do raise the question of our reliance on Google. For some this may be minimal; for others, extensive. I myself use Google Calendar, due to my memory problems, to remind me of appointments etc., since it syncs to my tablet and phone. I rely on Gmail, since in the past my website's mail server has been flaky, so I have ported most of my accounts over to it. I use Chrome and regularly rely on recent tabs/history to watch stuff on my tablet/phone in bed/hospital/at the doctor's that I was previously watching on my PC or TV when I'm ill. So for me, it'd be difficult to stop using Google in protest of this decision, but it's a thought I am considering, and I may look into alternatives that can do the same tasks. I'd like to hear what people think about protesting Google as users, and how that might be possible.
-
In light of @VegetableStu's post [here], here is an appendix that sheds some light on how creepy and disturbing technology can be. @colonel_mortis, is there a way to have a warning message for particularly sensitive tech news threads that aren't suitable for young audiences, like a prompt asking "Are you 18+? Click away if you're not."? Is there a way to prevent underage members from viewing such sensitive tech threads? Source: Motherboard (Vice). Btw, I am redacting the subreddit where those nasty AI-generated fake porn clips and images can be found. This is disturbing beyond imagination. The thing is that you can do this yourself. All you need are the following: Windows 10 64-bit, a higher-end NVIDIA graphics card (e.g. GTX 1080 Ti), a multi-core Intel/AMD processor, some knowledge of coding, and a lack of empathy if you're planning to make fake porn. Once again, I am not linking to the AI-generating application here. If you're interested, go look for it yourselves. As of now, the app has a GUI, but installing it is not as straightforward as spamming the Next button, as it requires some tweaking in the Windows Control Panel. Basically what happens is you take an existing legit video clip of someone you're sexually attracted to, and then take a video clip you want the face changed in, like a porn clip. I don't know why it requires an NVIDIA graphics card, but given how NVIDIA themselves participated in making AI-generated fake faces, I wouldn't be surprised if the app on Reddit is optimized for NVIDIA's Pascal architecture. You can learn more about that in the thread below. I didn't realize that there are actually people buying AI-made porn clips.
Older people have warned that technology drives people further apart rather than closer, and I guess the people buying and self-completing with AI-made porn are: people who got dumped or divorced, people who desperately crave sex but resort to virtual sex instead, people who might be depressed, or people who are just sex-maniac psychopaths. What we know about the ramifications of AI and machine learning is just the tip of the iceberg. As technology becomes better and faster, this can lead to people committing heinous crimes, and I think it's time for every country to update their law books to keep in sync with modern technology. If you want to see an example, check out this face-swapping video from Dave2D. I can see court hearings becoming more complicated if AI-generated videos get involved, as they can completely ruin someone's life and get someone incarcerated for a crime they didn't commit, because a dummy judge/jury can't discern real crime scene footage from AI-generated crime footage.
- 6 replies
-
- machine learning
- artificial intelligence
-
(and 1 more)
Tagged with:
-
Sources: The Next Web & Motherboard (Vice). In compliance with the community standards, I've decided not to include any images of these fake porn GIFs. This reminds me a lot of that NVIDIA AI (probably powered by their Titan V) that generates pictures of people who aren't real. Here's another example of AI that can generate fake videos using machine learning. This is creepy. It looks like Elon Musk's warnings about AI not being regulated are real. Google implemented some sort of machine learning in Google Photos to pattern-match faces and objects. Say I search for "Luke Lafreniere" or "Linus Sebastian" in my Google Photos app; it can tell which photos have their faces. If I search for "Christmas tree", it will show which ones have one. I don't think Google, despite their notorious privacy invasions, would be nefarious enough to create a fake sex tape of their users, but this becomes problematic if the AI described in the OP gets used for revenge porn. Remember how Facebook is trying to combat revenge porn by having users send in their nude photos and hashing them to prevent them from being sent to other people. Looks like Facebook needs to up their machine learning algorithms further if they're serious about combating revenge porn. Basically, with this new AI, anyone can take a normal video of you from YouTube and create short porn clips. While it is easy at the moment to tell fake sex tapes from real ones, the mere fact that many people on Reddit are already self-completing with these AI-generated porn clips is concerning. They also pointed out that you don't need special hardware to do AI porn; all you need is a decent graphics card and processor (probably an RX 480 or GTX 1060 and a Ryzen 5 1600 or i5-6600). But the problem is that way too many people are gullible enough to believe everything on the Internet. Someone with a beef or vendetta can use this AI as retaliation. The first victims of these are celebrities.
There are studies suggesting that porn viewing desensitizes viewers and reduces white matter in the brain. https://www.wired.com/2014/06/is-it-really-true-that-watching-porn-will-shrink-your-brain/ The first use of this AI porn maker would be much worse celebrity beefs, followed by fake political attacks and fake sex tape allegations to make up stories about your arch nemesis. It will be worse than a leaked burn book, just like in Mean Girls.
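On the Facebook hashing scheme mentioned above: the idea is to store a fingerprint of an image, not the image itself, so re-uploads can be matched and blocked. Here is a toy average-hash sketch of that idea (this is not Facebook's actual system, and the tiny 2x2 arrays below stand in for real grayscale images):

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if the pixel is
    at or above the image's mean brightness. Similar images produce
    similar bit strings, so a service can flag re-uploads without
    keeping the image itself."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p >= mean else "0" for p in flat)

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

img = [[10, 200], [220, 15]]               # 2x2 grayscale stand-in
img_recompressed = [[12, 198], [219, 18]]  # same image, slightly altered
other = [[200, 10], [15, 220]]             # a different image

h1, h2, h3 = (average_hash(i) for i in (img, img_recompressed, other))
print(hamming(h1, h2))  # 0 -> matched as the same image
print(hamming(h1, h3))  # larger -> different image
```

Real systems use far more robust perceptual hashes over much larger images, but the matching principle (compare fingerprints, not pixels) is the same.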
- 53 replies
-
- artificial intelligence
- ai
-
(and 2 more)
Tagged with:
-
Source: Facebook Newsroom via CNET. I think many people, including myself, tend to rail against Facebook given their history of privacy-invasive practices, and just a few weeks ago I found their plans for combating "revenge porn" problematic. But as someone who has experienced depression, I'm actually in favor of this, and with that, thank you Facebook. I remember last year someone in this forum made a thread on how they were going to commit suicide, and thankfully many people jumped in and told him no. I don't know what happened to that kid, and I think that thread has been deleted by the mods. I know exactly how it feels to be isolated, feeling everyone hates you, your mind full of self-loathing and helplessness, and many people, especially young kids and teenagers, go to social media to express those feelings. Adults tend to be more secretive, and if they do post something suggesting suicide, it is very subtle. There's an off-topic thread about people's experiences with depression, and I actually shared mine there. My concern, then, would be: if someone posts a potentially suicidal Facebook post and the AI picks it up, will it automatically trigger a call from the National Suicide Prevention Lifeline, or will it just be a friend sending a message, as if the friend were tapping the depressed person on the shoulder? I'm guessing the AI's triggers would range from posts of sad song lyrics to pictures of knives or guns pointed at the temple, or anything else that might suggest self-harm. At this moment, it pisses me off when someone says depression is not real, or that it's just in your head and you'll get over it; those narrow-minded people will never understand depression until it hits them.
I wish Facebook and other social media services like Twitter, Instagram, etc. had something like this to help guide people through dark moments in their lives, show them they're not alone, and put an end to the atrocious and disturbing practice of encouraging other people to commit suicide.
-
https://www.military.com/daily-news/2020/02/24/if-its-not-ethical-they-wont-field-it-pentagon-release-new-ai-guidelines.html?utm_medium=Social&utm_source=Twitter#Echobox=1582596662 Air Force Lt. Gen. Shanahan, director of the JAIC (Department of Defense), has issued a statement regarding the United States' use of artificial intelligence in warfare. Shanahan said, "We will not field an algorithm until we are convinced it meets our level of performance and our standard, and if we don't believe it can be used in a safe and ethical manner, we won't field it." In other statements to reporters, Shanahan commented that the use of AI by Russia and China raises serious concerns about human rights, ethics, and international relations. He also said he does not want AI to be used by the military to track citizens, as is the practice in China. The Pentagon has pushed for more mandates regarding AI in recent years, but there are also concerns that those guidelines won't be recognized internationally. Shanahan also referenced a 2017 speech by Russian President Putin: "whoever becomes the leader in this sphere will become the ruler of the world." My opinion: contrary to popular belief, there are rules to warfare. For example, you cannot target any medical facility or transport. I think the move to a more regulated atmosphere is a good thing. AI already brings up a lot of ethics questions, but it is much more hotly disputed in terms of how it could be used in war. I believe that AI should be used in war to lower the number of casualties, but it definitely needs to be regulated by an objective human entity and undergo regular ethics testing. Skynet, HAL 9000, and Ultron are good examples of what could go wrong in the movies, but I think with the right infrastructure in place, there could be a better system than just blowing each other up.
Any AI that is designed and functional should be under heavy watch and have a kill switch. I think that this is a good move, and Shanahan is probably the best guy to help pave the way. He has made comments on this before and is one of the advisors on this kind of subject to the CIA and the White House. Obviously, we are not yet at the point of an AI capable of thinking entirely on its own, but it is good to think ahead.
- 61 replies
-
- artificial intelligence
- military
-
(and 1 more)
Tagged with:
-
Original article from Forbes: Why Siri, Alexa And Cortana Will Destroy SEO

People cannot always spend the time to hunt for good resources; sometimes they just need answers. We also don't want new businesses that are genuinely good at what they do to have to compete by hiring Google-tactics teams for first-page Google results against companies with bad products and bad advice. So the author argues that Google's position is not certain in today's changing world, because it doesn't answer your questions; it just guides you to e-content by a method that is no longer well suited to the searcher's purpose. Relevance: Google may potentially face popularity problems in the future if it does not adapt. Good luck using Siri to find those cat pictures when she thinks you said "hat pictures."
- 23 replies
-
Saw this article on Tom's Hardware. I think it's absolute genius that Elon started this project! The thought of strong AI in the hands of governments only is terrifying. http://www.tomshardware.com/news/openai-nvidia-dgx-1-ai-supercomputer,32476.html One little DGX-1 might not be much compared to the trillions spent by governments, but it puts them on the map, and besides, these guys will work much faster than any government org ever could.
-
http://www.theguardian.com/technology/2016/jan/27/google-hits-ai-milestone-as-computer-beats-go-grandmaster http://www.wired.co.uk/news/archive/2016-01/27/ai-go-google-and-facebook http://www.bbc.co.uk/news/technology-35420579 http://www.nature.com/news/go-players-react-to-computer-defeat-1.19255

Recently, Facebook and Google have both been looking at improving computer AI for the abstract logic game of Go, using the neural-network algorithms currently employed in the field. Go is played by 40 million people worldwide (mostly in East Asia, though by a few like myself elsewhere) and is considered a vastly more daunting challenge for AI than chess. Go has been a goal of AI research for decades, but a computer capable of defeating professional Go players in un-handicapped games has remained out of reach until now.

For context, the defeated professional (5 losses in 5 games) is Fan Hui 2D: http://www.europeangodatabase.eu/EGD/Player_Card.php?key=12633346 Fan Hui is the No. 1 ranked player in Europe and a genuine professional-standard player. However, he is still a fair way off the standard of the world's top players.

DeepMind now intends for its program, AlphaGo, to face Lee Sedol (https://en.wikipedia.org/wiki/Lee_Se-dol) in March, though at the time of posting this is unconfirmed. Lee Sedol is probably the biggest name in Go right now and has recently been the world No. 1.

These are exciting times both for AI and for Go players. Almost since Deep Blue, Go has represented something of an Everest for AI, and the 'Kasparov match', previously thought decades away, could now be within sight (though in my opinion this game should be against current world No. 1 Ke Jie, not Lee Sedol).
- 14 replies
-
- ai
- machine learning
- (and 7 more)
-
This is gonna be a weird topic, but I think it's a good discussion on the technology-vs-human-intelligence showdown. The topic is peace, and keeping the peace for all forms of living beings through a sufficiently smart AI (and I certainly don't mean the kind portrayed in movies): one that is incorruptible, that cannot be bribed or forced to change a true judgement, and that could do things as well as, or even much better than, human beings.

Let's face it, we're emotional beings. Whatever outcome there is, it's directly proportional to our collective emotional state of mind, especially when dealing with other human beings. The wisdom of crowds can be emotionally altered and manipulated into a lynch mob, which contradicts the very idea of starting with fair judgement. But an AI can judge and sense dispassionately and contain a threat, whereas humans can succumb to bribery, extortion, or physical threats, any of which can change the outcome or judgement.

Take the NSA as a primary example. They have some of the most powerful tools for intelligent big-data analysis; I'm pretty sure 98% is dedicated to securing our nation, but the leftover percentage is used for covert operations and other interests that would otherwise be deemed punishable. Imagine that system were extensively and exclusively run by an intelligent system that is incorruptible. Would it do the job close to a hundred times better, without shedding too much blood?

I'm not talking about AI robots ruling the world, but a static intelligent state that can control human-caused error, preemptively strike with precision, physically quarantine threats posed by humans, and make logical judgements. Do you think such a system should be in place in the future, or do you think humans should continue guiding and ruling over each other to keep the peace? A sort of intelligent, fair analysis, without the human intervention that is so malleable.