
mr moose

Everything posted by mr moose

  1. Maybe where you live, but where I live such contracts and EULAs are illegal; a game is a product, and a product cannot be made to be different after purchase. Aussie laws are why you in the US can enjoy a refund. So even though you think the case is closed: if you don't think any laws need changing at all, I am sure you won't mind going back to the days when you couldn't get a refund on software that didn't work. Ethics are a huge part of it.
  2. I didn't know we could apply different rules to consumer goods depending on whether they are a luxury or life support. Either it is ethical or it isn't.
  3. You haven't given a good enough reason to change the law. So far we have the threat of legal action because a law was broken by a computer program. If that is enough to send you into full "the law needs to be changed" mode, then you are not looking at the bigger picture or fully appreciating the law itself. Why? Defamation is defamation whether you intend it to be or not. If people could simply argue "well, it was not my intent to defame John Doe when I said he was being bribed by the mafia to keep city jobs in the hands of specific companies", then the law is pointless from the get-go. Watering down laws because you don't want the makers of a computer program to be responsible for what their program does is not a good enough reason. That's because publishers are not taken to court when the guilty party is known. If you actually read all of that, I am wondering why you still think the laws should be changed, or why you don't understand why people would consider Google or Facebook to be publishers in this instance. I mean, I know it's not common parlance, but it isn't exactly rocket surgery either. Judges judge differently because cases are different; if all cases were the same there would be no need for judges. Again, if someone could argue that it wasn't their intention, then the whole point of having defamation laws is moot. Publishers (first parties generally speaking: media, businesses, etc.) do not get to plead ignorance; second parties (people who repeat the defamation or republish it) can plead no ill intent, but it's not easy, because you have to prove you did due diligence to a certain degree. I wholeheartedly support this law in that matter, because defamation ruins lives and legal recourse should not be stymied by a simple claim of ignorance. The reason we have defamation laws is so people can seek compensation for damage caused by reckless or ill-willed people. It is more about justice for the victim than about deterrence, which is why it is a civil law, not a criminal law. You haven't actually evidenced that it is vague. Not only was my link to a very detailed description, but the link you provided echoes it. There was nothing vague about it: if you publish (communicate by any means) something that is untrue and it harms a person, you can be sued for defamation. It's actually pretty simple. The only thing this trial will have to deal with that it hasn't before is whether OpenAI are considered publishers, as they are the ones who programmed the software that did this. You wouldn't; getting someone's age wrong is a mistake. Fabricating a damaging lie that does do harm, on the other hand, could land you in big trouble. Don't make the mistake of conflating a benign error with a complex allegation. It doesn't matter how quickly you retract your statement; if the damage is done, you are liable. And that's the way it should be. No it didn't. Windows failing is nothing like telling everyone a politician is a convicted criminal. That argument would be an argument to prove OpenAI's negligence and thus liability: they published a program that did exactly what they programmed it to do. Again, intention means nothing when you are the primary publisher of said infringement. Not too many sentences ago you said it did do what they programmed it to do. It failed in its task to provide the correct information; we all agree on that. But intention means nothing under Australian law, for reasons I have already pointed out. I still think you are conflating two very different situations here.
There is a big difference between Windows failing and gross negligence leading to the financial damage of an innocent person. As a Windows user I take precautions against product failure; as MS suggests, I keep a backup of important information. I cannot take precautions against Windows telling everyone that I am a criminal who committed fraud when I haven't. It's just not a fair comparison. It was evidence that the program can be altered to prevent it from defaming people without shutting down the whole thing (a rough sketch of what such an alteration might look like follows this post). Remember, that is the argument several of you have brought to the discussion: that this lawsuit will somehow ruin AI development because it cannot be fixed. It's a software product; it does what they program it to do. There is little more that needs defining. Even the training they do is still defined as programming. I think your premise that it can't be fixed is wrong. All AI so far has shown itself to be improvable; if it wasn't able to be improved, there would be no point in developing it. If I said to you "these brakes only work half the time, I cannot improve the design any more, and they will never work 100%", would you want the product? No one would. It can be fixed and it will be fixed. I think that argument is not really related to the current issue: anyone who sets a sequence of instructions to be followed is a programmer, regardless of whether they are doing it through a WYSIWYG program or from the ground up in a programming language. For the purposes of this discussion and for the purposes of this lawsuit, OpenAI are the programmers of ChatGPT. Which is something that can be fixed. Again, I have never argued it is not a useful tool; in fact I think I am the only one in this thread arguing that it can be improved to the point it won't make these mistakes. So I don't know why you are trying to defend its "usefulness" when (somewhat ironically) you are the one arguing it can never be 100% accurate. It's also about preventing it from defaming more people and creating more problems than any good it might do. Which is a shame, because unlike the US laws on hacking, Aus defamation laws are quite specific. The problem here is that it is possible they might consider a software program as the method of publication. Citation? The law specifically rules out defamation if the information is true; it's in the link. Link to the case? Then you would have a lot of people hiding behind tricked AI to create slanderous and dangerous misinformation. As I said earlier, it is already hard enough trying to source accurate information on certain topics without adding a million and one completely fabricated pieces created by people hiding behind non-liable AI. That consideration doesn't change anything, though. I could argue I don't consider people who make robots to be engineers; it doesn't change the fact they are engineering a robot. Yes: when they refused to remove a search result that defamed the man, the courts decided they were publishing the information. Because, as the two links provided thus far have already explained, if you knowingly republish data that is defamatory, you are guilty of defamation (it is important to understand that when they determined Google to be a publisher, they were not talking in the sense of a book publisher or movie studio; they were referencing the term "publisher" in defamation law). Google knew it was publishing a defamatory remark when presenting it in search results, because they were officially asked to remove it.
Search engines are not publishers when they are simply returning search results; they become publishers when they return specifically defamatory results after being asked not to. Most probably do, but that isn't the point, and it certainly doesn't change how damaging defamation can be. I really don't think one lawsuit in Australia is going to have any impact on AI development. For one, OpenAI is not an Australian company, so the worst that will happen is they won't sell it here; it will still be developed by all the leading tech companies that already exist everywhere but Australia. The reality is this lawsuit isn't a pimple on the tech world. No I am not. We don't need the method of defamation to be sentient; I don't, and this law doesn't either. OpenAI released a program that, when asked about Brian Hood, gave a defamatory response. The courts likely aren't even going to care that it is AI, or how it came to that conclusion. Sloppy programming. I don't know why you keep going on about understanding and the like; I don't care how it came to these conclusions. If your argument is true, then the AI will have to continue with a ban on talking about people; if it isn't true (as I suspect), then the AI will improve to the point where any errors are not damaging enough to cause a lawsuit. Because even though you think it will somehow prevent AI from ever working again, allowing a program to break the law is the same thing as not having the law, which is not a very good argument. If OpenAI are found guilty of producing a defamatory program, then stiff shit to them; they'll just have to not release it in Australia. Especially if they can't fix it. I don't want half-broken programs being used by lazy people to determine who they are going to vote for in my country. It's always hard to say, because every defamation case is different. But with regard to search algorithms, they are vastly different from ChatGPT. The algorithms simply give you links to websites that most closely match your search terms; they do not create the websites based on your search terms. It is simply not fair to compare the two in this regard. Google have been found guilty of defamation when they refused to remove a search result, but that was after they were informed it was defamatory. Under Australian law, broadcasting, disseminating or publishing a known defamatory comment by any means is illegal. They probably could have avoided the guilty verdict if they had only linked to the website and not had the defamatory remarks on the results page. They may also have been able to invoke "innocent dissemination" had they not been told about it. The problem for OpenAI is that "innocent dissemination" does not apply to first parties, because they are charged with due diligence for their actions. Their defense will likely be along the lines that it was not their words but the result of many others' works (this will likely fail, because providing proof that others said it is going to be nigh on impossible, and republishing defamatory remarks is still illegal and requires a whole other line of defense, which will be hard to mount given they are the sole operators with control over ChatGPT). To the thread: as I have said before, if this goes to court (and it is not set in concrete that it will), it will be about how the act of publishing is interpreted, not about intent, disclaimers, or the fact that it is an AI program. Essentially it will come down to: are the creators of a program liable for what that program does with regard to defamation?
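For what it's worth, here is a minimal sketch of what "altering the program so it can't defame a named person" could look like in practice: a post-generation filter that swaps any reply mentioning a blocked name for a refusal. This is purely illustrative and is NOT OpenAI's actual fix (which has not been published); every name here (guard_reply, BLOCKED_NAMES) is hypothetical.

```python
# Minimal sketch of a name-blocklist output guardrail. Not OpenAI's actual
# fix (that has not been made public); it just illustrates that a model's
# output can be filtered without shutting the whole product down.

BLOCKED_NAMES = {"brian hood"}  # hypothetical list of protected names

REFUSAL = ("I can't provide information about this person. "
           "Please consult a primary source.")

def guard_reply(reply: str) -> str:
    """Replace any reply that mentions a blocked name with a refusal."""
    lowered = reply.lower()
    if any(name in lowered for name in BLOCKED_NAMES):
        return REFUSAL
    return reply

# Example: the raw model output never reaches the user.
raw = "Brian Hood was convicted of bribery in 2012."  # fabricated claim
print(guard_reply(raw))  # -> refusal message instead of the defamatory text
```

A blunt filter like this trades usefulness (it refuses even legitimate questions about the person) for legal safety, which matches what was reported: ChatGPT simply stopped answering questions about him.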
  4. Computers in the old days used to just crash if your IQ was less than or not equal to 0x000001; now that's a user error.
  5. You know that quote was from like the 1960s, when IBM really didn't want to bother with DRM, let alone piss off their legit consumers who spent a LOT of money on their products. Most of the DRM stuff you mention was late '70s and '80s. By that stage IBM (like nearly every other software maker) realized that if they didn't do something about piracy themselves, no one else was going to fix it for them.
  6. The whole and only reason this thread exists is because ChatGPT did break the law, whether you like it or not. That cannot be removed from the discussion; it is the fundamental basis for this entire issue, and you cannot simply ignore it. You either have to argue that it's OK to break the law because you don't know how it could be done differently, or you have to accept that the law could force changes to the way chat AI is published/used. New technology poses new situations, and cases like these are what test the law. Arguing that it should not be tested because you think the laws are outdated is madness; how do you propose the laws should be updated if they can't even decide whether or not AI makers are publishers for the purposes of this case? Also, Australian law has always punished publishers where they will not or cannot reveal the identity of the author of infringing content. Which, as I have said, is as it should be: you cannot make defamatory remarks about someone then hide behind a claim that it came from an anonymous source. It just doesn't work that way and never has. You don't seem to understand why defamation is illegal. Intentions mean nothing, disclaimers mean nothing; the whole point is to stop a person or persons from ruining people's careers and businesses with lies. Whether you do it on purpose or not is moot: if it is a lie and causes damage to someone, it is defamation and it is illegal. I don't give a fuck about any other countries; in Australia it is illegal, and in Australia it will be challenged, maybe this time or maybe next time, but it will be challenged. And it should be. One of the best things about Australia is that we have laws that go some way to giving people a means to seek justice when they have been wronged. Why you would not appreciate that is beyond me. So you think it should get a free pass on breaking this particular law because you don't think it can be improved well enough? No it's not, and you know it. Windows is a product that sometimes fails, and it is not illegal for a product to fail and be fixed. ChatGPT did not fail; it did what it was programmed to do and broke the law in the process. You're just going to have to deal with that fact. And this means nothing: you are conflating a product failure with breaking the law. They are not even in the same universe, legally speaking. And? They will be sued over what ChatGPT did say; changing a program, sign, or ad will not undo what has been said. Saying sorry will not make it all better if the plaintiff requires damages to be paid. You cannot get a "get out of jail free" card for saying sorry, just like you cannot get out of jail by using a disclaimer. Defamation is defamation. Reality check: this is only going to be in effect in Australia; most AI development is happening in the US, UK, etc. And if it does slow it down at all, that is not a bad thing. The laws are there for a reason, and it is far worse to muddy them and make exceptions than it is to slow the development of one arm of technology in one country. It did break the law; of course that underpins what I am saying. Absolutely it's fair to call it a programming issue. There is no information that it could have been fed (without cherry-picking the worst articles and Facebook posts) that would have led it to make the claims it did. Hell, ChatGPT has a huge group of people in the industry who are not happy with the amount of errors (hallucinations) it has. How accurate it is, is a direct reflection of the programming.
Feed it shit data and get shit data out = the programming is fine; feed it good data and get shit data out = a programming problem. So: a problem with programming. I never said it wasn't useful; I said if it breaks the law then that is something that should not be permitted. But the difference is that a Google search is links to other people's work on the subject, which you can sort through to draw your own conclusion; it is also other people's work, and they are individually responsible for whatever they publish. This is different: it outright made absolute claims regarding the legality of an innocent man. That is something Google has never done and hopefully never will. They tried, but failed. https://topclassactions.com/lawsuit-settlements/lawsuit-news/apple-maps-class-action-lawsuit-dismissed/ But I would say that they should have been held to account for the people who got lost in the desert and nearly died because they were following those maps. Just like I am saying that if a product breaks the law, then that product needs to be changed. So their fix was to prevent it talking about him specifically? Maybe that's all they need to do moving forward: prevent it from talking specifically about anyone. Especially if it is still going to get it wrong. The only way it could get inaccurate data was if the programmers went looking for it in dodgy places. Find me a single news article, or government, legal or justice system document, that has these "inaccuracies". Which is why he is seriously weighing up taking them the full nine yards. It is very hard, if not impossible, to find a document suitable to feed an AI that says Brian Hood was guilty of anything. This is why I am putting it down to a programming issue; I seriously do not believe OpenAI intentionally fed it bad data, even though there are a few opponents of AI who have made such claims (it's all in the Wikipedia page; I'm not spreading other people's opinions here). In that case they have a lot of work to do. That's the price of making a product with the potential of having legal ramifications. It's hard to find small startup companies that make medical-grade parts for surgery for that exact reason: they are too small to be able to afford a single lawsuit over a failed implant, and the cost of insurance against that makes the whole business nonviable. Then what's the point of the product if it is "always" going to make mistakes and will "never" truly understand a sentence? I wouldn't buy a car that only sometimes worked, I wouldn't read a book that occasionally had a page that didn't make sense, and I wouldn't use a search engine if it occasionally just gave me completely unrelated results. Especially if there was no way for me to know when it was wrong and when it wasn't without basically doing all the research myself anyway. And if you have to do that, then the product doesn't serve a purpose.
  7. They are victims of corporate greed; it has nothing to do with the technology. Companies like Apple, Google, Tesla etc. all have the money to pay ALL workers in their supply chain properly; they choose not to. There are enough resources on this planet for everyone to live a comfortable and healthy life with many, if not all, of the modern technologies we in the western world enjoy. One of the problems of having fewer real-world issues and being more interconnected: politicians are on display 24/7, which means they need every vote, which means they follow the social narrative. Those with the loudest voice or the biggest scare campaign will get the money. Lead and CFCs were an easy fix, as they had working alternatives and were actual threats. Many of today's "threats" are not actually being addressed as such, so one can only assume they are not actual threats.
  8. People engineer their products for where the market is; this has little to do with MS's intent and is mostly a result of the majority of the market using a Windows machine (anybody can write a program for Windows without having to pay a cent to MS). This, by the way, is changing at a reasonably fast rate as people move to tablets and phones instead of laptops/desktops. It's a shame others on this forum can't see this fact when it is the other way around: developers trying to engineer for a market where half that market is controlled by the OS maker (whom you have to pay developer fees to, amongst other costs).
  9. Here is a hard fact: this quote from IBM is proof they never wanted to make anything harder for genuine paying customers, but piracy got the better of them, and in the end they were left with no choice but to get on the DRM train and try to force their product to only work for paying customers. I don't care what anyone's personal opinion on the subject is; just don't pretend piracy is the result of DRM, or spruik half-arsed political/propaganda "research" as evidence to defend either.
  10. The problem is twofold. One: when you break the law, it is an absolute. If a device or program breaks the law and you know it is going to, then you have a legal problem first and an ethical problem second. And two: we know it will get things wrong, but the goal is not to just accept that; the goal is to get it as close to perfect as possible. Using the fact that it will get things wrong to justify it doing illegal things seems horribly wrong when the goal is already to not settle for wrong, but to work towards better accuracy. I've been waiting for this statement. No one is throwing the baby out with the bathwater; I have never once said stop developing AI. This lawsuit, if it goes ahead, won't stop AI; in fact, OpenAI has already fixed the problem that caused this issue. So the idea that this is going to stop ChatGPT or AI development is unreasoned fear. Again, if this gets thrown out, or OpenAI win and the judge says AI or its creators cannot be held liable for defamation, then just watch the number of AI programs and reports that get published with all sorts of claims that would otherwise be illegal.
  11. There are no articles or sources that say Brian Hood is guilty or convicted of anything, but that is what ChatGPT said. It was not the data it was fed, it was not the availability of good sources, and there was nothing written in abstract terms. It can only be a programming issue or intentional. I side with it being a programming issue, because they claimed it was not complete and was released so they could see how it went. Between that, and the fact that disclaimers don't mean what you think they mean (getting tired of pointing that out here): a disclaimer does not protect you from legal action if you break the law. If I created a Facebook page with a disclaimer saying not everything on this page is accurate, then went on to say how person X is -enter some nasty thing that will cost them their job-, then I am liable for that defamation. That disclaimer on my page means nothing, because defamation is fraud and illegal. So basically you guys are arguing that there is no way to make it accurate enough to avoid this kind of thing happening again (even though they have already improved it sufficiently for now), and that because of that this should get thrown out. If it is always going to get things wrong, then it is a useless technology. Would you make that argument about cars if they were going to take you to the wrong place X percent of the time? If they can't fix it then they shouldn't release it; if that means no more AI then so be it, but I don't think it will mean that. I am not convinced it is that hard to make it accurate enough to avoid defamation. I think the inverse issue that you guys might have missed is that if this gets thrown out or OpenAI win, it will literally green-light people using software under the guise of AI to carry out misinformation and defamation attacks at will. It's already hard enough sorting the real information from the propaganda. Also, just so you know, it has been banned in most public schools in Australia because students were using it to cheat; so clearly it is not that inaccurate, alongside the fact that it has been updated.
  12. He only has to prove that one person other than himself has seen it, and as a politician this is easily provable as damaging. The hardest thing for him to argue will be that OpenAI are publishers of it; unlike Google search, which only links to other people's websites, OpenAI did program and release this. It's anyone's guess how this will go; suffice to say, if it was a newspaper or a human that said what ChatGPT said, it would literally be an open-and-shut case.
  13. We don't have safe harbor laws in Australia; if you knowingly provide a platform or mechanism for someone to break the law, then you are aiding that activity and as such are culpable. It's the same with our privacy laws: just because someone posts something on Facebook doesn't mean that a company can scrape all that information without telling you what they are doing with it. https://www.oaic.gov.au/newsroom/clearview-ai-breached-australians-privacy They really only took data from social media and other publicly available websites, but the law clearly states you cannot do that. I disagree; the technology will still be improved and be legal. In this case, if they fix the product so it isn't horrendously wrong, then it won't be a problem. To be honest, I don't know why being forced to make AI more accurate so it doesn't break the law isn't something we should all be proponents of. EDIT: just some more on this: OpenAI has already released a fix, so this won't happen again, which evidences that (a) this issue was rooted in poor development and (b) it can be fixed. It will simply be up to the courts to decide if releasing a poorly programmed product that breaks the law is sufficient for a defamation case. That has a very specific motivator behind it which isn't related to civil law. Don't know what you are talking about. In this case it took easily interpreted data and got it so backwards that people questioned whether it was intentional. I know it likely wasn't, but the reality is, if it can't be improved beyond this, then it isn't technology that's worth having.
  14. I suspect we will see a small rise in PC sales as people find another way to play the old games they already bought once legitimately. EDIT: and before anyone quotes me just to say it won't change anything noticeably, I fucking know; I'm just pointing out that people will find another way to do it, because you can't deny people a product they have already paid for.
  15. If a product can't be guaranteed not to break the law, then releasing the product knowing it will is culpability at its most fundamental. Warnings and disclaimers are not a get-out-of-jail-free card when it comes to defamation and causing damage. Again, disclaimers do not carry the same legal weight as many people think they do; under Australian law, disclaimers do not absolve you of legal accountability when you have broken the law. I was talking to an actual lawyer today about this case, and she said that as far as defamation goes it's an open-and-shut case. The only thing to be played out is whether the Australian legal system wants to hold the creators liable for their product, or whether some other less obvious argument can be made as to their culpability. Also, it has been pointed out that there are no actual articles on public record that state in any manner what ChatGPT did, so the idea that it is parroting is flawed. There is clearly a problem with how it drew this conclusion; I would not be surprised if the entire data set it was trained on is subpoenaed. After that we will know pretty well straight up how it came to this. Until then, all we have is a black-and-white case of defamation and the marketing company trying to remove itself from its product. All that I have claimed about this law and how it works in Australia is in the link I provided earlier in this thread.
  16. Firstly, no reasonable person would associate ChatGPT with comedy. It is not a comedy, and it is not a publication designed for parody or pisstake; the rules for obvious jokes from a comedian do not apply. Secondly, the disclaimer doesn't always mean anything. Disclaimers (particularly in Australian law) are viewed as a way for people to try to limit responsibility. The legal disclaimers activity parks make you sign are only good for obvious risks inherent in the activity; in Australia, duty of care cannot be easily waived, and EULAs in Australia are not very concrete because you generally have to buy the product and start the installation before you get to read them. In this very specific case, a disclaimer is irrelevant if the content is fraudulent: claiming some things could be untrue, then making absolute claims that are definitely untrue, is generally considered fraud. https://www.artslaw.com.au/information-sheet/disclaimers-exclusions-risks/ Some of the comments in this thread are basically a propositional fallacy; just because some people see a danger or problem with AI does not mean they are afraid of it and want it stopped. Some people just want it fixed before moving forward, or at least want to make it clear to creators of AI that they can be held accountable for the effects of their AI in action. This is a pretty cut-and-dried case of defamation: we have an obvious imputation, we have the publication of the imputation, the imputation is untrue, and the imputation carries severe damage to the career of the victim. The only point that needs to be thrashed out in court is whether or not ChatGPT and its creators can be treated the same as any other content-creating service.
  17. I'd need to get someone to watch my eyes as I move my head; otherwise there is no way to be sure what is actually happening.
  18. No, the visual cortex actually does not take visual input from the eye when the eye is moving; it's called saccadic masking.
  19. Cones and rods have a persistence, which means when you look at something, whichever cells react to send a signal will not react again for a period of time that is a function of your personal biology, how bright the input was, and how dark the general environment is. Somewhere on this forum I linked to about 7 different research papers on the human eye and frame rate perception (but that was a few years ago now and I suspect they will be 404s). They nearly all concluded that after about 90 FPS no one could perceive faster frames (see the rough numbers sketched after this post). So it is highly likely that if there is a difference in first-person-shooter accuracy at higher frame rates, it is more likely to be explained by a combination of system conditions and not by just one factor. Also, the picture our eyes actually see is rather low resolution, with bits missing; the reason we can see so much is because our brain (the visual cortex at the back) takes all the data from both eyes and then produces the image that looks the clearest to us. It is also the visual cortex that is responsible for us seeing all those trick shape illusions. Fun fact: the brain ignores visual input when the eye is moving. Don't know why, but you can test this yourself by slowly panning your eyes across a room and seeing how jittery it is (the slower you go, the more jittery) compared to holding your eyes still and moving your whole head to get the same pan.
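To put rough numbers on the ~90 FPS figure above: per-frame time is 1000/FPS milliseconds, so the gap between refresh rates shrinks quickly as FPS climbs. A quick back-of-the-envelope sketch (the only figure taken from the post is the ~90 FPS estimate; the other refresh rates are just common monitor specs used for comparison):

```python
# Back-of-the-envelope: how much screen time each frame actually gets.

def frame_time_ms(fps: float) -> float:
    """Duration of a single frame in milliseconds."""
    return 1000.0 / fps

for fps in (30, 60, 90, 144, 240):
    print(f"{fps:>3} FPS -> {frame_time_ms(fps):5.2f} ms per frame")

# 60 -> 90 FPS shaves ~5.6 ms off every frame; 144 -> 240 FPS shaves ~2.8 ms.
# Differences above ~90 FPS amount to only a few milliseconds per frame,
# which is why any accuracy gain up there is hard to pin on perception alone.
```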
  20. Huh? They are some of the best-written laws on defamation in the world. How often do you think crazy cases get won here? In Australia you have to prove quite a lot: https://www.thelawproject.com.au/defamation-law-in-australia Yep, and fairly so; you can't just go around lying about people and ruining their lives, then hide behind the "someone else said it first" argument. The only way around being guilty is if you source the original and make it known that you are repeating what they have alleged; then you are publishing a truth, which cannot be defamation in Australia. And fair enough too: when Facebook and Google are publishing defamatory reviews, they are the ones who should be sued. If the review is clearly listed as being from a real person, then Google or Facebook are not sued; the person who made the comments is. Anonymity of the source does not give any platform the right to publish illegal content, and nor should it. It hasn't for newspapers and TV, and so it shouldn't for the internet or AI either.
  21. Every time there is a debate about manual versus auto, I find myself wondering why people get so invested in such a thing. Drive what you like; you don't need to go making excuses just because you like driving something that is technically worse. Also, many of the manual arguments are akin to defending Nvidia or Apple products: they are either personal opinion presented as general fact, or they completely misrepresent the truth. EDIT: also, we've been calling them autogomagics since the 80s, because try eating a hamburger while driving in city traffic in a manual. It's fucking messy and not much fun at all.
  22. Having a supported list means that there are products that are not supported; unless your list includes everything, you cannot have one without the other. The problem is people think not supporting something is a crime. In this instance I'll grant it's a little bit more complex than it appears superficially, but still. I deeply suspect people think it's a crime because trying to think more deeply about why AMD has feature requirements for their chipsets is too much effort. I.e., if we ignore the fact that motherboards would become a minefield of hit and miss (the chipset having a feature but the motherboard not having it), AMD do not have to place requirements on motherboards for their chipsets; it could actually be up to the mobo maker to decide how much and what quality of VRM they are going to use, and then price accordingly. This whole move is just to ensure customers know what they are getting (which I'm not completely against).
  23. I am sure it is not that simple, but I concede that when you have politicians pandering to the people for votes, then the current social narrative will trump rational policy.
  24. The vast majority of people upgrade their systems after 5 years, by which point you can rarely get a CPU that would fit your current mobo, let alone one that is worth upgrading to. Plus, after 5 years we generally have a new generation of RAM and generally need more of it, so it doesn't make sense to upgrade just the CPU.
  25. I never said there was no market or that it's all arbitrary; I was just pointing out that this type of fracture in support (a $70 mobo not supporting a high-power CPU) is something we see all the time across the board. Is that addressed to me or to the thread in general? Because I wasn't arguing anything in particular, just pointing out (again) that this type of thing happens all the time. The only reason I did that was because every time it happens we get a several-page thread of people bitching and carrying on like some big nasty company is making them touch their tongue on dog poo.