
mr moose

Member
  • Posts

    25,913
  • Joined

  • Last visited

Everything posted by mr moose

  1. mr moose

    Happy international men's day! In today's cultu…

    I had no idea it was. I suppose if it were International Women's Day it would have been all over the news.
  2. mr moose

    Here is an interesting take on the whole "Chine…

    I find that article echoes some of my commentary from years ago about how the current style and use of phones has more to do with the limitations of how humans can interact efficiently with such devices than with one company being copied. After all, there are only so many shapes a phone can take before it becomes awkward, and only so intuitive a UI can be before it becomes less efficient. Essentially, no one company has designed the bulk of what a smartphone is today; it's small iterations by many companies over many years leading to what we have.
  3. The magnet in it is one of those special MagSafe magnets, and that's what makes it very expensive. It also took quite a lot of R&D to work out how to extract more money from Apple customers without giving the game away.
  4. mr moose

    PC is back up with new cpu and mobo. what ever…

    I had exactly the same thing happen to my PC a few years ago: turned the power on and bang, everything died. I never found out whether it was the PSU or just a power surge, but I'm pretty sure it was one of them.
  5. mr moose

    well was what supposed to be a simple case swap…

    that just sucks.
  6. I've been trying to explain this to people for what seems like forever now. It seems people just want to believe the internet tropes about security updates and flawed Windows updates.
  7. I did the math on how much I pay MS every 8 years (my average upgrade cycle) to have the latest OS and updates. It turns out to be about A$170, or $1.77 a month. They are currently offering me 100GB of cloud storage with web Office for A$3 a month. I would be happy to pay $5 a month to always have the latest OS on rolling updates/upgrades if it included Office and some handy cloud space.
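A quick sanity check of that arithmetic (a minimal sketch; the A$170 cost and the 8-year upgrade cycle are the figures quoted in the post above):

```python
# Spread a one-off Windows licence cost over an average upgrade cycle.
licence_cost_aud = 170      # approximate cost per upgrade, per the post
cycle_months = 8 * 12       # 8-year average upgrade cycle = 96 months

monthly_cost = licence_cost_aud / cycle_months
print(f"A${monthly_cost:.2f} per month")  # → A$1.77 per month
```

Which matches the figure in the post, so the A$3/month cloud bundle really is comparable money.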
  8. mr moose

    This thing just showed up on my PC after a Wind…

    How would that go from a security perspective? Is the desktop as removed from admin rights as any other software, or is it intrinsically a part of the OS?
  9. mr moose

    This thing just showed up on my PC after a Wind…

    I got that too. I don't know who to be more pissed at: all the easily led people who make this type of product promotion viable, or MS for putting it there.
  10. mr moose

    I feel like the gn vs ltt threads are hitting f…

    It's neither dismissal nor elitism, nor is the observation the cause of the situation. Having followed many a controversial topic pre- and post-internet, I can assure you the ability to communicate faster and without barriers is the single largest cause of the observed bifurcation of most things. It does not matter whether it's TV shows, YouTube channels, politics, religion, sporting teams or even poetry and philosophy: people get an ideal stuck in their head, from poor decisions, poor journalism, poor critical thinking skills or simply peer pressure, and then it takes more than evidence and logic to unstick it. In this case it is something as benign as two YouTube channels at odds, and there follows a bunch of users defending the honor of people they don't even know as if their own validity depends on it. I think it is a fair observation that said people are slightly smarter than a tomato, but not by much.
  11. mr moose

    I feel like the gn vs ltt threads are hitting f…

    In this day and age the ability to adopt a particular leaning will get you a job as a journalist, while you only need an IQ slightly north of a tomato's to get onto a forum. Hence the debates are pointless at best and intentionally dishonest at worst.
  12. You're still ignoring the fact that some humans will sound like others naturally. It is grossly unfair to tell someone they aren't allowed to record themselves simply because computers now have the ability to accurately copy the human voice. Again, computers will only copy a human voice if they are programmed to do so; it is intentional copycatting when done with a computer. When it comes to humans sounding the same, you can't just assume they are doing it on purpose.
  13. The only thing in place that keeps us at arm's length is underarm bowling in the cricket, and whether the left holds power in one country while the right holds power in the other.
  14. It stops people from making wild claims, defaming others, advertising falsely and engaging in anti-consumer practices in general, because a warning does not excuse false or misleading material. How is that stupid? Because everyone here fails to understand the difference between upholding a valuable law and possibly hamstringing a small part of one program. This ain't the US, and thank fuck too. I just couldn't bear to live in a country where people think a warning or disclaimer somehow prevents damaging information from actually being damaging. You are conflating way too much and missing the core issues.
  15. That's what I got out of this sentence; it may not have been your intention, and I apologize. We already have laws that do that though, they just don't specify the tool. You aren't allowed to murder; the law does not say you aren't allowed to murder with a knife, because knives have other uses and shouldn't be singled out. AI is the same: no law should specify AI unless the problem is exclusive to AI. So in the case of this thread the problem is AI being used to mimic artists. The solution already exists in law; it just needs a case and a few judges to agree on whether or not sonic character is trademarkable. It is likely that any ruling will not specifically target AI but simply say any use of software to copy a voice = no. Just like it is illegal to copy a traditional trademark: the law doesn't say you mustn't use a specific type of printer to make the copy, it says you can't use any printer to do it. Except there is a difference between using software to replicate a specific singer and someone just sounding like them. If you ban all forms of singing that sound the same, you deny legitimate artists their work simply because you don't want a computer program being used to intently and very specifically copy an artist. I didn't miss it, I disagree with it. I think there is confusion over what you mean by other ways; you argue for intent to be valued a lot, but in this discussion I don't see it. You know that a group like Greta Van Fleet might very well sound a lot like Zeppelin, even as a tribute band, but they have created all their own songs and worked their way up through the shit fest that is the music industry to get where they are. They did it 99% on their own sweat and hard work. But under your law they would not be allowed to busk, let alone have a record.
And this is all because we want to stop someone with no musical talent, no hard work and nothing to risk (who doesn't have to put in 10 years of flogging it out) from making a direct copy of a specific artist to cash in on their fame. These are two very different situations, and I don't see how one law banning the hardworking artists because AI can do it faster and more specifically seems fair. I don't think you quite follow what I mean. The problem is you can't blanket-accuse all artists (who sound like someone else) of doing it solely and intently to cash in on said artist's fame. It doesn't even follow logically, because the amount of effort and work an artist has to put into developing their music to sound exactly like said artist is the same effort that has to be put in to be successful as themselves. You can't just mimic an artist and be as good as them; you have to work at it for decades. The same cannot be said for AI: people programming AI to sound exactly like a specific artist can absolutely be accused of targeting and mimicking to cash in on another's hard work. In fact, there is no way AI will recreate a perfect impression of just one artist unless you train it specifically to do so. They are just two very different situations completely. Not AI, but any software or computer-aided tools. Why? Because they are done specifically with the intent to cash in; you cannot say the same for humans, for the reasons I have given above.
  16. mr moose

    Just a reminder that Microsoft Edge is awful in…

    It's what MS did with Skype too.
  17. Yes, it's a real problem, but the problem is it doesn't affect 100% of the population, and banning/censoring does not remove 100% of the problem; the perception is wrong. The problem with your argument here is that you think defamation laws in Australia are somehow wrong because you want AI to be able to be wrong without consequence. Changing these laws to allow that to happen is a real problem. It's like your logic changes depending on the law in question. I at least am using the same logic to critique all the laws: previous, proposed (because they are proposing back doors in encryption) and current. There you go again, you keep banging on about these warnings. I am tired of repeating myself; read the damned links that were provided. Warnings don't mean jack shit when the law is broken. And you think I fail to understand. Your problem is your lack of desire to understand why it matters that companies can't have a scapegoat warning to excuse them from breaking the law. Whether you like it or not, defamation occurred and Brian's character was impugned; a warning does not remove the legal conditions that surround such an event. If it did, then it would be open slather for false news and slander: every newspaper would simply put a warning at the bottom and then write whatever they damned well felt like.
  18. You have quoted three separate issues that are not the same thing; the last one was about the end result, not how it came to it. I.e. the court does not care how the AI concluded Brian was a convicted criminal, but it does care that the AI made those claims. I really don't think you want to understand how the law works. I don't need to know how ChatGPT works to know that when it defamed Brian they can't use the excuse "we are working out the bugs". Defamation does not work that way. You really need to lose the idea that defamation requires some sort of origin context. It does not, and the courts will throw out OpenAI's argument if they try to excuse what happened under the guise of error testing, "we warned people" or "it was an unexpected anomaly". I'm done. The law is pretty simple; you people just don't like that it applies to AI as it does to everything else.
  19. That's for critics; it is for people who want to say things like how they think the food tastes, or how good the service was. It does not apply when you make claims that are objectively wrong. That is why some Google reviews that could not be verified as being from real people, or as being about events that actually happened, were removed under defamation law.
  20. It seems to me that this whole thing can be boiled down very simply. Under Australian defamation law, in order to win a case you need to do the following: 1. prove that something someone said/published is false; 2. prove that the alleged perpetrator is the source of the publication; 3. prove damages significant enough to warrant a case going before a judge. In this case 1 and 2 are basically proven: we know ChatGPT is the source of the publication and that what it said was not true. 3 is yet to be seen, but the case will likely be thrown out if Brian can't prove damages; if he can, then it goes to court and the court will have to decide whether the creators of ChatGPT are liable or not. Nowhere in Australian defamation law is intent or reasoning considered important. This means that if I call the head pediatrician at the RCH a pedo and he loses his job, the courts will not care why I think that; they will not care whether I truly believe it or not; they will only care whether it is true or false and whether it caused damage. In this specific case, the courts will not care how ChatGPT came to the conclusion it did; they will only care whether the publication was true or false and whether it caused damage. What sets this case apart from many other cases, and what makes it the first, will not be so much that it is AI, but that the publishers (the ones being sued) are one step removed from the output of ChatGPT. This could literally go any way, but my feeling is that because they can't claim innocent dissemination and they can't rely on warnings, they are going to have to sway the courts using either an argument we haven't discussed here or a claim that this is par for the course and we should just get used to it. I can't see the courts accepting that, because most of Australian civil law is set up to support the victim, but stranger things have happened.
  21. I think you will find no one is arguing there should be a law against AI; I don't even know where you got the idea from. What people are saying is that you shouldn't use tools like AI to infringe copyright (assuming you can) or to infringe trademark (which is yet to be established, maybe). Just like you aren't allowed to use other legitimate programs to break the law. You know, like the internet: it is used to break the law on so many different levels every day, yet most of us argue it is an essential service (it just needs to be monitored in some cases). AI, like every other technology humans have developed, will have to be used within the realms of the legal system of each country it is used in. If that means not using it to create music that sounds exactly like another human, then so be it; that is not a law against AI, that is a law against its use for a specific thing. It seems to me that you are arguing AI should be allowed to break any and every law simply because it is AI. That makes no sense. I know there are some dumb laws and some corrupt laws in every country; however, the vast majority of them are there to make life better for everyone. Any tech that you think should be above the law must have some exceedingly positive uses (like solving world hunger, cancer, HIV and maybe a half dozen other things) for that argument to be even worth making.
  22. And in the real world they would be just as open to a litigation case as any other. It really comes down to the damage as to how likely a case will be made. The courts won't care what AI has or does not have; they will not care how it came to its conclusion. It is a pretty much black-and-white case of defamation: all that needs to be proved is damages, and a case made for the publishers to be culpable. That's it. Arguing that the program has limitations only tells the court the program should not have been released to the public with such known limitations. If Brian goes through with this, then OpenAI are going to have to prove that it is reasonable for their program to respond to questions about people in a non-factual way that absolutely will break defamation law. That's it; it does not matter whether we think this is fair or not, that's how the law works. Defamation only happens when there are damages, and when there are damages the person who did the damage (or permitted it to happen) is culpable. As I have said countless times before, this will be the first time (if it goes ahead) that the courts will have to decide if a program's output/effects are the responsibility of the publishing company. To date it has not been the case, because most times the product is used in a manner beyond the control of the manufacturer (i.e. knives, guns etc.); however, this is the first time the product did exactly what they made it to do. Again, whether you think that is fair or not is moot. Really? You're asking if a program designed to answer specific questions based on its creators' training is different to a random word generator that might spit out an actual sentence once every million runs? Of course they are different, very different. ChatGPT is not a random word generator; it is literally programmed to respond to inputs in the most relevant way. You're not a public figure whose election and future job opportunities hang on public image.
Yes you can. The issue is not whether it is defamation; the question the courts have to decide will be whether it is OpenAI's responsibility. Remember ChatGPT is a closed system; it was created and trained by OpenAI, and there is nothing anyone can do to alter training or interfere with the process beyond trying to change the way you ask it questions. That means that ChatGPT was the original source. You can't have it both ways: either the data they fed it was wrong and ChatGPT is only guilty of innocent dissemination, or the data they fed it was accurate and the program defamed Brian by being the first source of defamation. That is how the law works, and to be honest I like it that way; it means you can't wriggle out of being held accountable by trying to argue you made a mistake. And yes, there are plenty of other laws that are exactly the same: if you take your eyes off the road for a second and careen into oncoming traffic (assuming you survive), you can't just say "I made a mistake"; you are going to be charged with culpable driving and you will face lawsuits for damages. Defamation is the same: if your words (being the original source of information) cause damage, you are going to be open to paying damages; ignorance is no excuse. And? Did any of those articles say what ChatGPT said? Because as I said earlier, I cannot find any articles that claimed Brian was guilty of any of it. All the articles say he was a whistleblower. It's not common knowledge among common people; however (as I have already pointed out), defamation law doesn't care why it is wrong. How many times does it need to be said: the LAW does NOT CARE why something is wrong, only that it is wrong, that it is the first instance, and that damage has actually occurred. Yes they do, if the court is going to side with innocent dissemination. Without any prior article or public statements to the effect of what ChatGPT said, ChatGPT is the original source.
Other sources of similar data are crucially important to OpenAI's defense in this case. Again you are trying to reason the law is wrong simply because AI gets shit wrong. That's not how it works. Which I have been saying from the very start: this is an open-and-shut case of defamation; it has not gone to court yet and may not. If it does, the facts are not going to centre around AI or programming or training or any of that. That is my point. I don't know why people have such a hard time understanding this. You can't make those claims without any of the evidence. We have no access to any of it, so we can only speculate on what has been stated. Which, if you go through all of my posts, you will see are all qualified with if statements: i.e. if Brian goes ahead he is going to have to prove damages; if he does prove damages then he will have to prove ChatGPT was the original source (not hard, given it is, and the publication timeline for everything is there to prove it); then if/when that happens OpenAI will have to defend the actions of ChatGPT. The courts are not likely to accept that AI is just dumb sometimes and we should all accept it; OpenAI is going to have to convince the judge/s that it is reasonable for a program to defame someone because it can't be made more accurate. On that reasoning, if this goes ahead it doesn't actually look that good for OpenAI in Australia. You are aware that I have already spoken to a lawyer personally on this very subject, right? One that works for the government overseeing advice to departments concerning these problems and the problems of privacy law in a tech age. You haven't seen the evidence, nor have I, so you cannot argue there are no damages or defamation (except for the fact that it looks like defamation, smells like defamation and is concluded by many lawyers to actually be defamation). Awareness of the case does not change the existence of the defamation; that is a logical fallacy. What I know is not the deciding factor.
What you know is not the deciding factor; what was published and what he can prove is the deciding factor. Only one person has to have asked it; if they went on to publish it, as they did, and it then got back to him, meaning it likely did the rounds on social media, it has been a publication originating from ChatGPT. Everyone else who asked and shared it is only guilty of innocent dissemination at best, and likely they won't even get questioned about it. Remember, defamation law only requires that one other person has seen it and that damages have occurred, so this line of reasoning is irrelevant. Again, arguing that taking someone to court only makes the defamation worse is literally arguing that no one should be able to seek justice for defamation, because the second they do they make more people aware. It is exactly the same with rape cases, BTW: victims do not speak out because they feel judged, embarrassed, dirty, you name it. But I am sure you wouldn't argue that going to the police and having a perpetrator charged is worse for their image than getting a rapist off the street. A little more grounded? So far everyone is arguing without considering the facts; some are even trying to argue this is going to set back AI development. I mean, you can't get much more grounded than my statements: I am only repeating the facts and the likely outcomes based on IF the story as presented is true. I did read what you said; you are not comparing apples to apples. One is a proposed law that affects everyone negatively (in my opinion), and the other is an existing law that only affects people who actually break it (causing harm to another) and that requires good old-fashioned proof to support the allegation. Bollocks. Think of the children is an argument to abolish/ban something for everyone because of a "perceived" problem. Successful defamation is not a perceived problem and it occurs on a case-by-case basis; therefore it is not a think-of-the-children argument.
It is a "make sure your products don't break this law or you will have to face court every time someone can prove it did them harm" situation. I really don't see what the problem is with that. Irrelevant to this case, so not good reasoning. Defamation law does not care about percentages; it is solely there to give victims of damaging false information an avenue to get justice. It does absolutely nothing else. It cannot be used as a government weapon, and it cannot be used to undermine legitimate information. I guess you have failed to read any of the links I provided: those warnings don't mean shit in Australia. Nobody can put up a warning and then break a law and say "but I gave everyone a warning". Warnings do not excuse you from breaking the law. FULL STOP. I think you will find that the opposite is true: by weakening defamation laws to exclude AI you give people an avenue to commit defamation on a corporate, political or personal level and then get off scot-free because "it was the AI".
  23. Which is why I said it would need a good trial through all the courts before a determination/precedent could be made regarding this emerging ability to copy a voice and style more easily.
  24. There are always two sides to every story, and the media usually only tell us the side that sells their articles. Is it any wonder we think every company is run by a sexist, racist, megalomaniacal, greed-driven, narcissistic dinosaur?
  25. I did not actually say copyright, I said trademark, but that's kind of moot anyway because you are conflating all things as being equal. Using AI to mimic a singer's voice to the point people will be hard pressed to know it isn't them is a very intentional thing to do. This does not happen if someone just asks AI to sing a song; they have to specifically train the AI to sound like person X. That is intent, which, if the voice could be trademarked, would be illegal. Another singer who just happens to sound like person X, on the other hand, should not be told they cannot perform because they sound like someone else. To assume the two are equal is unfair at best and illogical at worst. And it seems some people have become so enraged by this that they defend AI beyond reasonable logic. I am not advocating for the blind leading the blind or supporting "fuzzy laws"; I am merely pointing out that somewhere along the line these things are going to find a common ground. It seems more accurate to say that the legal ramifications of AI have you more upset than the actual ramifications.