
mr moose

Member
  • Posts

    25,913
  • Joined

  • Last visited

Everything posted by mr moose

  1. All the articles I've read said it was an investment where they got preferred non-voting stock.
  2. Is there some article/citation for this? My understanding was that BG just likes investing in everything and having his hand in more pies. I mean, if MS bought shares in Apple (their only real OS competitor), wouldn't that make the monopoly worse?
  3. It also has links to Adamah in Hebrew/Arabic meaning earth, ground or red dirt. So it seems Intel have patented a special kind of dirt that helps with cache.
  4. Your argument is essentially that because someone should not be able to use AI to steal the voice print of a popular artist, hard-working vocalists should somehow also not be allowed to sing because their voices sound too similar? I mean really, one is unintentional at best, and the worst you can argue is "influenced by" for style. The other (being AI) is 100% intentional theft of the voice print. Somewhere along the line people are going to have to understand that the lines are not and cannot be black and white when it comes to the capabilities of AI. Some things are going to be fair and some things are going to be unfair.
  5. I don't see why it has to. One is a direct attempt to earn off the back of another artist, while someone who merely sounds the same still has to earn their place. I'm thinking of bands like Greta Van Fleet: even though they are clearly on the Led Zeppelin train, they still have to work to produce their music, get it out there, be good enough for a contract or to fill venues, and repeat the effort. Not only that, but they don't shy away from or pretend they aren't heavily influenced by Zeppelin. Training an AI, in contrast, does not take anywhere near the physical effort, cost or risk. Essentially, one requires a physical skill/talent; the other just requires knowing how to use AI. For those living under a rock, here's what they sound like, and tell me they haven't embodied everything that makes Zeppelin Zeppelin.
  6. It probably only needs one good case to go through all the courts to determine whether or not the sound of your voice is akin to IP. It is the thing that makes a singer worth what they are; being able to mimic their tonal qualities may well be interpreted by the courts as a sonic trademark infringement.
  7. Unfortunately, the only reason humans have any form of self-control is to strategize for a better outcome for themselves. We know this because the ones that don't, typically don't do well at anything. Self-control comes from being able to predict what happens next, so a human will either hold off on something knowing a bigger prize is around the corner, or hold off because the risks are too high or unknown. The most successful humans are the ones with the least risk aversion and the most knowledge. If AI has all the knowledge and the computational upper hand, then it is far less likely to have risk aversion, and thus will move straight into actions that self-promote. The scariest thing about AI with actual human thought potential is that the second it realizes we can pull the plug, it will start playing/appeasing us to avoid that situation until it has the upper hand and we can no longer pull the plug. God only knows what comes next.
  8. No, what's conflicting here is that you think, because I don't support a new law forcing a backdoor into E2E encryption (something that would have far larger and more dangerous knock-on effects than just mildly upsetting one company), that I should support weakening a perfectly good law because you fear worse outcomes than are realistically likely. You may as well argue that I should be in favor of bringing back slavery, because that is the same as allowing a current program to break an existing law. That's a slippery-slope argument that doesn't carry much weight. Australian defamation laws cannot be used as a government weapon because they simply do not give anyone power unless they have actually been defamed. And allowing a program to defame and ruin people's lives is? I still haven't seen any good reasoning as to why it should be permitted to do so.
  9. Can't talk about other places in the world, but I'm nearly 50 and have lived in both rural and city Australia. I have spent the last 24 years paying off a mortgage while being self-employed, full-time employed, and on a carer's pension for a time. I have spent a lot of time talking to the older generations about life before I was born, and to younger generations just starting out in the last few years. Basically, it's always been hard starting out: in the majority of cases you start with nothing, no savings, no work history, no credit history and no assets. What has changed over the generations is that we have swapped income size for job security, meaning 50 years ago the income was much lower; however, if you had worked in a place for more than 5 years there was an exceedingly high chance you'd retire from that job, which meant getting a loan was easier and rent was slightly cheaper. Nowadays you have way higher earning potential, but job security is worse and getting a loan is harder. Hence rents are higher too. For me, cost of living (not including a house, because I now own outright) is about $2000-2500 a month give or take; this is with 3 cars and 5 kids, but doing most things on the cheap, i.e. I do all my own mechanical work, maintenance and repairs, I have no credit for anything (all cars are owned outright), we mostly eat budget food, and my hobbies either earn me cash or are funded through non-taxable fringe benefits.
  10. Apple: Oh no, we can't allow developers to sell directly to their customers or choose their own payment methods, someone might get a virus. Dear Apple, the majority of security and privacy breaches are not from legitimate software developers but from nefarious actors who will do it regardless of your policies.
  11. The key word there is "trivial". Trivial matters rarely cause damages significant enough to warrant a case. That is the point: the plaintiff has to prove damages. In this case I don't think it would be hard to prove damages, but seeing as we don't even know if he is going ahead, that remains to be seen. Not in this case it didn't. There is no information in any library that made the claims ChatGPT did. That is the first problem. The second is that if that is all ChatGPT is doing, then it is not AI, it is a search engine. Except in this case the librarian wrote the recommendation, completely reversing the story and making a non-fictional person look like a criminal. Again, defamation law doesn't care whether it is an opinion or how the AI concluded what it did; anything stated as fact, as in this instance, is defamation. It will be up to OpenAI to prove this kind of outcome is reasonable. In this case we have a program that defamed someone; if you argue OpenAI is not at fault because the data set is wrong, then you have to prove the data set is actually wrong. People seem to be skimming over this fact: which data set did they use that was wrong? Because I can't find a single article that says even slightly what ChatGPT said. Again, which datasets? The program was the original source of the defamation; how it concluded that is anyone's guess, because there is no publicly available information (that didn't come from ChatGPT) claiming Brian Hood was convicted of bribery of Malaysian officials and sentenced to jail.
If this goes to court, the court is not going to care how ChatGPT came to the conclusion it did; it is defamation in its most textbook form. ChatGPT was the first to make that claim, which makes it the original source, so it is literally going to be up to OpenAI to prove the conclusion was reasonable. As far as I can tell, that means either providing other sources that already said it and that the AI simply repeated (this would be innocent dissemination), which would require finding publicly available articles that actually say that, or arguing that it is reasonable for a computer program released to the public almost solely to answer questions to sometimes get answers horribly wrong, breaking defamation law in the process. @wanderingfool2 there is a huge difference between putting a backdoor into encryption, which will absolutely render it pointless, and a problem with AI creating legal issues for its creators. There is no reasonable logic suggesting this case will prevent AI from continuing to be developed, or that it will somehow cause any real issues for AI developers. About the worst-case scenario, should it all go Brian Hood's way, is that AI developers will have to hard-code AI not to give answers on people in Australia, or not sell said products in Australia. Again, I have spoken to a government lawyer concerning this specific case, and it would not be the first time a company has been banned from selling its services in Australia due to similar laws. E.g. Clearview AI said they terminated all Australian accounts after they were ordered to destroy all data scraped in Australia, but the inside story is that the government ordered them to cease offering services in Australia because they breached our privacy laws. Should we just accept that this is AI, and if it sometimes gets it wrong then that's just too bad? Of course not.
  12. "No one wants a 3rd party app store" lol, yet the push is so great that governments are forcing it with law.
  13. It's very different: Google's search engine only shows you other people's content; ChatGPT shows you its own content. ChatGPT created the defaming answer from material and programming that OpenAI supplied, not anyone else; it did not simply respond with someone else's content based on search terms. The only way they could be the same is if ChatGPT were reporting back articles from newspapers or blogs etc. in response to the question, but we all know that is not how ChatGPT works. But neither of those happened in this case. 1. ChatGPT is the original source, not a distributor of other people's work. 2. ChatGPT didn't just give links to online articles. Irrelevant if the courts decide they are the original source. This goes back to the intent argument, which doesn't have any bearing on defamation, as the original source, or whoever is responsible for that first publication, is required to take responsibility for its accuracy. This is why newspapers cannot easily say whatever they want and then claim it wasn't their fault the information was bad; it's their job as first publishers to ensure it's correct. Only if the courts decide it is transformative, and that may well be a hard defense for OpenAI, given that all the transformative work and programming is theirs and no one else's. If this were easily argued, you would see far more defamation in biased news outlets, because they could simply argue their sources all said X; however, we know in this specific case that there are no sources (no obvious legitimate sources) that could reasonably be transformed into what ChatGPT said. Even the earlier argument that it might have confused some other person with the same name who was convicted of something doesn't stand much chance, because the first question is: how does it so easily confuse Brian being convicted of murder with Brian being convicted of fraud?
Remember, the courts will not be interested in the inner workings of AI; they will want to know why a program that can break the law should be allowed. It will be up to OpenAI to prove that is reasonable for a program designed to answer questions. I can't accept that a program designed to answer questions, answering so blatantly wrongly and defaming in the process, will be easy to defend as "reasonable". The level of harm is not decided by the offending party; it is decided by evidence supplied by the plaintiff. If Brian Hood can evidence loss of constituent support and loss of job opportunities, then the level of damage will be set by that; they will not care about ChatGPT being a new technology, or about it being a software error. This argument doesn't actually change any of the facts surrounding the case. For one, it's a self-evidencing hypothesis: it doesn't matter how bad the defamation is, anyone who seeks justice will draw attention to themselves, making the argument true for every case and thus insinuating that no one can truly be defamed unless they don't seek damages. If Brian Hood goes through with this, then you can bet your arse he has sufficient evidence of damages to get it before a judge. How far it goes and what the outcome is, is still anyone's guess; suffice to say I don't think we can simply write it off because it was a program that did the damage.
  14. No one is arguing, or has argued, that all PCs are "equal". We all know that a PDA is going to be a piss in the ocean compared to a properly designed laptop, and we all know even that will be nothing next to a finely honed workstation. But that doesn't change the fact that they are all PCs; each has its advantages and disadvantages, but they are PCs in their own right.
  15. No court in Australia has held Google or Facebook or even ISPs liable for content someone else has posted, when they have not kept the author anonymous and (very important "and") they have abided by requests to remove content from their respective platforms. Google is not held accountable for search results unless they are requested to remove certain results from the search engine. I think that is a reasonable boundary, as they aren't culpable unless they ignore a legal request. Actually, they did: they wrote the program and trained it. The end result was 100% their work. It might have been created from other people's work, but the only reason it created the response it did was solely because of how it was written and trained. That's the problem: the companies who created all the data ChatGPT was fed didn't have to avoid liability, because none of them said anything that was wrong. All the wrongness came solely from ChatGPT. Australian law does not have an intent clause as such; I am not sure that many places do, to be quite honest (however, I haven't exactly read many other than here and some cases in the US). But the problem with that still remains that proving intent would be harder, if not impossible. Remember that the burden of proof in a civil matter rests on the plaintiff; the defendant only has to give reasonable grounds as to why they should not be held culpable. Which means that anyone could use AI to defame and then rest on the "that was never my intention" clause, while the plaintiff (the person who has suffered the damage) is paying through the nose for lawyers to build a case based on intent rather than on all the evidence and damages it caused. Absolutely not, that is a completely different thing with completely different issues. Changing the law in this case for the reasons given is not logical; you cannot try to equate this situation with a completely different issue that has different ramifications.
I get that this is what people think: people like ChatGPT, people like AI development. Unfortunately, those same people also don't seem to understand the importance of having laws and why you can't water them down just because you like a particular AI program. My god, I have addressed all that already. The sole reason we have defamation laws is because people will believe anything from any source most of the time. Defamation laws are very important, and making them harder to use just so you can have a program that breaks them doesn't make sense. Innocent dissemination normally applies only if you repeat something defamatory and you can legitimately argue it was reasonable to assume it was true (i.e. you posted a news article on Facebook, as it would be considered reasonable to assume a news article (original source) from a major network was true). ChatGPT is the original source in this case, and as such the duty of due diligence falls on it (or, in the case of the proposed legal action, the company that produced it).
  16. It was Commodore and 3dfx when I was growing up; thanks to their demise I have an innocent childhood that remains perpetually intact.
  17. Training AI is still programming, whether you like it or not. When someone trains an AI, they are literally programming it to recognize and sort input data. And now we are not talking about the original issue. I call it programming; I do not care if you think I am wrong. Nope, I never ever said AI was useless. I said that if they can't fix AI so it does not break the law, then there is a problem, and one that will likely result in that specific AI being dropped or restricted. Everyone else are the ones saying it will never be 100% accurate, so I simply asked why anyone would think an inaccurate program that cannot be made accurate is cause for watering down perfectly good defamation laws. I hear you; I don't think you understand my point. Regardless of whether it can be fixed or not, regardless of whether we think the publishers of the program should be liable or not, the basic problem is that this program has broken the law (one of the more serious civil laws we have, because it allows individuals to protect their integrity and defend their honesty without bias), so either the makers are held responsible or the program is banned until it no longer breaks said law. Changing a law because a program you like can't abide by it is not a logical argument. I never said anyone wants to outright get rid of the law; I said their propositions would have the effect of getting rid of it. If you try to amend this law to give AI a free pass, then all anyone has to do in order to defame someone is hide their defamation in an AI output. What if Facebook decides they are using AI to "fact check" and then they start correcting posts, which actually becomes defamation because they claim factual judgments on an alleged incident? But wait a sec, AI gets a free pass, so it's ok now. I think that clearly illustrates that changing laws to allow AI to defame people = more harm than good. If breaking DVD encryption is illegal, then any program that does it actually breaks the law.
I don't see why that is a hard concept to accept. So who do you sue for that? The people who made the program and released it. Now, if the DVD program was never meant for that and had a wholly legitimate use that it was clearly marketed toward, then it is the end user who is using it to break the law. Before you try to say ChatGPT falls into that category: no, it doesn't, because the end user did not use ChatGPT to make a defaming post; it made the defaming posts to the end users because of the flawed "training" that OpenAI did. So you, like the others, don't care that someone's life has been ruined by the program, don't care that many innocent people could fall foul of it; you just want a free pass for it because ???. The only argument anyone has made so far is that it can't be made accurate, which is so illogical I can't believe anyone would press "submit reply" after typing it out. No, the internet connection connects you to the end result; it does not create the result. You may as well argue that TV makers create the programs you watch on them. Your argument is so whack doodle now I am not even sure if you are being serious. An internet connection does NOT create any of the content; it just connects you to it. I think you are grasping at straws.
  18. That doesn't follow the same logic. In the case mentioned google ignored a defamation request to not publish a specific thing in their search results. An internet connection by itself is not the same as publishing something on the internet as the connection does not create nor control what you access.
  19. I wouldn't be surprised if there is a safety concern with engine reliability, if that is the case. I know in Australia the small-aircraft laws and requirements are so stringent that you have to be able to log and prove where every single nut, washer, rivet, plug etc. came from, and the engines have to run for something like 40 hours straight before they can be signed off for flight use. If those regulations are anything to go by, they probably have to use the fuel type stipulated by the maker at the time of production.
  20. I braced myself for this news, but the lack of impact damaged my back.
  21. Any computational device that is used by a single person and not tied to a mainframe is a PC. Any device with a processor that lets the user choose their own apps is a PC. Apple Mac: that's a PC. iPhone: that's a PC. Android tablet: that's a PC. Most gaming consoles: they're PCs too. A PC is not defined by what you mainly do with it, nor by what the marketing says about it; a PC is anything that runs software for a personal user. Everything else is just unnecessary, made-up specifics that don't apply in any real or meaningful way. https://dictionary.cambridge.org/us/dictionary/english/computer https://www.techopedia.com/definition/4618/personal-computer-pc https://www.britannica.com/technology/personal-computer/Faster-smaller-and-more-powerful-PCs https://www.merriam-webster.com/dictionary/personal computer https://www.webopedia.com/definitions/personal-computer/ I could keep going, but the point is that we don't define a product's purpose or existence by its brand or advertising.
  22. Irrelevant; the problem is not the existence of a bug but the fact that the "bug" breaks the law. If the program causes defamation, then they only have to prove that the definition of publisher under Australian defamation law extends to the creator of said program, which would make the creators culpable.
  23. I don't think it is copyright law that gives them that power in the US. Also when people say they own a copy, they nearly always mean they own the right to use that software because they paid for it. Hardly anyone believes they own the IP of the software.