Sauron

Everything posted by Sauron

  1. I know, I'm wondering why they left that path in despite no official support, only to remove it later.
  2. I guess in that case the question becomes "why did it ever work in the first place?", considering C2Ds were never officially supported.
  3. I agree they wouldn't care; I just don't understand why they'd intentionally and suddenly start using this instruction they weren't using before. I guess it's possible they had a "core 2 support" flag in their pipeline that they intentionally removed and it caused the compiler to sprinkle the instruction around, but I doubt they went out of their way to use this specific instruction more.
  4. I think it's unlikely this was intentional, someone probably accidentally changed whichever compiler option was telling it not to use that instruction and the rest is history.
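    To illustrate what I mean (the exact instruction is my guess here, I'm using popcnt as the example since it's the classic "not on Core 2" one), the same C source gets either a software fallback or the native instruction depending on a single compiler switch:

        #include <stdio.h>

        /* With -mpopcnt (or a -march that implies it) GCC emits the hardware
         * popcnt instruction for this builtin; without it you get a plain
         * software fallback that a Core 2 Duo can still run. */
        static unsigned count_bits(unsigned long long x)
        {
            return (unsigned)__builtin_popcountll(x);
        }

        int main(void)
        {
            printf("%u\n", count_bits(0xF0F0F0F0ULL));
            return 0;
        }

        /* gcc -O2 bits.c           -> runs fine on a Core 2 Duo
         * gcc -O2 -mpopcnt bits.c  -> emits popcnt, illegal instruction on a Core 2 Duo */

    Flip one flag like that somewhere in the build pipeline and the instruction starts showing up wherever the compiler finds it convenient, which is exactly the "sprinkle it around" scenario.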
  5. Sauron

    They caught this idiot in his red Mercedes doin…

    If that guy was going 200, I assume the guy in front was going at least 180; otherwise that would have been a guaranteed crash, with nowhere for the Mercedes to swerve to.
  6. There are plenty of proprietary drivers for Linux; they just aren't listed in Debian's repos, because Debian only offers FOSS packages by choice.
  7. It's not so much that AMD has great Linux drivers but rather that at least their drivers are open source, as opposed to Nvidia's. In my personal experience, Intel iGPUs have worked the best on Linux.

    The difference being that while Torvalds is actually competent in the field, Jobs was not; his talents lay in marketing and business management, not in tech. While Torvalds is known to have a temper, he can be reasoned with and has endeavored to make the kernel community more welcoming, as well as trying to be more polite on the mailing list.

    Well, we have the timestamp: "always" seems to mean "less than a week". On real hardware, where you might encounter legitimate driver problems, at most we're talking two days, since here he was still in a VM:
  8. One of the most respected textbooks on operating systems is the "dinosaur book" https://archive.org/details/operating-system-concepts-9e/mode/2up
  9. I assume most people just don't care, and if you do care that is probably the only real way around it.
  10. So... just use something else. I told you you should not be using i3 or other barebones window managers if you don't know what you're doing. The advantage of a WM like this is that you can spend time (I'm talking months) customizing it to behave exactly the way you want; just throwing in someone else's configuration defeats the purpose, and you might as well use a preconfigured desktop environment which won't require weeks of muscle memory training to be usable. The terminal emulator is a program like any other; if you don't like the one you have, just install another. There are dozens of options.
  11. https://wiki.debian.org/DesktopEnvironment#Installation_of_a_desktop_environment
  12. "debian desktop environment" is just gnome afaik
  13. No.

    Yes, using kernel mode setting. Or you could just open a full screen terminal on your GUI; that's still a graphical UI, just a more barebones one than the average desktop environment.

    It looks like DWM at a glance, but it could also just be i3 or another tiling window manager; you can achieve more or less the same things with either. I've used i3 extensively, and it has many advantages if you know what you're doing, but I would not recommend it to a beginner. If you try i3 I can already see the dozens of posts here about "how do I get a desktop background in i3" and "how can I get audio in i3"... maybe stick to something ready-to-use if you don't know what you're doing.
  14. I try to read as close to direct reporting as possible (or at least outlets that are clear about what sources they're referencing), but yes, a lot of online news is like that, which imo speaks more to the generally poor state of reporting than to the quality of AI articles. I did some work for a Linux blog once but I quickly gave up, precisely for this reason: you were pretty much expected to pump out at least an "article" a day, and there was no differentiation in pay or recognition between regurgitating other outlets and actually doing your own testing and research, which obviously took considerably more time and effort on my part. What I'm trying to say here is that while humans can potentially do actual reporting, as rare as that is, AI pretty much can't, and therefore an outlet using AI en masse will never be able to be more than a shitty content farm unless the technology changes significantly.
  15. While of course humans can write shitty articles as well, AI as it is right now is pretty much incapable of journalism because it can't contact sources or witness an event. It's one thing to summarize your research and have the AI convert it to article form, it's another to expect it to pump out an article given just a title or a recycled press release...
  16. Part of this is just the inherent problem with people being forced to work to survive and companies seeking profit over anything else. If they can screw you over to make ever so slightly more cash, they will in a heartbeat, of course. We should seek social solutions for this rather than take it out on the technology.

    I don't get the impression we're in for a situation where AI can take over most human jobs. So far the only areas it seems to be actively threatening are the ones producing drivel: low quality blogs, content farms... I see it going the way self driving cars have; it's always "almost there" and "at this pace..." while never actually being able to completely replace taxi drivers, much to Uber's chagrin.

    Moreover, it's often possible to substitute a machine for a human, but the machine costs more than just hiring a human. Despite everything, companies still abuse workers in sweatshops because it's cheaper than automating the task... which isn't a good thing, but it goes to show that just having the technology that could in theory carry out a human task doesn't mean we'll use it. Large models like ChatGPT and Sora require immense computing power, in an age where generational hardware improvements aren't as large as they used to be.
  17. It's actually a lot more now, they just aren't manually assembling the cars anymore; they operate the machines that do. Sure, we no longer have the Modern Times style "sit here tightening (deez) nuts for 12 hours a day" type of job, but that's a far cry from saying that humans are no longer part of the manufacturing process just because robots can do those mindless repetitive tasks for us. As I argued before, computer scientists and engineers aren't just code drones.
  18. I would say it does matter; having built-in checks decreases the likelihood of obscure errors regardless of your experience level. It makes sense to recommend languages with these features when possible, especially in mission critical applications. Now, that doesn't mean that a language having these features is automatically suited for such applications either...
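    To make "obscure errors" concrete (bounds checking is just my example here, the same argument applies to null or overflow checks), here's the kind of thing a language without built-in checks happily compiles, versus the checked version that fails loudly:

        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            int values[4] = {1, 2, 3, 4};
            int i = 7; /* pretend this came from user input */

            /* Without a built-in check this would be undefined behaviour:
             * it might crash, might silently corrupt adjacent memory, or
             * might "work" in testing and only blow up in the field.
             *
             *     values[i] = 42;
             */

            /* An explicit (or language-provided) check turns that into an
             * immediate, obvious failure instead. */
            if (i < 0 || i >= 4) {
                fprintf(stderr, "index %d out of range\n", i);
                return EXIT_FAILURE;
            }
            values[i] = 42;
            return 0;
        }

    A language that does this for you just means nobody has to remember to write the second half by hand every single time.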
  19. As mentioned by others, if you know how to program then the specific language is almost irrelevant. I wouldn't spend all my time coding in something ultra obscure like Zig if I were looking for a professional career, but as long as you have some background in a fairly popular language like any of the ones you mentioned you'll be fine.
  20. Maybe a larger version capable of longer outputs could make something that's at least coherent over a hundred pages or so, that's not theoretically impossible... although it may require more computing power than would make sense to give it right now. But either way I don't think the output would be a very good book unless it's directed by a human who has some idea of what a good book even is. By virtue of how these systems work you always get the most statistically probable output, which is a mediocre one by definition. You might get a serviceable dime-a-dozen young adult sci-fi novel but I doubt you'd get Dune.

    And as we discussed before, a program could also be perfectly written and extremely efficient while doing the wrong thing. My work involves designing and programming industrial machinery; I have colleagues whose job it is to make sure the finished machine works as intended. They aren't programmers or engineers by trade, but they do need to know the basics of programming. If they could reliably get decent code snippets from ChatGPT (not gonna happen, because automation languages are too obscure to be well supported by something like this, but still) it would certainly save them a lot of time, allowing them to focus on the electrical and mechanical side of the machine. That's how I would envision this being useful more than anything else.
  21. It's kind of misleading to say "in the past 2 years alone" when what we're seeing now is the result of a decade of work. The past couple of years is just the moment where it got good enough to make mainstream news. Assuming it will just keep improving at the same pace is also likely wishful thinking since, to get it this good, we pretty much fed it the entirety of the internet; getting significantly more data will be really hard.

    Car plants still employ human workers.

    It doesn't really matter what he does or doesn't know, because he'd say this either way. Even if he knew for a fact this was a bubble about to burst at any moment, he'd still have a vested interest in keeping the hype up a bit longer so he could cash out.

    Right now GPT isn't doing any computer science; at best it's doing code monkey work, and only on small assignments. Even assuming a fast pace of improvement, we're currently not even close to CS becoming irrelevant. Consider writers: ChatGPT can write some pretty convincing text that is also grammatically correct (most of the time), which is more than can be said about its programming performance. And yet writing a good novel or article is not just about writing correct and coherent English. Maybe we can be rid of the busywork involved in actually typing out the prose or code, but there's no indication a GPT will be able to autonomously devise a good story or a well structured code base given just a vague description of what a system should do.
  22. I mean... saying he has a vested interest in saying this would be a massive understatement, so I wouldn't really take his word for it. What's next, Apple telling you to stop buying computers because iPads are the future? (oh wait)

    There's a good reason we don't use human language for programming... and it's not because we've previously been incapable of translating, say, English to machine code. It's because human language is, by nature, ambiguous and contextual in a way that programming languages aren't. Even assuming that an LLM will one day be capable of producing correct code that is thousands of lines long with no errors, which is a bold assumption imo, people who know nothing of computer science won't be able to describe accurately what they even want. Heck, they won't even really know what they want most of the time. The purpose of studying computer science isn't to learn a programming language, most people can manage the basics of that in a couple weeks; it's about knowing what a system should do, and how it should behave to achieve that.

    Now, some fields of computer science may well be made obsolete by modern high performance LLMs; for example, natural language processing has been a field unto itself (keep in mind, voice recognition and voice-based "assistants" like Siri or Alexa did not use an LLM at launch) and could become irrelevant if AI based systems are able to solve the root problem. That just means those computer scientists can move on to work on other problems that LLMs have no relevance to.

    Well, that's what they'd have you believe. I guess in theory it's not completely impossible... but even if it were to happen, and you could get a tailor made CNN for your specific problem just by asking an LLM for what you need, it wouldn't mean CS is dead as a field, for the reasons I detailed above. This kinda reads like a calculator manufacturer saying kids shouldn't learn math anymore because calculators have made the field obsolete.
  23. Once again you're looking at things that, as a beginner, should be completely irrelevant to you. Adding debugger hooks will not impact the performance of your 100 lines of code in any meaningful way, and besides, you can just build it in the "release" configuration if you don't want them.

    If you prefer Vim there are multiple VS plugins that emulate its behavior. Or you could just use Vim if you have no use for the features VS offers... using GCC or MSVC is completely inconsequential when you are dealing with small, beginner-level C programs that are not meant to be distributed to anyone other than yourself.

    What does this mean? Do you know what the differences between these compilers are, or are you just going off vibes and hearsay?

    Have you tried looking at the official documentation? Many programming questions, especially ones as basic as "how do I use X", can be answered by just reading the docs, rather than immediately posting the question on a forum where someone will go and read them for you... https://learn.microsoft.com/en-us/cpp/build/walkthrough-compile-a-c-program-on-the-command-line?view=msvc-170 tl;dr: cl hello.c
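    For reference, the whole exercise really is this small (the file name is just an example):

        /* hello.c */
        #include <stdio.h>

        int main(void)
        {
            printf("Hello, world!\n");
            return 0;
        }

        /* MSVC (from a "Developer Command Prompt"):  cl hello.c
         * GCC:                                       gcc hello.c -o hello
         * Either way you get a working executable; at this scale the choice
         * of compiler genuinely does not matter. */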
  24. Fingers are not consistently shown in photos, whereas faces tend to be fully shown. Finger count is also something we can immediately spot as "wrong", whereas stuff like face wrinkles, for example, can vary a lot from person to person and isn't necessarily "right" or "wrong".
  25. Also consider how you'd generate that length number in the first place... you'd have to walk through the string at least once to find the terminator, even if you never actually need to know the length of that string in your program, so what would be the point? QOL functionality like this only makes sense if you already have layers of abstraction well above what C generally offers.
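    A minimal sketch of what I mean, in case the "walk the string anyway" part isn't obvious (the pstring struct here is purely hypothetical, it's not something C actually provides):

        #include <stdio.h>
        #include <string.h>

        /* What C gives you: a pointer to bytes ending in '\0'. Finding the
         * length means walking the bytes until the terminator, which is
         * essentially what strlen() does. */
        size_t c_length(const char *s)
        {
            size_t n = 0;
            while (s[n] != '\0')
                n++;
            return n;
        }

        /* Hypothetical length-prefixed string. To fill in 'len' for a string
         * that arrives as a plain char pointer you still have to walk it once,
         * so you've paid the cost whether or not you ever use the length. */
        struct pstring {
            size_t      len;
            const char *data;
        };

        struct pstring pstring_from(const char *s)
        {
            struct pstring p = { strlen(s), s };
            return p;
        }

        int main(void)
        {
            struct pstring p = pstring_from("hello");
            printf("len=%zu data=%s\n", p.len, p.data);
            printf("c_length=%zu\n", c_length(p.data));
            return 0;
        }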