Everything posted by rsethc

  1. Thanks. I haven't tested for very long, but it seems to be stable at 3133 MHz so far. If stability only costs that small a decrease, I can live with it (I got this memory at a significantly reduced price for being open box - maybe this is the exact reason someone returned it). I'd still like PSU suggestions if you have any, as I plan to replace mine fairly soon regardless of the memory situation.
  2. Looking for suggestions on a good PSU to upgrade to, and also a bit of diagnosis help with stability problems. Yesterday I finally built an upgraded system for gaming and code compilation, with some old and some new parts:
- AMD Ryzen 3900X
- ASRock X570 Taichi
- G.Skill Trident Z RGB 2X32GB 3200 MHz
- (old) Nvidia 750 Ti
- (old) 2X 7200 RPM 3.5" spinning rust
- (old) 1X 5400 RPM 2.5" laptop spinning rust
- (old) 1X SATA SSD
- 2X Samsung 980 M.2 SSDs
- (old) Corsair CX600M
After I finished building it, the system ran, but I had trouble getting the memory to work at its rated 3200 MHz. The XMP profile didn't work, and I thought I had solved it by raising the memory voltage from 1.35 V to 1.38 V, since it then booted just fine. But while playing some BeamNG.drive with a bunch of AI cars (not the most scientific test, but it was pretty consistent), the game would crash after a few minutes, and after a few game crashes I'd get a BSOD with a memory-related error. Assuming that memory errors meant the memory overclock wasn't stable, I increased the DRAM voltage, increased the SOC voltage a lot too, and still got the exact same behavior. I did run across this article: https://www.ifixit.com/Wiki/Troubleshooting_Computer_Memory
So before the recommendations for a new PSU (I plan to get one anyway, to be able to upgrade my GPU once the market is no longer experiencing a resonance cascade): has anyone personally had or seen a build where memory issues were happening, but swapping the PSU solved them? My PSU is old - I've had it since day one of my Bulldozer board and CPU, something like 7 years ago - but it never seemed to cause similar problems in that system. Could it be that the new parts draw enough extra power that the problem only shows up now?
Second question: what PSU should I get to comfortably support up to the following as I continue to improve the system? These aren't all parts I plan to buy, but I want to know I can upgrade to them without worrying about stability. (I went X570 for a similar reason - I don't want to compromise on PCIe 4.0 capability.)
- AMD Ryzen 5950X
- ASRock X570 Taichi
- 4X32GB DDR4-3200
- RTX 3080 or the top AMD offering
- 3X 7200 RPM spinning rust
- 3X M.2 SSDs
- 2X SATA SSDs
I don't know how much SSDs of either kind matter for power draw, but I listed them since I'm specifying mechanical hard disks (the moving parts in those seem demanding enough to potentially matter to the PSU choice). Any retailer suggestions are also welcome. I like Microcenter, but they don't appear to have a great PSU selection - anything over 1000 W is their house brand PowerSpec, which has poor reviews on their own site. Thanks for suggestions on any aspect of the above.
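For a rough, back-of-the-envelope sense of scale on that upgrade list (typical published figures, assumed rather than measured):
- Ryzen 5950X: 105 W TDP, up to roughly 142 W package power at stock
- RTX 3080: about 320 W board power, with brief transient spikes above that
- 4X DDR4 DIMMs: around 15 W
- 3X 3.5" disks: around 30 W spinning, more during spin-up
- 5X SSDs (M.2 + SATA): maybe 25 W at peak
- Motherboard, fans, USB: around 50 W
That lands in the vicinity of 580-600 W sustained, which is why 750-850 W units are the common recommendation for a 3080-class build - comfortable headroom without needing to go anywhere near 1000 W.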
  3. What I'm wondering about in terms of the "how do they work" part is more like: how do they access memory (do they have onboard memory too?), and how does the OS become aware of those cores? If everything has to go over PCIe routing, is it only certain motherboards that support them (like you'd have to have a certain chipset, such that those mining boards won't work)? Or is it something the OS has to have had support implemented for, or is it a driver...?
  4. I don't know any in-depth details about how Xeon Phis work, but my basic understanding is that they behave like normal CPUs you can add via PCIe add-in cards. Today I wondered about two things: if you bought a mining motherboard (you know, one of those with like 20 PCIe slots) and enough Xeon Phi cards to completely populate it, would it work at all? And if it did, and you ran some synthetic CPU benchmarks and productivity workloads on it (I'd expect gaming to completely suck), how would it compare to just a single 64-core Threadripper? Or if anyone wants to chip in with an explanation of how those cards worked in general, that would be pretty cool to know as well.
  5. What about just caching the files on the editors' machines? Instead of one centralized cache, each workstation would have its own, and you would also have less network traffic. I'm not sure whether Windows supports this natively, but there may be third-party software for it. The idea: when an application opens a file in the cache directory and the cached copy doesn't exist (or its last-modified timestamp doesn't match the source's), Windows (or the third-party software) would stream the file in - either in its entirety or in fixed-size chunks - into the cache, while also serving it to the file handle doing the reading. The point of "in its entirety" versus "chunks" is read-ahead: the application wouldn't have to explicitly request a small region before the data likely to be read next (nearby the requested region) gets loaded too. Although I'm not sure whether Windows, or the file APIs the application uses, already prefetch and buffer extra data themselves. A minimal sketch of the staleness check is below.
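Here's that check sketched in C, assuming the share is reachable as a locally mounted path (cached_open, copy_file, and the chunk size are made-up names and numbers for illustration):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <utime.h>

    #define CHUNK (1 << 20)  /* stream in 1 MiB chunks */

    static int copy_file(const char *src, const char *dst) {
        FILE *in = fopen(src, "rb");
        if (in == NULL) return -1;
        FILE *out = fopen(dst, "wb");
        if (out == NULL) { fclose(in); return -1; }
        char *buf = malloc(CHUNK);
        int err = (buf == NULL);
        size_t n;
        while (!err && (n = fread(buf, 1, CHUNK, in)) > 0)
            err = fwrite(buf, 1, n, out) != n;
        free(buf);
        fclose(in);
        if (fclose(out) != 0) err = 1;  /* catch delayed write errors */
        return err ? -1 : 0;
    }

    /* Serve a read handle from the local cache, refreshing it first if the
     * cached copy is missing or its timestamp/size disagree with the source. */
    FILE *cached_open(const char *remote, const char *cached) {
        struct stat r, c;
        if (stat(remote, &r) != 0) return NULL;  /* source unreachable */
        int stale = stat(cached, &c) != 0        /* not cached yet */
                 || c.st_mtime != r.st_mtime     /* last-modified mismatch */
                 || c.st_size  != r.st_size;
        if (stale) {
            if (copy_file(remote, cached) != 0) return NULL;
            struct utimbuf t = { r.st_atime, r.st_mtime };
            utime(cached, &t);  /* stamp the copy so the next check matches */
        }
        return fopen(cached, "rb");
    }

A real implementation would hook file opens system-wide (on Windows, something like a filesystem filter driver) rather than wrapping fopen, but the timestamp-and-size comparison is the core of the idea.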
  6. By the way here are the mounting options (they used to work fine before this, but I thought I might as well post them).
  7. Now I have a much larger concern. After reinstalling Samba from the repositories like a normal person, I am unable to mount one of my disks... what the heck could this mean?
  8. I'll leave the compiled-from-source stuff alone for now. I tried that command (sudo update-rc.d samba defaults) and didn't see any output (I'm not sure what it does anyway), then opened Synaptic and reinstalled the samba packages. All of them appear to be version "2:4.3.11+dfsg+0ubuntu0.16.04.12".
  9. I have an ancient machine I use as a NAS, with various old hard drives in it and a Pentium 4 CPU. It has a 100 Mbit NIC onboard, so file transfers to it are of course significantly slower than transfers between the rest of the machines on my old network (I have three other machines I use as NAS, each with a Gigabit NIC). I also had a bunch of old Ethernet NICs sitting around, salvaged from other old computers, so very recently it occurred to me that I might be able to speed up transfers in some situations by adding some of them to the ancient server. Currently the server has two 100 Mbit NICs: one integrated into the motherboard and one in a PCI slot. It runs Ubuntu 16.04.3, and I've been using the Samba version from the Ubuntu repositories.
At first I tried to set up NIC teaming, but that didn't go well (and since I manage the server remotely, screwing up the network configuration means having to go attach a keyboard, mouse, and monitor). Then I found out that Windows 10 (and Server 2012) support a technology called SMB Multichannel, which makes use of multiple NICs by splitting communication across multiple connections. Samba supports this feature now, but the version in the Ubuntu repositories is older than the first version that implemented it. (By the way, I took a look at FreeNAS, but nowadays it's 64-bit only and requires far more performance and memory than this machine has. No dice.)
So I grabbed the latest stable Samba source, extracted it, cd'ed into it, and ran:
sudo ./configure
over and over and over again, resolving every obstacle I ran into by installing whatever package it complained about, then:
sudo make
and finally:
sudo make install
But I have no idea how to do the final thing: installing the freshly compiled Samba as a (systemd?) service so that it runs at all times and I can use it normally. Basically, I need help figuring out how to do the same thing the package manager would do automatically.
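For what it's worth, the usual mechanism is a unit file under /etc/systemd/system. A minimal sketch, assuming the build used Samba's default source prefix of /usr/local/samba (the paths and PID file location need checking against your actual ./configure output):

    # /etc/systemd/system/smbd.service (illustrative; adjust paths to your install)
    [Unit]
    Description=Samba SMB daemon (built from source)
    After=network.target

    [Service]
    Type=forking
    # -D tells smbd to daemonize; the default source-build prefix is /usr/local/samba
    ExecStart=/usr/local/samba/sbin/smbd -D
    PIDFile=/usr/local/samba/var/run/smbd.pid

    [Install]
    WantedBy=multi-user.target

After saving it, sudo systemctl daemon-reload followed by sudo systemctl enable --now smbd should register and start it at boot; nmbd (for NetBIOS name service) would get a second, near-identical unit.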
  10. cfosSpeed works fine for me in Windows 10, as it did in Windows 7, but I've had to uninstall and reinstall it several times: it broke when I upgraded from Windows 7 to Windows 10, and the same thing happened with major Windows updates too. I suspect the updates break the drivers the program installs to intercept network traffic.
  11. The results are consistent between both so I don't think it's an issue much farther out than my local dealings with the ISP.
  12. Alright, I power-cycled my modem by unplugging not only its power cable but also the coax cable, and made sure to leave it off for a generous 20 seconds. No dice, I did another speed test and everything is the same as before I did that.
  13. Actually, I unplugged the modem accidentally while I was setting the new router up, but the coax cable was plugged in the whole time. I might as well try disconnecting everything, in case the coax cable itself can provide power. I assume 10 seconds fully disconnected from power and coax should be good.
  14. The QoS features were already disabled, though.
  15. I did a speed test with the program uninstalled and, as I had expected, there was no difference. It wouldn't have made sense for the program to be the cause anyway, since my upload speed *dropped* between routers at first - from 6 Mb/s to 1.5-2 Mb/s - without any change to the program (though, for the sake of not ruling anything out, I admit it's conceivable the program detected a hardware change and changed its behavior as a result; I highly doubt that's what happened).
  16. Haven't tested wired connection with my laptop yet. I am uninstalling the network manager just to see if it changes anything but I doubt that will help. cfosSpeed is pretty good actually, has done a great job for me in the past. It's definitely not a scam, but I haven't paid for it as it came as a sort of indefinite free trial with the purchase of my motherboard. Unfortunately if I want updates then I do have to actually buy it. I will do another speed test shortly with this program uninstalled, and if I get dramatically better results then it may be that the program needs to un-learn its previous limits.
  17. That doesn't change anything.
  18. The thing about the QoS stuff is that it's all disabled (at least according to the configuration pages - early on I even turned it on, saved, rebooted, then turned it off, saved, and rebooted again, and it made no difference). And yes, I updated the firmware quite early on as well, though the newer firmware I upgraded to was from 2014.
  19. I got a new D-Link router (DIR-855L) and replaced my rather old Linksys WRT54G with it, thinking the old router was the reason I haven't been able to take advantage of an internet connection advertised as capable of 100 Mb/s down - not the cable modem, which is quite new and rated for 343 Mb/s. I also replaced a short Cat 5 (not 5e) cable between the modem and router with one rated as Cat 6. However, I did not see any improvement in internet speeds - actually the opposite: instead of 30 Mb/s down and 6 Mb/s up, speedtest.net revealed I now had only 2 Mb/s up.

To see whether the issue was WAN or LAN, I watched The WAN Show (yes, bad pun), got a huge file on my PC (approx. 15 GB), and stuck it in a directory accessible by my web server. Then I tried downloading it from my domain (resolving to my public IP) - http://glassblade-games.com/video.mp4 - and then from a local IP within my router's subnet - http://192.168.0.100/video.mp4. As you'd expect, the video streamed far better over the LAN address than over the public address. (That file was only temporary - if you try to download it, you'll just get a 404 Not Found error. Don't worry, the content isn't important; it was just a file big enough to max out my bandwidth for a sustained period of time.) In this screenshot you can see that the local-address download is quite quick. I upgraded to this router partly because it supports Gigabit Ethernet, whereas my previous router only supported Fast Ethernet. The local-address download already goes faster than would be possible over a 100 Mb/s connection, and it would probably be even faster, but my mechanical hard disk can't really keep up, I don't own an SSD, and creating a RAM disk would be unnecessary for this test - not to mention it could only be maybe two GB before it started competing with my billion open Chrome tabs.

Then I tried speedtest.net from a WiFi-connected laptop and got more favorable upload speeds, closer to what I remembered getting with my previous router. I repeated the LAN vs. WAN test there: the local-address download seemed limited only by WiFi signal strength, while the public-address download was stuck at 2 Mb/s, as I thought it would be.

I then read some scary things on tech support forums where people said the DIR-855L was garbage, blamed it on the device being made by D-Link, and either talked about fixing the issue by upgrading to a newer model such as the DIR-860L, or just abandoned the threads entirely. At this point I was concerned I'd bought a piece of junk with some kind of crippling problem that made it worse than the device it replaced, which would be quite sad. I returned to my desktop to investigate my router settings again (I'd done plenty of poking by this point), but started with a few speed tests to get a baseline before changing anything. I noticed that between the first and second tests, the upload speed went up slightly, though it was still quite anemic. So I kept running tests to see if the trend would continue, and though I initially expected it not to, a new hypothesis started to form as my reported upload speed climbed consistently, test after test, until it was over 6 Mb/s (back where I started). Was my upload speed just really bad at first because of a break-in period?

Was the router slowly raising a congestion-control throttle so it wouldn't overwhelm my cable modem, or maybe the cable network it was connected to? A sort of long-term equivalent of TCP's congestion control, wherein a TCP connection starts out sending data at a low rate and gradually ramps up until the path between the endpoints is saturated?

As I was writing this, I decided to run the test again to get a screenshot showing the difference in download speeds. I just noticed that, with the downloads left running, my network manager (cfosSpeed - great program!) is now showing outbound traffic of 1.4 MB/s (bytes) - OK, now it's crept up to 1.6 MB/s and keeps increasing. The crazy thing is that I've never seen it report higher than about 800 KB/s up, which makes me wonder not just whether my new router is "learning" and adapting to its connection environment, but also whether my old router was artificially set to operate at whatever speed its engineers thought was reasonable and wouldn't overwhelm people's modems. Then again, it's also possible the older device went through a "learning" period of its own when it was new - that was a long time ago. But still, why would the router need to learn the wired-LAN outbound speed, yet provide my connection's full speed to WiFi-connected devices from the start?

And then there's this: while running the many speed tests that seemed to be training my router to send more down the line at a time instead of holding back so much, I occasionally got a *massively faster* download result, much closer to the advertised 100 Mb/s than the 30 Mb/s I've been used to for so long and which the vast majority of tests reported. There were about 50 tests in total, so as a percentage, the 30 Mb/s figure came up, very consistently, about 95-98% of the time. It's weird that the fast results would happen seemingly at random and then the router would just retreat back down to the slower speed. That doesn't seem like the way a router should be designed to "learn", if that is indeed what's going on - the intervals are way too far apart. If I can get 85 Mb/s down but not, say, 100, I'd much rather have 80 Mb/s reliably than 30 just because the router is afraid of stressing the cable connection. Which makes me doubt this is part of the "learning" thing at all.

I didn't write this up and post it here because I want or expect help with the issue, but because I found it interesting - so don't feel obligated to try to solve it, though if you've had a similar experience I'd be happy to hear about it.
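To illustrate the TCP mechanism I'm comparing this to - not something the router is confirmed to be doing - here's a toy simulation of slow start followed by congestion avoidance, in C (the threshold value is made up for the demo):

    #include <stdio.h>

    int main(void) {
        int cwnd = 1;       /* congestion window, in segments */
        int ssthresh = 16;  /* slow-start threshold (arbitrary for the demo) */
        for (int rtt = 1; rtt <= 10; rtt++) {
            printf("RTT %2d: cwnd = %d segments\n", rtt, cwnd);
            if (cwnd < ssthresh)
                cwnd *= 2;  /* slow start: double every round trip */
            else
                cwnd += 1;  /* congestion avoidance: grow linearly */
        }
        return 0;
    }

Over ten round trips that prints 1, 2, 4, 8, 16, then 17, 18, ... - a fast exponential ramp followed by cautious linear growth, which is roughly the pattern I'd expect if a router were doing something analogous over a much longer timescale.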
  20. I recommend you store strings as an (unsigned numerical type) length, EXCLUDING the NULL character, followed by the string's characters. The reason I recommend against including the NULL character in the file at all is that it's a bad habit to expect external sources of information - a file, a network peer, or... a file from a network peer... - to play by the rules. It's a bad habit, and it can contribute to security vulnerabilities if you're not careful. Say the string in the file is reported to have a length of 10, and the next contents of the file are "Legit Text Followed by Some Malicious Code". If your code inspects the length value, allocates that many bytes, and then reads bytes into the allocated buffer until it finds a NULL character (a very sloppy implementation, but this is for the sake of example - I'm emphasizing why it's a bad habit, not claiming it's totally going to open your computer up to scary hackers or something ridiculous like that), then you have a "buffer overrun": the stuff after "Legit Text" is written into arbitrary memory the computer might be using to store instructions, and shortly afterwards the computer might execute "Malicious Code" that has overwritten actual code. For a user-mode application, the OS and hardware would likely catch the overflow, terminate the program, and raise a segmentation fault, but in something like a kernel-mode driver this could go unchecked, allowing the malicious code to brick your computer, send your information to an outside source, etc.
I want to be clear that this is not something you really need to worry about while just messing around with files and learning how things work, but if you ever plan to get into device drivers or operating system development, it is very important to consider. Buffer overflows are one of the most common vulnerabilities exploited by computer worms to propagate to other computers through poorly programmed networking code. Even outside of security concerns, it's worth considering malformed input if you're making anything that needs to resist crashing on unexpected data. You can tolerate a game crashing on your home computer if something goes wrong, but if you're developing an application for a server, you definitely don't want it randomly crashing when a client sends garbled data, whether intentionally for malicious purposes or unintentionally due to some sort of malfunction.
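A minimal sketch of that format in C - the uint32_t width, the sanity cap, and the function names are my choices for illustration, not part of any standard:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define MAX_STR_LEN (64u * 1024 * 1024)  /* reject absurd lengths up front */

    /* Write: length first (EXCLUDING the NUL), then the raw characters. */
    int write_string(FILE *f, const char *s) {
        uint32_t len = (uint32_t)strlen(s);
        if (fwrite(&len, sizeof len, 1, f) != 1) return -1;
        if (len != 0 && fwrite(s, 1, len, f) != len) return -1;
        return 0;
    }

    /* Read: validate the length before trusting it, and never scan the file
     * for a terminator. Returns a malloc'ed NUL-terminated string or NULL.
     * (Assumes writer and reader share endianness.) */
    char *read_string(FILE *f) {
        uint32_t len;
        if (fread(&len, sizeof len, 1, f) != 1) return NULL;
        if (len > MAX_STR_LEN) return NULL;  /* malformed or malicious input */
        char *s = malloc((size_t)len + 1);
        if (s == NULL) return NULL;
        if (fread(s, 1, len, f) != len) { free(s); return NULL; }
        s[len] = '\0';  /* the terminator lives only in memory, not in the file */
        return s;
    }

The two habits this enforces: the NUL exists only in memory, and the length read from the file is checked before a single byte is allocated or copied.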
  21. I imagine most modern text editors are smart enough to convert applicable characters to their ASCII counterparts. Notepad++ has an Encoding menu, for example: you'd select "Convert to ANSI", save it, and *boom* - done.
  22. Wow I'm stupid, I just now realized that there was a video you were linking to... Thanks for posting it, it helps clear up some of the confusion. But I am still quite curious as to the actual challenge given to the applicants.
  23. I'm not sure exactly what the project in question even is though. Maybe it's more akin to a Google Code Jam problem, designed to test the candidate's abilities to come up with efficient and reliable algorithms. If anyone - presumably applicants or someone already working at LMG - knows what the actual task was (and you're not under an NDA - don't violate that if you are!) I'm really curious to hear the details of what you were asked to do.
  24. Yeah that makes sense, I didn't think about that particular aspect.
  25. GTX 1050s at my local Microcenter are priced at about $120, so the "no more than $100" seems to make a lot of sense. By the way, yes, I do have the original packaging. Do people really care about that for a used card, though? I mean other than as a bargaining point, like "if I haggle, I can bring up the original packaging and knock a few bucks off the asking price".