Everything posted by lowstrife

  1. Nope - tried enabling that. No change. I find it interesting how that setting, in parentheses, has the name of my 144hz monitor and not my other two secondary 60hz monitors. I wonder if that setting only applies to the primary display.
  2. I've been searching\googling for years trying to figure this out. Tons of topics, tons of people commenting about it. Year of our lord 2023, and this is still a bug within G-Sync, and I can't believe it still exists. x470, 3900x, RTX 3070, 64GB, tons of SSD's, a powerful system. 144hz primary display, two 60hz secondary displays.

Issue: When playing a game on the primary display, any video stream playing on a 2nd monitor which is using GPU decode has its output framerate linked to the game framerate. This is fine if your game is getting 8 million FPS. But if you play endgame strategy games, or hit a loading screen, your FPS tanks to 15, and so does your video's FPS. This also becomes a problem with Youtube, because it automatically downgrades the quality to 360p because you're dropping half of the frames. The games play perfectly fine & are completely unaffected by video playback. G-Sync functions normally, as it should.

Remedies: Well, I've tried everything. Obviously drivers. Fullscreen\windowed. Disabling hardware acceleration only works in a web browser, and it breaks a ton of other functionality within the browser, which is not acceptable. The problem happens in any video playback application. I came across a reddit thread that used Nvidia Profile Inspector to force multi-monitor performance mode. Disabling Windows game mode. Making sure power profiles are at max. None of this works.

The only solution is to disable G-Sync. If I do that, video playback is perfect, it never drops frames. Which fucking sucks, having spent the money on a display which supports it and not being able to use it. Am I missing a solution here that has been found? I've searched to hell and high water for YEARS; maybe my search terms aren't finding a solution on reddit or any of the other forums.
  3. I've used Craigslist\FB Marketplace for the last decade to buy\sell used computer hardware (among other things, even two cars). Never once had a problem. 99% of the problems can be filtered out by being smart during the messaging\selection process. Don't use your phone number on Craigslist; work through their anonymous email relay only. Police station parking lots are fine, I guess, but I have always done mall\store parking lots. Plenty of space, plenty of people. If it's a particularly high value item, I bring a friend. In-person transactions are sometimes cash, but these days it really is a Cashapp\Venmo game, so you'll need to have (preferably both) set up. There are scam risks using these apps, but they are pretty low, and someone isn't going to be running them on a $180 GPU transaction. Above about $800-1000, I would start wanting more secure\verifiable payment transfers like cash or a cashier's check from a bank. Ebay has a broader marketplace, but its own set of risks with shipment and the buyer protections people can take advantage of. IMO it's more risky.
  4. Interesting. There is still a separate cached version within the search results, even though the actual video has been updated.
  5. There are also embargoes, where a video is done, uploaded, and rendered on the platform, but not "published", or is marked private until a specified time. Or videos which had been expected to be taken down. Plenty of tricky subjects. They're fortunate there seems to not really have been anything amiss with what was public for a time period. It could have caused some serious problems had an embargoed review leaked through no fault of their own. I can only imagine the legal shitshow that would entail. Also really goes to show how important thumbnails are for a channel and the "clickability" of a video. Seeing the a\b difference is wild.
  6. I completely disagree. No sane person would knowingly sign off on re-publishing those scam links to get 1 more hour of adsense revenue. I find it highly unlikely Linus would knowingly agree to that. I suspect what happened is the recovery spat out what it spat out. Then the guy from Youtube dumped it onto LTT's lap "here you go" and the LTT team had to scramble to fix it.
  7. Yeah, that's a fair point. There could be a large amount of technical debt in how the backend systems work. Youtube accounts, Google accounts with Youtube accounts, video editing, flags, everything. Imagine how they would have had to bodge in the merging of Google & Youtube accounts into one unified system. Two probably completely different db systems, merged live, seamlessly to the user. Still though, you would think after 4 years of dealing with disaster recoveries someone could come up with a process to un-fuck it...
  8. The backup itself is super easy. The video files themselves never actually get deleted. "Deleting" things just edits the database and various flags around the videos, which is more or less just a line of text in some arbitrary file, aka metadata. The entire LTT empire is a million video files, organized by maybe a few megabytes of text including all the descriptions, stats, flags, etc. The vast majority of the data is within these video files, not the metadata, so backups of the metadata are super easy. "Backups" are done by saving snapshots of these database files. The videos themselves never get modified or copied; they probably have a separate redundancy process within the CDN system of Youtube itself. Which is why it's so shocking to me that the recovery process is this FUBAR. It's unbelievable.
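The "flags, not files" point above can be shown with a toy example. This is a hypothetical schema (YouTube's real one isn't public): since visibility lives purely in metadata, a botched restore that rewrites flags can expose private videos without touching a single video file.

```python
# Toy metadata store -- hypothetical schema, NOT YouTube's actual one.
# "Deleting" and "restoring" only touch flags; video files are never moved.
videos = {
    "vid001": {"title": "Main channel upload", "visibility": "public"},
    "vid002": {"title": "Embargoed review",    "visibility": "private"},
}

def delete(video_id):
    # Deletion is just a metadata edit -- no file I/O at all.
    videos[video_id]["visibility"] = "deleted"

def botched_restore():
    # A careless recovery that resets every flag to "public":
    for meta in videos.values():
        meta["visibility"] = "public"

delete("vid001")
botched_restore()
print(videos["vid002"]["visibility"])  # the private video is now "public"
```

Which is exactly the failure mode described in the posts below: the restore brought everything back, but with the wrong flags.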
  9. Like I said, unbelievable that this is the process for a problem which has been ongoing for years. I first started seeing this scam in 2019. The Youtuber Marcostyle got his channel hijacked in exactly this same way. I can only imagine the stress the team is under right now seeing this back catalogue exposed to the public, with, apparently, no ability on their end to fix it (because they would have started privating things again). What a disaster.
  10. Thankfully they are all from a million years ago. God forbid something more recent becomes available. Apparently some genius got flags mixed up and all the private videos are public... Can't believe this is what the recovery process is. I've seen it happen before on other channels, still can't believe it is this much of a clusterfuck. And why the channel is public during this.
  11. Computer of Theseus. Bit by bit it's rebuilt over time as I upgrade the weakest link. Some components are 7 years old. The latest upgrade was 1080 --> 3070, bronze PSU --> titanium.

Tech:
- 3900x stock, because stability > speed
- 64GB 2666 stock, because stability > speed
- x470 Taichi
- 3070
- Noctua gigantic big-ass-tower-of-cooling-LTT™ Edition™
- Phanteks case
- Seasonic titanium 1000W
- 4 spinners + 3 SSD's, 27TB total
- 3x 27'', one of them 144hz
- BIC Acoustech sub, Edifier speakers, custom stands
- Schiit hardware and HD700's
- Some other misc stuff, including $100 in extra-long cables for the monitors, interfaces for the microphone, and a 14-port power strip under the desk to power everything

Quality of life stuff:
- Desktop is a wooden door I got from a resale shop for $5. Desk frame is an $800 standing desk rated to 500lb, because a) standard desks don't go high enough, b) the setup weighs 200+ lb and I needed the capacity
- A 2 inch section of oak handrail I cut from the stock at Home Depot, drilled into the side of my desk to act as a headphone holder
- Best aesthetic decision I ever made was getting two $10 lamps from Amazon and throwing the lowest-wattage bulbs in them behind the screens. Fills out the room with a beautiful soft cast. Everyone should do this.
- Three cheap Arctic monitor arms for like $20 each. They actually work, unlike the triple-arm scams that cost $200-500 and don't. Highly recommend mounting monitors on any setup.
- Take the arms off your chair and use your desktop as armrests if you're a tall person. It just works better.
- Sound treatment in the room. I have a nasty 38Hz room resonance that I will never get rid of because physics, but the cheap treatment I did do makes for huge improvements to audio "tightness" in the listening position. Controlling those first reflections is so important.
  12. Then why not do both? If a formal warranty doesn't actually make a difference - why not just include it as a formality? People feel like they aren't being told WHY it isn't being included, if, according to Linus, it doesn't make a difference and the end result is the same. If that were true, you would just include it anyway, especially when pressed about it. But because it's completely off the table, there must be some "Cost" attached to it that is potentially more expensive than the honor system he's currently saying is the policy. EDIT:
  13. It's got to be a widespread systemic issue for this to happen to a tech journalist like this. The probabilities otherwise are just too remote.
  14. Bump for more help if possible. It seems that Seagate Exos drives commonly have SMART issues because they report weird values as stock? https://www.truenas.com/community/threads/scary-smart-values-on-new-seagate-enterprise-drives.58973/

When I first made this thread, one of my discs (DA5) had attribute 10 Spin_Retry_Count failing. This actually triggered my array to be in a degraded state and at one point nuked my pool, forcing me to recover it (I forget what I did, I think I just remounted it). Now, 3 weeks later, disc DA5 has its value at 100 and is healthy, yet disc DA1 has the same exact attribute now being reported as 95.

SCT Data Table supported.
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   081   064   044    Pre-fail  Always       -       139807616
  3 Spin_Up_Time            0x0003   093   090   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       22
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       160
  7 Seek_Error_Rate         0x000f   076   060   045    Pre-fail  Always       -       40526476
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       246
 10 Spin_Retry_Count        0x0013   095   095   097    Pre-fail  Always   FAILING_NOW 0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       22
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   098   098   000    Old_age   Always       -       2
188 Command_Timeout         0x0032   100   097   000    Old_age   Always       -       3 3 4
189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   055   045   040    Old_age   Always       -       45 (Min/Max 23/48)
191 G-Sense_Error_Rate      0x0032   012   012   000    Old_age   Always       -       176073
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       9
193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       190
194 Temperature_Celsius     0x0022   045   055   000    Old_age   Always       -       45 (0 20 0 0 0)
195 Hardware_ECC_Recovered  0x001a   081   064   000    Old_age   Always       -       139807616
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0

Something to be worried about? RMA this disc? RMA both?
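For what it's worth, the FAILING_NOW flag follows mechanically from how normalized SMART attributes work: an attribute is reported as failing when its normalized VALUE drops to or below its THRESH. A small sketch of that comparison (my reading of the smartctl output semantics; verify against the smartmontools docs):

```python
# Judge a SMART attribute the way smartctl's WHEN_FAILED column does
# (my understanding of the semantics -- not a vendor-blessed rule).
def smart_state(value: int, thresh: int, prefail: bool) -> str:
    if value <= thresh:
        return "FAILING_NOW" if prefail else "failed"
    return "ok"

# Numbers taken from the dump above:
print(smart_state(95, 97, True))   # Spin_Retry_Count: 95 <= 97 -> FAILING_NOW
print(smart_state(100, 10, True))  # Reallocated_Sector_Ct: well above threshold -> ok
print(smart_state(81, 44, True))   # Raw_Read_Error_Rate: scary raw value, still ok
```

Note the Spin_Retry_Count threshold of 097 is unusually high, which is why a normalized value of 095 trips it even though the raw value is 0 -- consistent with the "scary stock values" behavior discussed in the TrueNAS thread linked above.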
  15. Oh dear. Well, I rebooted the system to give this another whirl to diagnose, and this was on the screen: I then tried running this code to get the firmware version for my drives: "I used 'dmesg | grep mpr' to find the driver and firmware versions (SAS3). I think the previous SAS2 version used the mps driver, so 'dmesg | grep mps'." And the system completely died. Frozen - requires a hard reboot. I looked at the ebay listing, and both the description and many of the reviews say their cards came in IT mode.

EDIT: finally got the firmware printout, and yes, the card is in IT mode:

Controller Number           : 0
Controller                  : SAS2308_2(D1)
PCI Address                 : 00:09:00:00
NVDATA Version (Default)    : 14.01.00.06
NVDATA Version (Persistent) : 14.01.00.06
Firmware Product ID         : 0x2214 (IT)
Firmware Version            : 20.00.06.00
NVDATA Vendor               : LSI
NVDATA Product ID           : SAS9207-8i
BIOS Version                : 07.39.02.00

Also TIL they sold me a 9207 and not a 9217.
  16. If anyone finds this through google, I did the good thing any person should do: I FIXED THE ISSUE AND I'M HERE TO REPORT. I started getting additional boot errors where the computer would not POST when I added\removed PCI or SATA devices. And then devices stopped getting recognized. Replaced the motherboard, and it's all fixed. All of it. The motherboard shit the bed, and I think it had been dying since I first built this server.

1700x
32GB
6x6TB Exos discs ($140 each for Exos drives? What a deal)
LSI 9217 HBA (EDIT: card is a 9207-8i)
80GB boot SSD

Installed onto the SSD, booted, configured, set up the pools, set up SMB, everything went great. Ran some tests for a few days, speeds were great. Then problems started.

1) The server constantly crashes - I'm getting maybe a day or two of uptime before it completely blanks out. It doesn't reboot, it just dies. Screen is blank, SSH and the webui disconnect, the keyboard becomes unresponsive (numlock light is always off), and the only way to recover the system is to hit the power switch.

2) Multiple discs giving errors. When the server is online, I can't run SMART tests. The page just endlessly loads and doesn't give me the results. The only way I even know drives are failing SMART is that the monitor I have hooked up to the machine gave me the above readouts, which I took pictures of. The results are just endlessly loading:

I've done the usual checks for hardware issues. Swapped some cables around. The components worked fine the last time they were used - and again, the system worked fine for a few days after first building it. Compiled fine, transferred 1TB of data as a test just fine. I could really use some help troubleshooting from this point. There's no data on the system now to lose, so I can do pretty much anything to it.
  17. So I'm in the final stages of building my first home server. Hardware is my old Ryzen system:

1700x
32GB
6x6TB Exos discs ($140 each for Exos drives? What a deal)
LSI 9217 HBA
80GB boot SSD

(side note 1: I know the system only has gigabit connectivity to my desktop, but that's fine. I can always upgrade to 2.5G or 10G connectivity down the road)
(side note 1.1: My main desktop has NVMe SSD's, and I would expect the RAID array to bench around 3-5G with these drives, so I could take advantage of >1G connectivity)
(side note 2: My main desktop is running Windows)

Goals:
- New fileserver for my data collection, which is currently ~7TB out of 7.3TB. This currently exists on two mirrored 8TB discs on my primary machine.
- Host to run t*rrent clients
- Host TrueNAS Samba share for the data it contains
- Host for Plex (optional, I can also run Plex off of my desktop, which accesses this NAS)
- Host for data replication of the ZFS pool data to my on-site backup, as my old 8TB discs will become backup discs (this can be done within TrueNAS, I believe?)
- Host for any future VM's I may want to deploy, considering the abundance of hardware. I want to start toying with a linux system and expanding capabilities. This may include:
  - Automated data collection from API's
  - Security camera host
  - Garage door opener host (à la Linus's most recent video)
  - VPN host so that I can access my home network from abroad

My main question is - should I set up TrueNAS within a VM? Would that make my goals\future expansion easier? Or should I run virtualization on top of TrueNAS? I've done some digging showing that TrueNAS can run within a VM, but adding complexity like this might not be the best idea for a beginner like me. Any input on how I should structure my system, and whether running TrueNAS bare metal would be able to achieve my goals?

I know the optimal setup is to have a NAS be a dedicated and unique system just for data storage, and all of your VM hosting and excess processing should be done on a separate bare metal box. I'm just curious about my options here. Many thanks, everyone.
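On side note 1.1, some napkin math (with assumed numbers: ~200 MB/s sequential per disk, 6-wide RAIDZ2 leaving 4 data disks) supports the idea that a 1GbE link, not the array, would be the bottleneck:

```python
# Back-of-envelope: array throughput vs. link speed. All figures are
# estimates: ~200 MB/s sequential per 7200rpm disk, RAIDZ2 = 2 parity disks.
per_disk_mb_s = 200
data_disks = 6 - 2                        # 6-wide RAIDZ2 -> 4 data disks
array_mb_s = per_disk_mb_s * data_disks   # best-case sequential: ~800 MB/s

gigabit_mb_s = 1000 / 8                   # 125 MB/s line rate, less with overhead
ten_gig_mb_s = 10000 / 8                  # 1250 MB/s

print(array_mb_s)                         # 800
print(array_mb_s > gigabit_mb_s)          # True: 1GbE is the bottleneck
print(array_mb_s > ten_gig_mb_s)          # False: 10GbE would not be saturated
```

So a 2.5G or 10G upgrade later would indeed buy real throughput, roughly matching the "3-5G" estimate in the side note.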
  18. Alright, thanks so much for the suggestions, everyone. I have one final question then: I need to prioritize the gigabit internet connection being stable and high performance. What router solutions should I be looking at that can handle this? Actual, real, gigabit duplex. It needs to be able to handle PPPoE. Obviously you can spend $800,000 on the Cisco solutions; is there anything out there that is more achievable on the prosumer or light commercial side of the spectrum? I think that using a dedicated SFP+ switch to handle the 10 gig internal LAN would be best, leaving the dedicated router to do dedicated router things in between the fileserver and the workstation. I will be using the Ryzen CPU & legacy hardware I have solely for the fileserver\VM host at this point, I believe.
  19. Alright, you gave a ton of insight, thank you. I just have a couple followups that I edited down:

1) This sounds like a pretty big no. I've seen... too many people recommending that you shouldn't virtualize these things now. Especially when we're talking about hardware passthrough, network bridging... That does sound really messy.

2) So you can't add more drives to an existing vdev (or at least RAIDZ-1 or RAIDZ-2). Once you set that up, it's locked in, and the only way to expand the pool is to create an entirely new vdev. Got it. This sounds like I am leaning toward UnRAID at this point, because it's easier to migrate and add additional storage in a... cleaner way.

I just have one final question at this point: the dedicated box for the router. Where should I go here? Most people on most normal connections can get away with a cheap-o Atom or Pentium legacy processor in a 10 year old Optiplex. However, I am going to need something quite high performance - as I explained above - but I've been having a really difficult time tracking down the best way to do this without just throwing expensive hardware at the problem. My initial conclusion isn't far from my original plan: a dedicated box running pfSense with an RJ45 add-in card with 4 1Gb ports. Does it make sense to run a router basically off PCIe add-in cards? And how well would this setup scale if I were to add 10Gb add-in cards in the future, so that the workstation <--> fileserver link can run at the full 10Gb speed? The pre-configured hardware from them has fairly low routing bandwidth once you start getting into 10Gb links, and you need a pretty beefy quad core chip to properly saturate a 10Gb link. Also, thanks for your input.

4) As seen above, I think I am leaning more toward UnRAID at this point.

5) I can understand using one if you need an HBA because you've run out of motherboard slots, but why would you need a hardware RAID card on a software UnRAID system?

6) I am using media files, so it will be max sequential reads. A single platter can saturate a 1 gig link most of the time, unless it's reading data on the inside of the disc. The 5-6 disc benchmarks I've seen usually land somewhere in the 250-500 MB\s range for sequential read\write speeds. Especially since I have local NVMe storage on the other end of this fileserver. But I think I will be going the dual-system approach: existing Ryzen 1700 hardware for the fileserver, new hardware for the pfSense router. I think you guys walked me back from the edge of trying to combine them onto one system.
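The vdev limitation in point 2 looks like this in practice. Pool and device names here are hypothetical, and this reflects ZFS as it stood at the time of the thread, when raidz vdevs could not be widened one disk at a time:

```shell
# Create a pool with a single 4-disk raidz2 vdev (hypothetical device names):
zpool create tank raidz2 da0 da1 da2 da3

# You cannot grow that vdev by attaching single disks the way you can a mirror.
# Expansion means adding a whole new vdev to the pool:
zpool add tank raidz2 da4 da5 da6 da7

# The pool now stripes across two raidz2 vdevs:
zpool status tank
```

This is exactly why expanding "two discs at a time" doesn't work with raidz, and why UnRAID's one-disk-at-a-time model appeals here.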
  20. Starting to run into bottlenecks\headaches on my "one computer to rule them all". I recently upgraded to 3rd gen Ryzen, so I have some spare hardware sitting around to address these problems.

My current solution + its problems:
- Gigabit fiber + cable backup operating in failover
- EdgeRouter Lite
- Ryzen 3900x & 64GB memory
- 2x4TB and 2x8TB discs, manually mirrored once a week
- 2x1TB NVMe SSD for boot & scratch disc

Problem #1: The EdgeRouter can't handle the loads I'm putting on it. It can do gigabit to a single host (Speedtest.net), but in the real world it falls apart. This screenshot is ~400mbps overall network load, yet my connection can handle symmetrical gigabit in both directions concurrently. Yes, I am running with hardware offloading enabled. That's why I bought this router, but it's still not enough.

Problem #2: Local internet is only gigabit, and if I'm building a NAS, I'm going to need a 10 gig link to handle the media files I work with.

Problem #3: My current backup solution is manually mirroring the data, and it sucks. And it leaves gaps. And it's space-inefficient.

Problem #4: My current storage is filling up. I'm sitting at about 80% capacity on my spinning discs.

Problem #5: I now have a Ryzen 1700, motherboard, SSD and power supply just sitting around. Oddball parts to sell, especially in this economy. They would be worth far more to me put to work rather than sold for 20 cents on the dollar.

____________________________________________

My proposed solution: build a new machine out of these parts. Run a hypervisor on a base OS to support FreeNAS and pfSense in their own VM's, with hardware passthrough so FreeNAS can get the access it needs to the discs. Get some add-in PCIe network cards so the box can have the necessary ethernet ports, including 10 gig to my main machine. I don't want to mix\match the 1Gb lines with the onboard NIC, so I will be getting a 4-port NIC.

Questions that I haven't been able to answer in my research for this setup:

1) For security reasons, I've seen that you want your fileserver as far away as possible from the internet. But this fileserver will only be hosting media, nothing sensitive. And since they are in separate VM's, they will be "separated". People say not to run pfSense on top of FreeNAS, but what about running them both side-by-side on top of Ubuntu or some other distro? Will network & firewall configuration play nicely?

2) Is it going to be a problem running pfSense and FreeNAS within their own VM's? The hardware is powerful enough to do it, but I'm worried about stability & disaster recovery.

3) How will disc partitioning work for running these layered systems? I have the 250GB SSD; will it be possible to partition part of it for the operating systems, and then leave the rest as an SSD cache for the fileserver? 3x 16GB partitions, with the rest dedicated to the FreeNAS cache.

4) What "root" operating system should I use to host all of this, and what hypervisors would be recommended to host these two instances?

5) I'm looking to run RAIDZ-2 for double redundancy. Using 4 new 8TB discs to start the array, migrating my data onto it, then expanding the array using my existing two 8TB discs.

6) If my whole network is 1Gb, is pfSense able to identify the 10 gig link to my workstation and allocate that accordingly, so that I can have higher speeds to the fileserver?

Or... do you guys think this is a stupid idea and I should just build a dedicated box for both? Thanks for the input, everyone.
  21. Moved to a different forum, idk how to delete threads.