
kwakeham

kwakeham's Achievements

  1. Just throwing out some more follow-up. After the DOA replacement drive from Seagate (itself the end of 3 months of constantly asking and asking and asking), I spent almost another 2 hours with their support, where they kept asking for things they already had ("can you send me a copy of the RMA order we sent you?"). That finally escalated to "find the receipt" for the drive, and while I dug for it they kept trying to end the live chat, saying we'd continue by email -- without ever providing an email contact. According to notes I dug up and had forgotten about, the drive ran for 2-3 months but hadn't been placed into a system until a few weeks after purchase. Because the original support request came more than 120 days after the date of purchase, after 1.5 hours they again offered a refurb drive. I didn't care; I wasn't fighting for a new drive, I just wanted a working drive. But they never explained why I was digging through 11 months of company receipts to find the physical one, only to get smacked in the face anyway.
     To Wseaton's point, I can't say I see eye to eye with your evaluation. A NAS is just a computer with a dedicated function. If a single drive is reliable on its own, an array's reliability mathematically can't increase just by adding more drives -- quite the contrary. It only improves as redundancy increases, and even then you have to factor in the higher probability that some drive in the array fails. I also think people rely too heavily on Backblaze data, quoting months-old numbers for enterprise drives, when there's little consensus on how those relate to drives targeted at SMEs and high-end home use.
     As for my own use: in 2012 my work desktop had 64 GB of RAM and 8-12 cores across two processors, and simulations needed on the order of 40-100 GB of working space and took hours to days to run -- and we could easily be on multiple projects with multiple runs we needed to keep. That work environment had a few dedicated systems that would run for days, but we were given the cream of the crop in CPU / RAM / storage in our desktops so we could run one or more simulations locally and still do other work. Running these over gigabit back then would have been tedious at best -- there was an attempt, driven by some regulatory stuff, that we had to find compromises for -- it would be like LTT editing 4K footage in 2012. Today I do different work, but some of it still involves giant files; 10 GbE might cope, but it would still be slower than local storage unless I could run 25-40 GbE connections. On top of that there are client videos of experimental tests, tutorials, and explanations of simulations, all of which need local editing. My use cases are varied and could be considered high end and unique, but most of my peers do similar work, and most of us don't have LTT's budget and free hardware. It's easy to look at someone's situation and think "I use my NAS for videos and music, so it's idiotic for that guy to want 40 TB in his system" without fully knowing what they do.
     My frustrations are mainly with Seagate's change in service quality and my apparent run of bad drive luck buying from Memory Express, who I've otherwise had very good luck and support with. There's also a bunch of new P4610s sitting on my desk for a migration to a high-reliability SSD server, but that feels like a separate issue at this point. I'm out of the BS nightmare Seagate caused, but I think it's valuable for people to know what your options with Seagate might look like right now: wait, wait, wait and hope you're not forgotten, only for them to send you defective trash, or for their poor packing and UPS handling to hurt you anyway. Alternatively, you can have a story like mine, where they say 24-48 hours every time and 1-2 weeks later you still have to contact them yourself.
  2. When they did the CRM or ERP or whatever replacement at the end of June, I heard they laid off a lot of people in North America. But 3 months and 7 support requests to replace a drive with 58 months of warranty remaining, and they declined the data recovery that's included? I've heard I'm not the only one dealing with such hardships with Seagate warranty replacement in Canada; it's like most people's drives from around that time are basically service nightmares. The other bad-drive theory is interesting... the only common drive now is... the original 12 TB Seagate IronWolf drive that has never had an issue... now that is curious.
  3. Further to that, I moved in Nov, so: two locations, one machine. The failures started before the computer was physically moved, on historically good (for 3 years) cables / controllers that had run other drives (Seagate Barracudas) fine. So, known-good machine and cables; a simple drive swap and drives started dying. Moved, and they kept dying. Completely different machine and different cables: same failures, with no physical movement of the computer. Those cables are now on 4 TB Samsung 870 EVOs under decent usage, with no issues whatsoever. And the WD drive I check nearly daily on the same cables has now passed the exact time-to-failure of the 18 TB and says it's healthy without an issue. That's why it feels like it makes no sense... there is only one common denominator, the MemEx store I bought them from. But the replacement went into an external SATA-to-USB adapter, to test whether it worked before I opened the system, and it screeched, minutes after it showed up from UPS.
  4. It started happening in my Ryzen 2700x system. Full system replacement in Feb; only the SSDs and hard drives came over to the Ryzen 7950x system. Not even the PSU or case is the same, and the Seagate failures are still happening.
  5. I wanted to follow up on this. The 18 TB drive finally arrived after 3 or 4 months of waiting and about 10 hours of slow Seagate live-agent responses over 6-7 chats. I haven't yet sent back the newly failed 12 TB replacement, which died after 2-3 months. The new 18 TB drive won't initialize on anything and makes a horrid screeching noise. Thanks, Seagate or UPS, one or the other. I've taken to calling them SeaGARBAGE IronTrash drives at this point. The WD Red Pro drive I'm watching like a hawk, and if it seems good in a few months I'll build a two-tier NAS of SSDs and spinning disks in RAID 5 or 6 or something. It just curiously does random seeks every 15-20 minutes even when not in use, so I'm not willing to commit yet; online, some say that behavior is bad, some say it's not. I also now have 5 NVMe SSDs and 2 big SATA SSDs and live under 4-3-2-1 backups, because I feel like I can't trust anything.
  6. As much as it sucks (for both of us), it's kinda reassuring, Ligistx, that yeah, maybe this is luck of the draw. I think I'm on a trip to MemEx today for something to at least hold me over until I come up with a better solution, and I think this is where I get off: start upgrading stuff to 10 GbE and build a NAS server with lots of parity drives. They'll probably be WD.
  7. I'm a Backblaze subscriber and I've nerded out over their stats before. Those are always Exos drives, though, and some models run high (ST8000NM0055) and some low (ST16000NM001G), but none approach the 66% failure rate of my tiny sample, which admittedly means little. The 18 TB model I had was "updated" to a new model number; I figured they "found something". The 12 TB Exos drives are at 1.39% and 3.25% in the August update, yet of the 3 IronWolf drives I have, 1 is good and 2 have been the devil. One thing in the back of my head: I had one failure on my old 2700x system and two, on the same cables, on my current 7950x system. The mainboard in this one (TUF X670-E) has been stable, but it's never felt as "solid" - no, it doesn't crash or anything, it's always just felt like a bit of a baby: crazy long POSTs, very temperamental with a USB-C connection to a Quest 2, a little weird sometimes with embedded dev boards (though almost everything can be). Is there any reason to suspect the MB or PSU (AX1500i, if I recall) could be at fault?
  8. There does seem to be more negativity toward Seagate than WD in these forums overall. I still find it hard to believe this is all going wrong over and over for me across two systems (with no other shared hardware - the 12 TB drives were the only things moved to the new system in Dec). The replacement 12 TB had only been in use just over 2 months when the 18 TB failed.
  9. A while back I started doing more video work - a lot of it documenting testing for clients. I replaced my old Barracudas with 12 TB Seagate IronWolf NAS drives for local storage, first one, then the other, just running them as single drives in a desktop. One failed. I registered it with Seagate; warranty turnaround at the end of last year was quick. While waiting, I bought an 18 TB drive for more breathing room. After 3 months, the 18 TB drive just never started one day. Back to Seagate. They upgraded their CRM/ERP system or something; their system shows they didn't ship a replacement, yet says the case is completely processed. Live agents take 5-20 minutes between any comments, and I've gotten nothing back. They've had the drive for 9 weeks. On Friday they guaranteed some info within 48 hours. Nothing. This morning, the replacement for the first 12 TB started acting weird. After an hour with a chat agent about my missing 18 TB for at least the 5th time (lost track), I tried to RMA the replacement. Couldn't. Had to spend another 1.5 hours proving to a live agent it was dying.
     I practice 3-2-1 backup, so this is just a hassle and a fight, but I'm terrified to buy any more Seagate drives. Is WD Red Pro any better? Did I just have bad luck? They were all bought at MemEx, so could they have gotten a bad lot, or dropped a box of them? I'm currently doing an insane amount of contract work and losing days to this is unacceptable. My thoughts so far:
     - a full NAS built from an old Ryzen 2700x system I have, with WD Red Pro drives, or with SSDs like 4 TB 870 EVOs (I've heard from some enterprise people that WD SA510s are basically scrap, with ~30% failure)
     - a used second-gen EPYC system with 4 TB WD Black SN850X drives on 16x-to-4x4 PCIe NVMe boards, if they ever go on sale for $350/drive again - I have a bunch of the 2 TB ones in use and feel some trust in them
     - a WD Red Pro just to hold me over?
     Any other ideas are appreciated.
  10. I believe I need to clarify. My mention of Resolve was that its project databases list every cut, edit, etc., which would make it possible to find which files are used and which are not - sorry, I wasn't implying using Resolve itself to do the recompression. My whole goal is automation. The alternative solution I have is to batch-scan for creation or modification dates and use HandBrake's command-line interface to do the work. I'm sure I'm not the first to think of this as a space-saving concept, so I figure I'm just searching for the wrong names for the tools that do it. Maybe that's the solution for now: script a scan based on age and run the files through HandBrake.
  11. I make some very niche videos - hackery, reverse-engineering stuff, etc., related to the cycling industry. Not big, not consistent, more of an offshoot of the startup I'm building grassroots. However, I shoot a bunch of footage, most of it never getting used, and what does get used tends not to get reused, if ever. As projects age, I've just been wiping out source footage because there's a 99% chance I won't reuse it. But I've been thinking of a better way: as footage ages unused, recompress it. First from 4K 200 Mbps H.264 (I shoot mainly on an X-T30 or GoPro Hero 8) down to, say, 4K 100 Mbps H.264 at 6 months, then 1080p 50 Mbps at 1 year unused, and finally 20 Mbps at 18 months unused. I'm hoping I can somehow hook into my Resolve database and, once a month overnight, schedule a check for which footage has aged out, recompress it, delete the original, and rename the newly created file; in theory Resolve will then see the new file and not get angry. I'm sure this will irk purists, and recompressing over and over down that chain would be a no-no for big pros. Anything that gets used isn't recompressed, so if I keep going back to footage every 3-4 months, it's left alone - that's where the project database comes in. Has anyone implemented such a scheme? How? We can't all have a petabyte project or just keep everything, especially when it's only for the sake of trying to educate folks on a tiny YouTube channel and footage reuse is minimal. Manually deleting or picking and choosing seems too time-consuming, and on the off chance old footage is needed, 1080p 20 Mbps in long-GOP H.264 will likely suffice, with 100 or 50 Mbps fine if it's newer but older than, say, 6 months.
  12. I have a project that might require temporary access to a Mac. I have a Ryzen 2700x on a Gigabyte B450M DS3H mainboard with an Asus Strix 1070 Ti. To virtualize a hackintosh I want to add a graphics card, probably a Radeon RX 580 or similar, and maybe a dedicated USB controller for passthrough. The board has 1x and 16x physical slots that are 1x and 4x electrical (and PCIe 2.0, I believe, since they run from the chipset). The need is to edit and compile some Apple Watch code for testing in Xcode, and I'd like to avoid buying a MacBook, as I'm not into that Kool-Aid. I'm less concerned with the software setup, but I believe my NVMe uses 4 PCIe 3.0 lanes from the CPU and my 1070 Ti uses the CPU's 16 lanes, so the rest are chipset. I don't think that affects the setup (except performance - it just needs to not be laggy on the desktop), but with the mATX board and a 2.5/3-slot GPU I can't access the 1x or the 16x slot; I already have a hard enough time with the GPU and 90-degree SATA connectors. Anyone have experience using risers to run a USB controller and GPU, and do any fit under oversized GPUs? Is a new mainboard the only real option? I'd rather not put the GPU on a riser, as that would be a mess in there. The case is some fairly cheap generic Corsair thing (270R, I believe) with a window and no vertical GPU mounting. Thoughts?
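Since no off-the-shelf tool turned up, the age-scan-plus-HandBrake idea from posts 10 and 11 can be sketched in a few lines. This is a hypothetical, untested sketch: the `footage` folder name, the tier thresholds, and the HandBrakeCLI flags used (`-e x264`, `-b` for kbps bitrate, `--maxHeight`) are assumptions to verify against your own HandBrake version, and the Resolve-database "still in use?" check is left out.

```python
import subprocess
import time
from pathlib import Path

# Age tiers from the post, oldest first:
#   18 months unused -> 1080p 20 Mbps, 12 months -> 1080p 50 Mbps,
#   6 months -> 4K 100 Mbps. Thresholds in days, bitrates in kbps.
TIERS = [
    (18 * 30, 20_000, 1080),
    (12 * 30, 50_000, 1080),
    (6 * 30, 100_000, 2160),
]

def tier_for(path):
    """Return (bitrate_kbps, max_height) for the oldest tier this file has
    aged into, or None if it is still too fresh to touch."""
    age_days = (time.time() - path.stat().st_mtime) / 86400
    for min_age, bitrate, height in TIERS:
        if age_days >= min_age:
            return bitrate, height
    return None

def recompress(src):
    """Re-encode src via HandBrakeCLI, then swap the result into place so
    the path Resolve knows about still exists (flag names assumed)."""
    tier = tier_for(src)
    if tier is None:
        return
    bitrate, height = tier
    tmp = src.with_suffix(".tmp.mp4")
    subprocess.run(
        ["HandBrakeCLI", "-i", str(src), "-o", str(tmp),
         "-e", "x264", "-b", str(bitrate), "--maxHeight", str(height)],
        check=True,
    )
    tmp.replace(src)  # overwrite the original; Resolve sees the same filename

ROOT = Path("footage")  # hypothetical footage root
if ROOT.is_dir():
    for clip in ROOT.rglob("*.mp4"):
        recompress(clip)
```

Run monthly from cron or Task Scheduler to match the overnight idea. One caveat: the in-place swap resets the file's mtime, so each tier is measured from the last re-encode rather than the original shoot date, which stretches the schedule a bit.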
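On the reliability point back in post 1: with independent failures, striping more drives without parity strictly lowers the odds that everything survives, while parity raises them. A minimal sketch of that arithmetic, using an illustrative 2% annual per-drive failure rate (a made-up number, not from Backblaze or any vendor):

```python
from math import comb

def array_survival(n_drives, p_fail, tolerated=0):
    """Probability the array survives the period, assuming independent
    drive failures.

    n_drives:  total drives in the array
    p_fail:    per-drive probability of failing in the period
    tolerated: simultaneous failures the layout absorbs
               (0 = single disk / RAID 0, 1 = RAID 5, 2 = RAID 6)
    """
    return sum(
        comb(n_drives, k) * p_fail**k * (1 - p_fail) ** (n_drives - k)
        for k in range(tolerated + 1)
    )

P = 0.02  # illustrative 2% annual failure rate per drive
print(f"1 drive:          {array_survival(1, P):.4f}")
print(f"4 drives, RAID 0: {array_survival(4, P):.4f}")  # worse than 1 drive
print(f"4 drives, RAID 5: {array_survival(4, P, tolerated=1):.4f}")
print(f"4 drives, RAID 6: {array_survival(4, P, tolerated=2):.4f}")
```

This deliberately ignores rebuild windows, URE-during-rebuild risk, and correlated failures (same batch, same shipping box) - which is exactly why a bad lot from one retailer would break the independence assumption.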