Everything posted by skaughtz

  1. Using the onboard controller on an old ASRock Z75 Pro 3 (LGA 1155). I have never had to rebuild a RAID volume before, so before I potentially brain fart while installing the new drives and wipe the existing one or something stupid like that, I thought I would get the details straight first.
  2. I will try to keep this simple. I have two 250GB drives in a RAID 1 setup that I use for simple file storage, personal and work. One of the drives is on its way out (acting wonky, and CrystalDiskInfo indicates a Current Pending Sector Count of 200). I have two more 250GB drives on the way. My understanding is that if I swap the bad drive for a new drive, the array will rebuild from the existing good drive (if anyone could confirm this, I'd appreciate it), and no muss no fuss there. But I snagged two drives and would like to add the other as well, to have redundancy across three drives. If I did that, would I need to wipe and create a new volume across all three drives, or would the good drive rebuild the setup by mirroring itself onto the other two? If it's the former: I have read about, but am still somewhat confused by, RAID 1 in something like a 2+1 setup, where one drive sits as a spare, ready to step in should one of the other two fail. Any info on that would be helpful. I just finished backing up the files to a few external sources, so the data is safe regardless of what I do. Thanks in advance.
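     To make sure I have the 2+1 idea straight, here is a toy Python model of a mirror with a hot spare (purely illustrative; a real controller does all of this in firmware, and the drive names are made up):

        # Toy model of RAID 1 with a hot spare (the "2+1" idea).
        # Purely illustrative -- not how the onboard Intel RAID works internally.

        class Raid1WithSpare:
            def __init__(self, mirror_a, mirror_b, spare):
                self.mirrors = [mirror_a, mirror_b]  # both hold identical data
                self.spare = spare                   # idle until a mirror fails

            def fail_drive(self, name):
                # Drop the failed mirror, promote the spare, and rebuild it
                # by copying from the surviving mirror.
                survivors = [d for d in self.mirrors if d != name]
                if len(survivors) == len(self.mirrors):
                    return f"{name} is not an active mirror"
                if self.spare is None:
                    return f"{name} failed; array degraded, no spare left"
                promoted, self.spare = self.spare, None
                self.mirrors = survivors + [promoted]
                return (f"{name} failed; {promoted} promoted and rebuilding "
                        f"from {survivors[0]}; the array stays online")

        array = Raid1WithSpare("disk0", "disk1", spare="disk2")
        print(array.fail_drive("disk0"))  # the spare steps in automatically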
  3. I also have an EVGA 2060 KO Ultra Gaming card that is shorter than my 2070 (EVGA Black Gaming) but does have sensors for memory temperature. That card runs warmer simply because of its smaller size and cooler. I realize that it is not apples to apples, but if the 2060 hits a certain memory overclock with its memory at a safe temperature (say, +1000MHz at 74C), would it be safe to assume that the 2070 is not running any hotter than that at the same speeds? The 2070 runs cool but has no sensors for memory temperature. At +500MHz on the 2060, the memory reads 72C with 85W total draw, while at +1000MHz it rose to 74C and 91W total draw.
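     For a rough sanity check, here is a quick Python sketch that linearly extrapolates the two 2060 data points above. Memory temperature and power draw do not really scale linearly with the offset, so treat the output as ballpark only:

        # Linear extrapolation from the two measured 2060 points:
        # +500MHz -> 72C / 85W, +1000MHz -> 74C / 91W.

        def linear_estimate(x, x0, y0, x1, y1):
            """Interpolate/extrapolate y at x from two measured points."""
            slope = (y1 - y0) / (x1 - x0)
            return y0 + slope * (x - x0)

        for offset in (500, 1000, 1350):
            temp = linear_estimate(offset, 500, 72.0, 1000, 74.0)   # degrees C
            watts = linear_estimate(offset, 500, 85.0, 1000, 91.0)  # total draw
            print(f"+{offset}MHz -> ~{temp:.1f}C, ~{watts:.1f}W")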
  4. Thanks, man. Any suggestions on what memory temperatures are safe to run at constantly?
  5. I have a 2070 I use for mining Ethereum (but this question should apply to any card). I have the voltage locked in at 700mV, with the core clock at stock and the memory clock at +500MHz. The card runs at 47C and only pulls 92W. All good. Yesterday I tinkered with the memory clock and managed to get it up to +1350MHz stable, before I decided to look more into what I was doing. The card temperature remained at 47C, but the power draw increased to 99W. The hash rate increased by 5MH/s, but I was concerned that I was destroying the memory, either through heat or stress (there is no monitor for the memory temp), so I took it back down to +500MHz. +1350MHz seems like a crazy amount to just be luck of the silicon. Is it dangerous to run an overclock like that, or should I be grateful that the card can do it (and potentially more) and rock it out? Appreciate any advice.
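     For what it's worth, the trade-off can be put into numbers as hash rate per watt. A quick Python sketch; the 40 MH/s baseline is an assumption (roughly what a 2070 manages on Ethash), so swap in your own reported rate:

        # Efficiency check: did the extra 7W buy its keep?
        # BASELINE_MHS is an assumed figure, not a measured one.

        BASELINE_MHS = 40.0

        settings = {
            "+500MHz":  (BASELINE_MHS,       92),  # (hash rate MH/s, draw W)
            "+1350MHz": (BASELINE_MHS + 5.0, 99),
        }

        for name, (mhs, watts) in settings.items():
            print(f"{name}: {mhs:.1f} MH/s at {watts}W "
                  f"-> {mhs / watts:.3f} MH/s per watt")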
  6. That's exactly my concern. There is a notable hash rate increase with each step from +1000MHz on, but I have no way of telling what the temps are outside of the main GPU temperature reported through Precision X1, which is 47C (all of the cards are EVGA, and I use that program for everything but the voltage, which is locked at 700mV in the mining command line). I figured that by locking in the millivolts through T-Rex it wouldn't feed anything on the card more than it could handle, since that is quite a bit undervolted to begin with. My original goal was to drop the total power draw. Now that I have that down, I'd like to eke out what hash rate I can, but not at the expense of burning the cards up in a month. Edit: With all of that said, it did increase the reported power draw from 93W to 99W. That worries me, since the GPU temp didn't increase.
  7. I would really love some more insight into the whole memory degradation thing. I decided to just mess with my EVGA RTX 2070 Black Gaming to see what I could get out of it. As of writing this, I stopped at +1350MHz on the memory clock and found that -150MHz was the sweet spot on the core clock: -200MHz would drop the hash rate, while anything above -150MHz made no difference. It is still locked in at 719mV, running at 47C, and I increased the hash rate by more than 5MH/s. It still has not crashed, which worries me. Am I burning up something unknown? +1350MHz seems way too good to be just silicon luck. Or is that all it is? Forgive me; I am not in the habit of massively overclocking hardware. I like to protect my investments.
  8. Thanks for the info. The bit about Pascal and ECC memory makes sense. My 1080 Ti gives me the best hash rate with a -400MHz memory clock, while the 1080 and 1070s both showed improvement bumping it up to +500MHz. The Pascal cards also do not like the decrease in power, whereas the 20-series cards didn't drop a blip when locking the voltage at 700mV. I guess I just need to go through one by one and increase until it crashes. Any advice on the core clocks? My understanding is that the Ethereum algorithm is memory-clock reliant, so I have not even bothered touching the core clocks. Since the cards are all heavily undervolted and running in the upper 40s Celsius, I might as well squeeze every bit of hash rate out of them that I can.
  9. I have my rig set up mining Ethereum and I plopped a +500MHz memory clock offset on all of my mining cards: 2070, 2060 Super, 2060, 1080, and two 1070s. Everything but my 1080 Ti (which needs it reduced, because 1080 Tis are the spaz of the GPU world). All of the cards are fine, and +500MHz seemed like a reasonable offset. But I have seen people discuss pushing the memory clock as high as it will possibly go stable, sometimes putting a +1000MHz or higher memory clock offset on their cards. As I would like to reuse the cards for other purposes when they are done mining, I have been leery about going too high with the memory clock. As such, I keep them cool and undervolted. But does it matter? Is there any appreciable difference in memory degradation between +500MHz and +1000MHz if the card is stable and greatly undervolted (my 20-series cards are all locked in at 700mV and my 10-series at 900mV... 711W draw for all of them)? Am I just leaving hashing power on the table, or is it safer not to push things too far?
  10. Just playing Devil's Advocate here, but I would put it on them to put the manpower towards poring through the blockchain with my wallet address (once they found it... probably through IP connections or something) to confirm that I received X share on X date. That is probably a pretty solid roll of the dice on the part of the basement miner. They don't have enough resources to deal with people filing normal taxes. It seems silly to me that they would even attempt to collect on the income when there are so many barriers in the way of it being at all accurate. Anyway, fun conversation. I learned something.
  11. Okay. This is what I was after. Hypothetically, for the sake of argument: outside of storing it in an exchange wallet, they could not track it back to you though, correct? Cold wallets can be lost. Hot wallets don't collect identifying information. I'm sure with the latter they could figure it out if they really, really tried, but that isn't happening for your average citizen. I'm not trying to skirt paying my taxes... death and taxes, the only two certainties... I am just curious as to the ins and outs of this whole thing, being relatively new to it. There seem to be some gray areas here.
  12. But how would they know if you mined it on April 7, 2016 or April 7, 2021? Pretty big difference in what you would owe them. And while you might not be able to prove it, neither could they (not, I imagine, without expending WAY more resources than they have available for your average home miner), and you are innocent until proven guilty.
  13. I'm not worried about me. They have far bigger fish to fry than me and my laundry room rig. But I do like to know the ins and outs of things that I am involved in, and it just seems silly that the best tactic they can employ is fear of an audit... which anyone with some sense should be able to get out of at this point, for the reasons I mentioned above. Uncle Sam always gets his in the end, but I see why crypto pisses off the government so much.
  14. Hm. This seems to be something of a shitshow. Some Googling came up with "Pursuant to IRS Notice 2014-21, when a taxpayer successfully “mines” virtual currency, the fair market value of the virtual currency as of the date of receipt is includible in gross income. This means that successfully mining cryptocurrency creates a taxable event and the value of the mined coins must be included in the taxpayer's gross income at the time it is received." That isn't feasible in the least, unless you sit at the mining computer every second of the day and record the time/date/value of the coin for every share you mine. You can also apparently write off equipment and electricity costs, though the latter has the same issue. It appears it is more or less an honor system at this point until you sell it, but then there is the question of how you obtained it, which I suppose you have to answer... but how can they prove you are lying if you say you mined it when it was worth $20 in 2016?
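     Here is what that bookkeeping would actually look like, as a short Python sketch. The payout dates, amounts, and prices are made-up numbers, purely to illustrate the Notice 2014-21 rule:

        # Each mining payout is income at its fair market value (FMV) on the
        # date received. All figures below are invented for illustration.

        payouts = [
            # (date received, ETH amount, USD price per ETH at receipt)
            ("2021-03-01", 0.05, 1500.00),
            ("2021-03-15", 0.05, 1800.00),
            ("2021-04-02", 0.05, 2100.00),
        ]

        gross_income = 0.0
        for date, amount, price in payouts:
            fmv = amount * price  # fair market value at time of receipt
            gross_income += fmv
            print(f"{date}: {amount} ETH @ ${price:,.2f} = ${fmv:,.2f} income")

        print(f"Total mining income to report: ${gross_income:,.2f}")
        # Each lot's FMV at receipt also becomes its cost basis for
        # capital gains when it is eventually sold.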
  15. I've recently been mining ETH and have been storing it away, hoping for the price to rise again. I have also been reading about how the IRS plans to come down harder on crypto profits through capital gains taxes. What I don't understand is how they can know/verify how long you have held onto the coin if you store it in a non-exchange wallet. It would be the difference between short- and long-term capital gains rates. Would they only go by when it was transferred into/out of an exchange wallet and when it was sold/traded? What if you mined it and have held it for over a year in a Trezor on your desk, or in a hot wallet that doesn't know your identity? I imagine there is a catch somewhere, but I'm not able to piece this all together yet.
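     To put numbers on why the distinction matters, a toy Python example. The basis, proceeds, and rates are placeholder assumptions (actual rates depend on bracket and filing status), and the 365-day check is a simplification of "more than one year":

        from datetime import date

        SHORT_TERM_RATE = 0.24  # taxed as ordinary income (assumed bracket)
        LONG_TERM_RATE = 0.15   # typical long-term capital gains rate

        def capital_gains_tax(acquired, sold, basis, proceeds):
            """Classify the gain by holding period and apply the assumed rate."""
            gain = proceeds - basis
            long_term = (sold - acquired).days > 365
            rate = LONG_TERM_RATE if long_term else SHORT_TERM_RATE
            kind = "long-term" if long_term else "short-term"
            return kind, gain * rate

        # Same coin, same mined-at value, two different sale dates.
        for sold in (date(2021, 9, 1), date(2022, 4, 1)):
            kind, tax = capital_gains_tax(date(2021, 3, 1), sold,
                                          basis=75.00, proceeds=300.00)
            print(f"Sold {sold}: {kind}, tax owed ~${tax:.2f}")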
  16. Passengers was an underrated sci-fi flick. I am only concerned because, if/when the crypto market crashes, I will probably throw all of these cards back into the gaming computers they were pulled from, which have now been turned into simple HTPCs. If I can sacrifice a few bits of coin to extend their lives, I would like to.
  17. 25-year computer hobbyist, first-time miner. I dropped seven of my GPUs into an Ethereum mining rig (1080 Ti/1080/2070/2060S/2060/1070/1070). I have it in my laundry room (underground and 10 degrees Fahrenheit cooler than the rest of the house), exposed to open air, with a box fan blowing over the whole setup. Each card is basically overclocked to +500MHz on the memory, -200MHz on the core clock, and running at 75% power. With the box fan blowing over them, all but a tiny half-sized 2060 are running at around 50 degrees Celsius or less with the fans set at 45%. Without the box fan, most of the cards don't go above 60 degrees Celsius under their own cooling. I have been hesitant to turn the fan speeds down because, God forbid the box fan dies (I just bought it two days ago), at least I know that the cards will be able to keep themselves from frying. But now I am running the fans faster than I need to, potentially shortening their life without need. What is the best course of action here? I check on the rig daily, so any disruptions to normal function should (hopefully) be caught pretty quickly. Is my concern about fan life valid, or can those things run at 40-50% for infinity? Would dropping them to 25% extend their life at all? I appreciate any advice.
  18. New to mining and going to set up a rig using all of the GPUs I have in the computers throughout my home. The rig will consist of:
      1080 Ti
      1080
      2070
      2060S
      2060
      2060
      ASRock H61 Pro BTC
      Pentium G2020T (35W TDP)
      2 x 4GB DDR3 1600
      I have three power supplies that I can use to power the rig: a 750W Gold, a 650W Gold, and a 620W Bronze. My plan as of now is to use the 750W to power the board, the 1080 Ti, and the 1080, then use the other two to power two of the remaining cards each (the 650W Gold powering the 2070 and 2060S, and the 620W Bronze powering the two 2060s). The 750W has four PCIe cables, and since the 1080 Ti and 1080 each take 2 x 8-pin, I wanted each plug to have a dedicated cord. Considering that I will likely be undervolting the cards, or at the very least limiting their power to 80%, could I split the load across just two supplies, or should I stick with my original plan and have each PSU power two cards? I am getting inconsistent wattage totals online. Whattomine, for example, suggests the cards will pull 870W, and the rest of the system, according to Outervision's power supply calculator, will pull 121W, so 991W total. However, calculating everything at once with Outervision's calculator estimates the total draw at 1567W... so any advice would be appreciated. If I can get away with running only two power supplies, I would prefer to, simply because it is one less moving part running 24/7.
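     To sanity-check the three-PSU split, a rough Python tally of each supply's load. The per-card draws are ballpark figures for undervolted/power-limited mining, not measured numbers, so swap in your own readings:

        # Rough load budget per PSU. All per-card wattages are estimates.

        psus = {
            "750W Gold":   {"rating": 750, "loads": {"board/CPU/RAM": 121,
                                                     "1080 Ti": 220,
                                                     "1080": 160}},
            "650W Gold":   {"rating": 650, "loads": {"2070": 130, "2060S": 120}},
            "620W Bronze": {"rating": 620, "loads": {"2060 #1": 110, "2060 #2": 110}},
        }

        for name, psu in psus.items():
            total = sum(psu["loads"].values())
            pct = 100 * total / psu["rating"]
            print(f"{name}: {total}W of {psu['rating']}W ({pct:.0f}% load)")

        # Staying around 50-60% load keeps a PSU near its efficiency sweet
        # spot and leaves headroom for transient spikes.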
  19. I'm just going to go with the Xeon, especially since the kid is 9 (10? I'm a terrible uncle...) years old and wouldn't be able to troubleshoot if anything hiccupped with an OC. With that said, I am now curious to push the 3570K as high as it will go and see how it performs by comparison. People like to knock the earlier-generation Intel Core series now (and rightfully so sometimes), but I still maintain that they punch above their weight given their age. I still love Ivy Bridge.
  20. Ah. Well good to know. Thanks. I will have to play around with that. So if it is stable and cool, would you take a 4.5GHz 3570K over a 3.7GHz Xeon? Would 800MHz outweigh hyperthreading nowadays?
  21. I was in the same boat with two 1155 systems. I went 3770K with one, got it up to 4.5GHz, and at 1080p/60Hz, paired with a 1080 Ti, I am more than fine. A second option is to look for an 1155 Xeon. They used to be quite cheap by comparison ($40ish) for 4 cores/8 threads. You can't overclock them, but you get the extra threads (hyperthreading) for far less money. The current used-parts market has probably bumped them up in price, though. Still, it would be far cheaper than upgrading your entire system. Assuming that you are upgrading for games, it really depends on what you play and what framerate you are shooting for, as these older processors will certainly limit more powerful graphics cards. But if you are only shooting for 60FPS like me, then who cares? Check out the Xeon E3-1240 V2 (or E3-1245/70/75/80 V2). Only the models ending in 5 have integrated graphics, and each step up is basically just another 100MHz on all cores. Edit: I just noticed that your motherboard isn't a Z-series board, which means you can't overclock the CPU on it. So forget the 3770K, as it would be a waste of money. Your best option would be one of the Xeons I mentioned, or upgrading it all.
  22. I was using a Cryorig H7 on the 3570K when overclocking it. Basically a Hyper 212 EVO, if not a couple degrees cooler. I suppose I shouldn't care at this point about the longevity of the chip, but I've always liked to use as little voltage as possible on my CPUs and that is a tough habit to break for an extra 300MHz.
  23. If I recall correctly, I have that disabled as I read that it can sometimes raise temperatures a bit. I could be mistaken, though.
  24. Thanks. This is what I needed. I figured the extra threads would pay off more than the speed, but wanted to double-check. Now to figure out how generous I am feeling and decide between giving him my extra 1070 or the 1080...
  25. I can get the 3570K to 4.2GHz on a -0.02V offset, but have to pump it up to +1.0V to get to 4.5GHz, along with 25% LLC. Temps are way too high for my liking at that point. 4.3/4.4GHz doesn't run on the negative offset, so I never bothered with it for 100/200MHz.