
AraiBob

Member
  • Posts: 81
  • Joined
  • Last visited

Everything posted by AraiBob

  1. Hi, I have two laptops, which came with Windows installed. After some time, I installed Ubuntu Linux and it worked better. Then I replaced the drive with an SSD, and the performance was amazing. Both laptops have been upgraded to newer versions of Ubuntu, as I test the newer versions on my main PC first. Currently on 16.04 LTS on all three PCs. The boot drive on all of them is an SSD: a Samsung 830 in each of the two laptops, and a Samsung 840 in the 'normal' PC.
  2. Hi, I have noticed a change in how files are copied. I use a BitTorrent client to download to my work SSD, and when a download is ready I move the result to a WD Black drive. About 2 months ago I noticed a change. Normally I would move the file, rename it, move up one directory, and do other minor things. A few times when I did this, the file got messed up. That was when I noticed each file copy / move was actually done in two passes. I think the file is copied to the OS drive (SSD) first, and then to the WD Black drive. So now I watch the disk activity light on my PC, wait for the two separate bursts of activity to finish, and then proceed with my work. Annoying. Any idea what the problem or option is?
  3. Thanks for the tips. As it happens, I have had second thoughts. I was attempting to solve an issue with what I now consider 'patches'. I consider the methods used in NAS a quick way to cause hard drives to fail. The better solution, I am guessing, is larger SSDs. If I chose to do something different, large SSDs would be the way to go today, and I would forego any form of NAS or SAN. As usual with technology, one solution turns out to be a workaround for some fundamental limitation. While useful, such workarounds tend to be expensive and time-consuming to manage. As it happens, I can wait and be patient. Yes, I do have that kind of money available, but why should I spend it on what I now consider a 'patch system' or process? I hate patches. I would rather fix. I chose to purchase external USB drives (a much cheaper solution) which I have to manage, but I do have a nice bit of software to ease the backup issue, ViceVersa. It works great. Best regards, Bob Hyatt
  4. I see a lot of 'almost' flaming here. I thought such things had fallen out of favor and were considered rude, too. If I did like most PC builders, I would be building with all new parts: case, motherboard, drives, etc. And if I expected that I would NEVER upgrade any component, then limiting the power supply and other parts to 'just enough' would make sense. But as an old-timer mainframe programmer of 37 years, I don't do things like that. I buy quality (expensive) components I expect will last me years, far longer than a single build. This means the actual cost per build is comparable. And I mistyped my case; it is a Silverstone FT02. As usual, your mileage will vary, and for anyone to tell someone else they made a mistake is what I consider rude. Besides, how do you learn? By making mistakes. And a lot of mistakes can be skipped by asking someone a bit more expert than you, and then weighing what they said against your purposes.
  5. Hi, each person is 'allowed' to choose their components, and this includes the power supply. In my case, I buy parts I expect to use in more than one build. E.g., motherboards change faster than cases and power supplies. Why? Features that I 'need' that are not on the current motherboard. Or perhaps the motherboard has developed problems, even though it has not failed completely. Currently I am on my 3rd build in my Silverstone FT04, and my 2nd build on my PC Power and Cooling 1200 watt power supply. The power supply comes with a 7 year warranty, and I am about 4 years into that. From one combination of motherboard and CPU to the next, I cannot be certain what kind of power I will need. But I do know I will need power for 2 SSDs and 4 large hard drives. I don't need the latest video card, as I don't play video games. I do videos, whose requirements are not as extreme. Yes, it is fairly certain I don't need 1200 watts. But this particular PSU has some smarts: the fan won't turn on until I draw over 600 watts, which has happened seldom. So the hint is I might get by with 750 watts, for the current configuration. What about the next configuration? I do believe the power draw of PCs has hit a relatively calm point, and I do expect CPUs to get smaller in size and power draw. But I cannot be certain, and I like having reserve capability. And I live in a nation with notorious power issues. 240 volts? Naw... It comes from the pole at 260+ volts, which my PC's power supply handles easily. Same with my UPS. Both have to deal with high voltages that can dip. So each person has to decide what will be worth the expense.
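The sizing logic above ("the fan comes on over 600 watts, so 750 would probably do") can be written down as simple arithmetic. This is only a sketch of that back-of-envelope rule; the 25% headroom figure and the list of standard sizes are my assumptions, not an official guideline.

```python
def recommended_psu_watts(peak_draw_w: float, headroom: float = 0.25,
                          standard_sizes=(550, 650, 750, 850, 1000, 1200)) -> int:
    """Pick the smallest standard PSU size covering peak draw plus headroom.

    peak_draw_w: the highest wattage actually observed or estimated.
    headroom:    margin for capacitor aging and future upgrades (the 25%
                 default is an assumption, not a rule from any vendor).
    """
    target = peak_draw_w * (1 + headroom)
    for size in standard_sizes:
        if size >= target:
            return size
    return standard_sizes[-1]   # nothing bigger available; take the largest
```

With a 600 W observed peak this picks 750 W, which matches the hint in the post; whether to buy bigger for the *next* build is the judgment call the arithmetic cannot make.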
  6. My 'help' will be different from what I see above. First, do you intend to do one and only one build in a specific case? I do multiple builds in a case until I find a better case, then I switch to that one. This means the actual cost of the case is not a big issue. I bought an Antec 1100 and did 3 builds in it before finding what I considered a better case. That case? The Silverstone FT02. I have done 3 builds in that case and have not seen another case I would like to switch to. Well, I do like the Case Labs models. Yeahhhh. If you intend to do only one build in the case, then it becomes more difficult. You look at videos in which someone points out its good parts and its not-so-good parts. Then, based on your skills, you pick one. I did this in my first years of PC builds. All for myself; I don't do builds for other people. The features I am looking for are probably not the ones you are looking for. I don't care about all steel, all aluminum, or part plastic. I don't take my PC apart all that often, so as long as other factors are taken care of, that is not an issue. I do like a place to hold a lot of drives. I also purchase higher priced PSUs, as they will also work for multiple builds. I started buying PC Power and Cooling in 2000, and continue with them. They work, and have a 7 year warranty, which I have never needed. I have been told they don't actually build the PSUs anymore, but they still back them. I also do not game, so one video card is enough for me. I do have AIO water cooling for my CPU, and it has worked flawlessly for 2 years so far.
  7. Perhaps I can help. I have two laptops that originally came with Win7 on them. After some time I installed Ubuntu Linux and it all worked just fine, on both laptops. I did have to help the connection to wifi the first time. If you installed with defaults, the upper right panel will show an 'up arrow with a down arrow'. If you see that, the connection is OK. If you don't see that, you should see a changing pattern in that same spot. It looks kind of like a sound wave: it starts small at the bottom, gets larger towards the top, and fluctuates (probably an animation). This means it is trying to make a connection. If the laptop has a hardware switch to turn off the wireless connection, make sure it is on. Next, check the connections the network tool has found; look at 'Edit Connections'. You might have to provide a password. If there is more than one network, then you will have to tell it which one you are interested in. Later, I replaced the 2.5 inch hard drives in both laptops with SSDs. Yippee... very fast starts.
  8. You did not say what 'slow loading' means. Unfortunately, several factors can mean a slow start for a game or program:
- Where the program is stored. Disk speed is definitely one of them.
- A small amount of memory. If you need a lot of memory and don't have enough, then as the program is loaded into memory, parts of it are pushed out to the swap area on the OS drive.
- A slow CPU.
- Less likely: how your drive is attached. I have a motherboard with two different controllers for SATA drives. While I don't notice a difference in response time because of that, some motherboards have been reported to add a noticeable delay (don't ask me which ones, I can't remember even one).
- A slow drive for the OS. These days, most people know that if this is your concern, you should install an SSD and load ALL of the OS and your programs onto it.
  9. The only drives I could hear under normal circumstances were the Velociraptors. Spinning at 10,000 rpm means they can get your attention. However, most of the time I did not hear them either. My point? If you can hear the drives, you might want to investigate.
  10. If I understand correctly, you have one drive in your case? In that case, buy two more SATA hard drives and split the data between those two drives. The newer SATA drives are better and can help. If you are willing to 'explore', buy one of the hybrid drives that has an SSD cache built in; when processing large files, this helps. Don't bother with RAID. But do ensure you have backups of the data before you begin adding new drives. If you truly do have only the one drive, then also buy a small SSD (128GB should do it) and put the OS and your programs ONLY on that drive. You will be surprised how much difference this makes.
  11. Sorry, but I have a different opinion. If you have to ask the question, then you already know the answer. RMA the drive, if you can, immediately. Back up the data on that drive before it fails. Remove the drive and put in one that you KNOW works. Praying or hoping the drive will get better never works, no matter how many tools you use to check the drive.
  12. Hi, I am a retired programmer, analyst, and project designer / architect / manager: 37 years in the business, and in that time I used over 3 dozen languages on the job. First were BASIC, FORTRAN, and assembly. Next was 'CASH', whose form I did not see again until I saw Turbo Pascal. Then RPG and RPGII, and COBOL too, where I spent most of my time. I came up with some general rules. First, get the basic form of the system working. If you have a lot of options, and they are to be on the first page, get that working first, then fill in the details later. Some might call this the GUI part. Second, when the whole switching / GUI process is working, look for problem areas (not fast enough, or error prone) and deal with them. Third, fill in the details; this is the major part of the coding effort. Fourth and fifth: test the hell out of it. I worked on 32-bit memory systems and 36-bit memory systems, real-time OS vs batch OS, network DBs, hierarchical DBs, and 3 different relational systems, none of which matters today. I don't code; that is what retirement is for. As good a programmer and analyst as I was, it turned out my real talent was in project architecture / design and management. I got paid a lot more for that one. I enjoyed the changing world of computers that happened in that time. My first 'mainframe' had 32 kilobytes of memory. Ten years later I had a DataBoss watch that had 64 kilobytes of memory. He he... The first hard drive I saw was so large, I can't describe it today (think the size of a small refrigerator, holding a 'cake' about 9 inches tall and 12 inches in radius). It had 5 megabytes of storage. Today, on my PC I have four 4TB WD Black drives. Programming was fun, and seldom a chore. If I had an issue, I learned: DO NOT stay up all night trying to fix it. Instead, GO TO SLEEP, and you will wake up with ideas to try. When I learned that one, programming problems became much easier to deal with. Because my projects worked, I was handed 'leading edge' work.
I found ways to succeed in work that was supposed to fail. 'Mr Lucky' was one of the terms that came my way. I won't be able to tell you specific tricks in the languages of today, so don't ask. But architecture and project planning I could help with.
  13. The 'old' rule of thumb is to get the project working in as high-level a language as you can stand. Look for the bottlenecks, where a function is too slow, then code that function in assembly (not machine code). Any more than that is asking for a long project.
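The first half of that rule (find the bottleneck before you rewrite anything) is exactly what a profiler does. A minimal sketch using Python's standard `cProfile`; the deliberately naive `slow_sum` function is my own stand-in for a hot spot, not anything from the discussion above.

```python
import cProfile
import io
import pstats

def slow_sum(n: int) -> int:
    """Deliberately naive hot spot: sum via repeated addition in a loop."""
    total = 0
    for i in range(n):
        total += i
    return total

def profile_report(func, *args) -> str:
    """Run func under cProfile and return the top entries by cumulative time."""
    pr = cProfile.Profile()
    pr.enable()
    func(*args)
    pr.disable()
    buf = io.StringIO()
    pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats(5)
    return buf.getvalue()
```

Once the report names the hot function, rewrite just that function in something faster. In the mainframe days that meant assembly; today it more often means C, or simply a library call (here, the built-in `sum(range(n))`).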
  14. Excellent! Now if more people would use those parameters, life would get better... You have heard of Murphy's law? 'If something can go wrong, it will go wrong.' O'Shea's law: 'Murphy was an optimist.'
  15. Hmmm, I see a lot of interest in this topic, judging by the many responses. I consider that a good thing. My reason for starting this thread is to encourage a synergy of hardware and software people to fix what is apparent (to me, and only me): the very process of rebuilding a drive can cause another drive to fail, thus ending the rebuild process. There is more than one 'bad guy' in this flawed process. It is NOT my opinion; it is a fact, based on the stories I have heard. One way to fix it is to avoid hard drives and use only SSDs (or something yet to be invented). Problem? SSDs don't have the capacity yet, and are expensive. And the way life is, a new problem will surface. Technical progress tends to take care of problems. But if no one thinks there is a problem, no one will be working on the problem. I am old enough to remember when the fastest storage device on a computer was a 'drum'. Its rotation time was about 11 milliseconds, and that meant its access time was 11 ms, far faster than the hard drives of that day. Now I can have a faster drive in my PC, and a much larger one, too. RAID was created to deal with slow and unreliable hard drives. Adding more options to the RAID 'equation' did not fix the drives, but progress in the hard drive arena did. RAID has its uses, and also its failures. ZFS, as far as I can tell, demands more of the hard drive than it can actually provide. Recognition of that is in the resilver process. However, that process should (until a better solution comes along) be gentle in the rebuild. After fixing one drive, it will have to fix another drive, and so on. Beating a drive to death in this process is foolish. So a better way is needed. Best regards
  16. I agree with most of what you say, but I find it interesting that it is considered OK to kill an already weak set of drives when the resilvering process happens. It is a lengthy process, and I bet every admin fears and hates that moment. The process is complicated, and the way it works seems to ensure a failure. Instead, I think it should tread lightly, to allow those ancient drives, as you say, to survive the process. Because if you have to replace one drive, it is likely you should replace the other drives as well. Having the resilver process destroy the drives is foolishness. Yes, you should have multiple backups. But the labor involved in setting them up and retrieving them could be saved by making the original set last longer: processes and procedures that tell you early to replace the drives, one by one, and a resilver process that understands it should be gentle with the old drives during each rebuild. A holistic approach, rather than blaming the drives. Yes, drives do die, but why 'help' that death? Silly.
  17. Excuse me, but it seems you have missed the point. The whole point of a backup is that it should NEVER fail without some warning. The horror stories we hear are those in which the information has been lost, with little or no recovery possible. So I am looking for a total system that is reliable not just when 'holding' the data, but when I attempt to use or transfer that data. Pollyanna seems to rule the IT world. This is why 2 of 3 projects fail (late / over budget). I got plenty of criticism at work because of this 'obsession' with reliability. The people who appreciated the results most were upper management. The results of my planning and effort were visible within and outside the company, and our senior management heard about the good results from our customers. I was called 'Mr Lucky' so many times, simply because things worked and projects succeeded when those people expected failure. So I see little reason to change my views on how things should work, as opposed to how things do work. When I point out an issue, I get apologist responses. What I hope for is some effort to address the well-known issues. Instead, the finger is pointed at someone else. Software folks blame the hardware folks a lot, when with just a little thought they could have made their software match the true capabilities of the hardware, instead of their idealized view. A synergy is the result of diverse things working together. Yet in the IT world, instead of a synergy, it looks more like a blame game. It is ALWAYS someone else's fault. When I point out an issue, I expect some considered thought and some planning to deal with it, not a denial of the issue. I cannot fix this all by myself. It will take an effort by many people and companies to achieve this noble goal: an absolutely rock solid backup and retrieval system.
  18. Captain_WD, you have given the 'standard' responses to the issues I raised. I am NOT saying the problem is the drives. I AM saying the issue is that the systems using the drives are poorly designed. In particular, they seem to be expressly designed to cause drives to fail at exactly the moment they are needed to work. For example, in the videos and reports I have looked at for ZFS, it has been noted that when one of the drives has failed, a replacement drive is put in its place, and the 'resilvering' process begins, another drive will fail, and that is the end of the data in that specific array. This tells me the resilvering (or rebuild) process has some serious issues. At the exact moment when its work must be the most reliable, somehow it causes another drive to fail. And the drives are blamed, not the software using the drives. WD and other drive manufacturers should be pushing back on this particular issue. Hard drives have come a long way in size, density, and reliability. But bad software can kill good hardware, and make it look easy. This was one of the reasons I chose the topic title I did. An enterprise can 'almost' afford such failures, but a home or small business? No. As a retired computer person, who had many 'jobs' over the 37 years of my work life, I have seen this happen over and over again. The 'genius' guys doing the coding are very clever, too clever. If they understood the hardware side more, I believe they could code better solutions. I would prefer they stop showing off how clever they are, and show us how smart they are. I hate clever; I appreciate smart. Best regards
  19. LeadEater, I just watched the video from Linus of the failure of his all-SSD backup server. What failed? The RAID card... My personal opinion is that when they work, things are great, but when they fail, total disaster is likely (not merely possible). So I will stay away from hardware RAID, and I wonder about software RAID. And I consider the demand that all drives in the array be the same size another ridiculous demand. There are serious design flaws in the NAS arena: the time of a failure is exactly when these systems need to be the most helpful, but instead they add to the problems. I am happy to avoid such a mess.
  20. PH politics and 'economics' are based on the old Spanish Conquistador ideas: if it has to be imported, and you want to buy it, tax it - a lot. I don't really know why, but after buying a few things, shipping them the 'normal' way, and getting a tax that matched the original purchase price, I learned my lesson. This also applies to used cars. No matter the age of the car, if you bring it into the PH, the tax is the original purchase price. I have heard of other such craziness. The 'ordinary' Filipino is unlikely to order such things, so the tax falls on rich Filipinos and foreigners. And rich Filipinos will have some way to avoid the tax. Annoying.
  21. Sorry, I just had another thought... Perhaps I am overthinking this issue for myself. Perhaps all I actually need today is a server containing the files I want to back up and to share. No NAS or ZFS, just an OS designed to run and run without errors, ready to share. FreeNAS might be the way to go, or simply Ubuntu Linux Server...
  22. I have purchased a bunch of hard drives from Amazon in the US over the last 5 years and air-shipped them to me in the Philippines, and all of them worked without issues. Amazon plus the shipping charge is less than buying from the local places. PH has a 100 percent tax on normally imported electronics. Buying from Amazon and paying for shipping avoids that tax, and is cheaper...
  23. Yes, I agree with KISS. File management is a useful skill, but it takes time to develop. I forgot to mention I have one other issue with ZFS: it requires all the disks to be the same size. This prevents a gradual enlargement of the 'pool'. You might start with some 2TB drives, and after a while decide you need larger drives. One by one you upgrade each drive from 2TB to 3TB, and use the resilver process to fill in the new drive. Then later you replace them, one by one, with 4TB drives. This gradual process means less chance of a failure (ignoring the resilvering issue for the moment). I have plenty of drives of varying sizes left over from upgrading my main PC. It now contains 1 SSD (OS), one 1TB Velociraptor, and four 4TB data storage drives. I have an Antec 1100 case sitting unused at the moment. I also have over a dozen SATA drives of varying size, used as my USB backups. I am considering putting what will fit into the Antec 1100 case, with a motherboard, an older Intel i7 CPU, and plenty of memory on hand, with as many of the hard drives as will fit (and can connect to the motherboard), and a couple of SSDs for the OS and 'ZIL' drives. The SSDs would be a new purchase; however, I have two 300GB Velociraptors sitting around that could work. As for connecting the drives to the motherboard, I have considered a couple of HBA controller cards, as they seem to be reliable too. But I am held up by the two issues I have described. I don't know enough about this process to experiment; I would prefer to have it all worked out first. Simplest would be to put an Ubuntu Linux server OS on the drive, specifying the use of ZFS as the file system. But I don't know all the ramifications of this setup. Does it care about varied disk sizes? Will the HBA also care about the sizes of the disks? Perhaps another way can be found, via advice from someone like yourself.
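On the mixed-size question: my understanding is that ZFS will actually accept disks of different sizes in a vdev, but each member is effectively truncated to the smallest disk, which amounts to the same restriction in practice. The arithmetic for a single raidz vdev can be sketched as follows; the function name is mine, and it ignores metadata and allocation overhead.

```python
def raidz_usable_tb(disk_sizes_tb, parity: int = 1) -> float:
    """Approximate usable capacity of one raidz vdev with mixed disk sizes.

    Every member is effectively truncated to the smallest disk, then
    `parity` disks' worth of space goes to redundancy. This is a
    back-of-envelope number; real pools lose a bit more to metadata.
    """
    n = len(disk_sizes_tb)
    if n <= parity:
        raise ValueError("need more disks than parity devices")
    smallest = min(disk_sizes_tb)
    return smallest * (n - parity)
```

So a raidz1 of one 2TB and three 4TB drives yields only about 6TB usable; upgrading that last 2TB drive to 4TB jumps the pool to about 12TB, which is why the one-by-one upgrade path in the post only pays off once the final small drive is replaced.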
  24. LeadEater, thank you for your considered reply. I have encountered 'resistance' to the notion of no defrag before. Perhaps the solution is to better manage the files, so they cannot become so fragmented. Yes, you cannot 'fix' this issue for OS files, and for that I put the OS onto an SSD (which you NEVER defrag). Problem solved. Yes, defrag can be a useful tool, but I feel it is used too much, recklessly. In my case, I have a work drive (a Velociraptor) onto which downloads and video conversions go. When those tasks are done, I move the results to my 'storage' drives, which means very little fragmentation. I.e., I manage the files to reduce fragmentation issues. My OS is on an SSD, so no defrag there either. My issue with ZFS is that I have seen videos and reports on the web of failure at exactly the moment they are attempting to resilver a replacement drive. That moment of crisis is NOT being managed well. Perhaps the issue is really two issues: online file usage vs backups. I am really interested in a super reliable backup, which I might put online for the family to watch movies, listen to music, etc. In that case, there will be few writes and a lot of reads. My main PC will deal with conversions and downloads, etc. This backup system would hold relatively whole files (almost no fragmentation), and I would never do a defrag, to ensure long life. As a retired person, I do watch my pennies. And I buy parts that should last a long time. My power supply has a 7 year guarantee, for example. I buy cases that work the way I want to work, which means they cost a lot. However, because I do 3 or 4 builds within a case, the actual price per build is small. Each build is for a specific purpose. For example, on my main PC, the last build was to allow DVD conversions to be done quickly, as they have blackouts here all too often. Now I can do an entire DVD in less than 40 minutes: 20 minutes to load the DVD to a drive, 20 minutes to do the conversion into an MKV file.
Yes, I do have a UPS, but even these 20 minutes of high CPU usage would be too much for it. So, you might say I have a peculiar and specific view of systems and how I expect them to work. If they fail not at their normal work, but at the moment of crisis, then I stay away from them, or find a way to mitigate that issue. Something about the way the resilvering is done in the ZFS system can kill drives, and does, far too often. They need to work on that issue, but I have not heard of such an effort. I am sure they are working on it, but I have not seen an update on this critical issue. And I am used to being the one far out on the edge of requiring things to work. I did some things on the job that were supposed to be impossible, simply by paying attention to what works vs what is problematic. Far enough out of the box that one manager called me 'out of this planet'. Not criticizing, but noting my requirements and solutions were great, just far from what others thought was possible. So I am used to criticism, but I persist, because the end results are worth it. So, please continue with your comments and suggestions.
  25. Hmmm, I think one of the issues is attitude. Some people expect things will always work. I am of the other school: I know things fail, and I wonder what will fail along with them, and how long recovery will take. I appreciate hearing the success stories. Has there been a satisfaction survey of NAS systems? If so, point me to it. As for RMA, in the Philippines they give a very short warranty period, and if you actually try to use it, they simply refuse. Western Digital, as managed by those in Manila, has refused to replace two drives for me. So that is not a viable way for me to go. I need drives that work and work and work. I need as conservative a process and set of tools as I can devise or find.