
alpenwasser

Retired Staff

Everything posted by alpenwasser

  1. Looney's original criterion is that it's all running off of a single motherboard; not sure how he wants to handle this. @looney?
  2. There are a few possibilities. One, as mentioned, is that the number is simply based on the amount of HDD slots and the largest capacity on the market when the server comes out (which seems to be the case here since apparently you can go above 8 TB). However, there are also SATA/SAS controllers which can't address HDDs larger than 2 TB correctly, in which case you couldn't get around that limit without using a different controller. For example, I have a server with a motherboard from the LGA1366 era, and the onboard SAS controller has this limit, so I've needed to buy additional SAS controller cards in order to be able to use HDDs 3 TB and larger. Lastly, the manufacturer could have decided to be a dick and locked the thing down to prevent people from using larger HDDs. Not sure if that is actually ever done, but I wouldn't be surprised if so.
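     Just to put that controller limit into numbers: the ~2 TB ceiling typically comes from addressing the disk with 32-bit sector numbers at 512 bytes per logical sector. A quick back-of-the-envelope check in Python (plain LBA arithmetic, not tied to any particular controller):

         # 32-bit LBA with 512-byte sectors: the classic ~2 TB addressing ceiling.
         sector_size = 512          # bytes per logical sector
         max_sectors = 2 ** 32      # a 32-bit sector address can't count any higher

         limit_bytes = sector_size * max_sectors
         print(limit_bytes)                      # 2199023255552 bytes
         print(limit_bytes / 10 ** 12, "TB")     # ~2.2 TB (decimal)
         print(limit_bytes / 2 ** 40, "TiB")     # exactly 2.0 TiB (binary)

     Controllers and drivers which track sector numbers in a wider field don't have this problem, which is why swapping in a newer SAS controller card gets around it.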
  3. Damn, those Silverstone boxes really do look nice. Imma need to get me one of those one day.
  4. Excellent, welcome to the list proper! Splendid, welcome to the list! I've taken the liberty of adding two pictures from your imgur link directly to the post for the people too lazy to click the link. And with you we are officially at 70 systems of 10 TB and larger! @looney, jubilate!
  5. Hey, five warning points for heresy! Nicely written post though. Maybe something to add: What language(s) make(s) sense to learn can depend a lot on what you want to do. If you want to go into web dev, it most likely still makes a ton of sense to learn PHP, because no matter its deficiencies, it's still extremely prevalent in that area and will very likely stay for quite a while yet. As for Perl, well, some people love it, some people hate it, such is life. There are still a ton of websites running on Perl however, and quite a lot of software based on it, so knowing Perl might still be a very good asset. Besides that, if you're targeting sysadmin work in the UNIX world, it's very handy as well, with tons of stuff around for it. But regardless of whether or not you (want to) learn Perl, knowing Python is definitely a good idea these days, completely agree on that. To be honest, the lines between languages are starting to blur a bit for me (C, C++, Python, Perl, Ruby, PHP, w/e), especially scripting languages. Sure, there are differences between them, but once you've learned a few languages, picking up another, as long as it's at least somewhat similar, doesn't really take too much effort. I mean, as an example, sure, Perl and Python code looks vastly different, but in the end, you have your packages, your functions, your datatypes, your loops and control structures, and so on. I don't really notice a big difference anymore. By that I mean that my thinking process is pretty much the same whether I work in Python, Perl, PHP or C, unlike when I'm trying to wrap my head around Haskell, for example, which actually does require different thinking due to the rather different underlying paradigm. Switching between Perl, Python etc. feels like switching between German and English to me: it's mostly the same thing you say, just with different vocabulary and syntax, unlike, say, Chinese, which actually does have a rather different way of thinking behind it, so you need to adjust more, although you can still express any arbitrary thing if you're good enough. Yes, there are quirks and idiosyncrasies in each language, but those I get kinda used to after working with the language for a bit (or they make me hate the language so much that I abandon it or only work with it very, very reluctantly, such as JavaScript). But I haven't really been able to find any objective reasons why Python or Perl would be superior, to be honest. Pretty much every debate about the subject I've ever come across (and that's quite a few, because I did quite a bit of digging before starting to learn Perl) eventually just comes around to "I don't like Python's whitespace idiosyncrasies!" or "Perl is just weird and I don't like it.", something like that. Personally, as somebody who really likes C, I feel more at home in Perl, and a bit less so in Python, but I can't really say that I've noticed one or the other to be inherently, objectively superior. I've also been looking at Go, very intrigued by that considering who developed it (one of the guys was one of the inventors of C, after all). And yes, +1 for functional programming. As for Java: it's popular, it's prevalent, and I don't really like it, but in many cases you need to know it to be employable, such is life.
  6. Looking nice! Sure thing: Discussion thread: http://linustechtips.com/main/topic/30316-cable-lacing-tutorial-upd-2013-oct-05/
  7. Interesting, especially since it's still based on FreeBSD last I checked and that does not seem to have this issue (at least not that I've heard). Ah well, these things happen. Yeah, that sounds about right with what I'd expect, since that's pretty much what ZFS was designed for (among other things). And since its primary target is the enterprise market, where you'll pretty much always have ECC memory, this makes sense to me. Sounds like a plan.
  8. I have read on multiple occasions that Powerline's performance and reliability can be significantly affected by the quality of the electrical infrastructure on which you run it. Two buddies of mine used to run it in their apartment and seemed to be very happy with it, but their apartment had decent wiring (built in the 60s, and generally speaking we have pretty good wiring around here unless it's a really old building whose wiring is ancient). I don't have any personal experience with it myself, but this is what I've read/heard.
  9. Excellent, welcome. You wouldn't have any pics of the machines themselves by any chance? Well, feel free to post your system(s)... Updated and fixed the OS.
  10. The OS was Arch Linux, but you could try other distributions with ZFS on Linux running on them, it's not exclusive to Arch. Yes, I have come across the 8 GB limit. From what I understand, the stance of the FreeNAS community on <8 GB RAM is that they've had too many people come to them with issues which turned out to be caused by not having enough RAM, so at some point they got too annoyed with it and won't bother helping you if you don't bring at least 8 GB to the table. I don't know enough about the internals of FreeNAS and its interplay with ZFS to accurately judge whether or not this makes sense, to be honest. For example, FreeBSD recommends a minimum of only 1 GB, plus an additional 1 GB for each TB of storage, although they do say you can go lower: (source) I don't know what FreeNAS does differently that its ZFS implementation needs so much RAM, but they seem to be pretty insistent on it for some reason. Note that deduplication (which no normal user will ever need unless they're doing something very special) does actually require a ton of RAM, but that's a different story. This link mentions a system which is running ZFS with 768 MiB of RAM. Additionally, these links may be of interest: http://hardforum.com/showthread.php?p=1036865233 http://open-zfs.org/wiki/Performance_tuning http://louwrentius.com/things-you-should-consider-when-building-a-zfs-nas.html http://www.zfsbuild.com/2010/04/15/explanation-of-arc-and-l2arc/ http://www.zfsbuild.com/2012/04/18/let-zfs-use-all-of-your-ram/ This will depend on what the cause of your error is. As far as I have been able to determine, cosmic rays may be the most frequently cited cause of RAM errors, but most likely they are not actually the most common culprit in practice (if this really were a common issue, non-ECC systems would constantly crash, I would think). But yes, this does happen, and it is random (both with regard to time and to location in your memory). It's also not permanent unless the cosmic rays are strong enough to permanently fry your RAM (no idea how likely that is, tbh). Aside from that, another cause of memory errors could be radiation from other sources. How frequently this happens would depend on the source, of course. If your computer is located in an area where there's tons of EM radiation which is strong enough to cause memory errors, for example (and no, I have no idea how strong it would need to be for this to happen), it might happen rather frequently, but the locations affected in your RAM chips will still be random, and the errors won't be persistent (unless, again, permanent damage to your RAM occurs). Then there's of course just the good ol' defective RAM chip. In that case, the error will always occur at the same location, and it will most likely be permanent (so it would make sense to replace that RAM stick as soon as you notice this). Depending on the cause of the defect (say, old age), a larger area than just one bit might be affected.
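     To put the FreeBSD rule of thumb quoted above into numbers, here's a trivial sketch (my own illustration of that guideline, not anything taken from the FreeBSD or FreeNAS docs themselves):

         # FreeBSD's rough guideline as quoted above: ~1 GB of RAM as a base,
         # plus ~1 GB for each TB of pool storage (deduplication not considered,
         # since that genuinely does need far more memory).
         def recommended_ram_gb(pool_tb, base_gb=1, per_tb_gb=1):
             return base_gb + per_tb_gb * pool_tb

         for pool_tb in (4, 10, 17, 40):
             print(f"{pool_tb:>3} TB pool -> roughly {recommended_ram_gb(pool_tb)} GB RAM")

     Even by that rule, a mid-sized home pool lands well under the 8 GB FreeNAS insists on, which is why the discrepancy between the two recommendations keeps coming up.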
  11. I have never seen any information which would indicate that ECC RAM's failure rates differ significantly from those of normal RAM. Which is to be expected, because they're really quite similar on a hardware level. I ran a 17 TB (raw storage) ZFS pool on a Linux box with 4 GB of RAM for about a year without issues. FreeNAS might be different in this regard. As for UFS, I recommend looking at what FreeNAS recommends or just trying it out. Trust is relative. Even ECC RAM is not completely impervious to errors. It can detect and fix 1-bit errors and detect (but not fix) 2-bit errors; anything larger than that it can neither reliably detect nor fix. Of course, the chances of a three-bit error are so low that I've never heard of this being an actual issue in practice. Personally, no, I do not use ZFS on a non-ECC system these days. For further reading, I recently found this paper which analyses various data integrity aspects of ZFS, but I haven't been able to go through it yet. Will get to it at some point though.
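     Since the detect-and-correct behaviour above can sound a bit abstract, here's a toy SECDED (single-error-correct, double-error-detect) sketch using an extended Hamming(8,4) code. Purely illustrative: real ECC modules protect 64-bit words with 8 check bits and do this in hardware, but the behaviour is the same, i.e. one flipped bit gets fixed, two flipped bits are flagged as uncorrectable:

         # Toy SECDED demo: extended Hamming(8,4), one data nibble per codeword.
         # Real ECC DIMMs work on 64-bit words, but the principle is identical.

         def encode(d1, d2, d3, d4):
             """Encode 4 data bits into an 8-bit codeword (positions 1..8)."""
             p1 = d1 ^ d2 ^ d4                    # parity over positions 1,3,5,7
             p2 = d1 ^ d3 ^ d4                    # parity over positions 2,3,6,7
             p3 = d2 ^ d3 ^ d4                    # parity over positions 4,5,6,7
             word = [p1, p2, d1, p3, d2, d3, d4]
             p0 = 0
             for b in word:                       # overall parity bit (position 8)
                 p0 ^= b
             return word + [p0]

         def decode(word):
             """Return (status, data bits). Corrects 1-bit errors, flags 2-bit errors."""
             w = list(word)
             s1 = w[0] ^ w[2] ^ w[4] ^ w[6]       # recheck the three Hamming parities
             s2 = w[1] ^ w[2] ^ w[5] ^ w[6]
             s3 = w[3] ^ w[4] ^ w[5] ^ w[6]
             syndrome = s1 + 2 * s2 + 4 * s3      # position of a single flipped bit, or 0
             overall = 0
             for b in w:
                 overall ^= b                     # 0 if an even number of bits flipped
             if syndrome == 0 and overall == 0:
                 status = "ok"
             elif overall == 1:                   # odd flip count: assume one bit, fix it
                 w[syndrome - 1 if syndrome else 7] ^= 1
                 status = "corrected"
             else:                                # even flip count but bad syndrome: 2 bits
                 status = "uncorrectable"
             return status, (w[2], w[4], w[5], w[6])

         cw = encode(1, 0, 1, 1)
         cw[5] ^= 1                               # flip one bit -> gets corrected
         print(decode(cw))                        # ('corrected', (1, 0, 1, 1))
         cw = encode(1, 0, 1, 1)
         cw[1] ^= 1; cw[6] ^= 1                   # flip two bits -> detected, not fixed
         print(decode(cw))                        # ('uncorrectable', ...)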
  12. Reds in both cases, I'd say. While I haven't found a great deal of info on how software RAID behaves with regard to dropping unresponsive drives, MG2R's experiences with mdadm lead me to err on the side of caution on this one.
  13. Hey, welcome. It depends on how you implement your NAS. Greens lack, among other things, something called TLER, which can lead to trouble with them being dropped from RAID arrays. Very roughly speaking, when a Green encounters an issue reading data, it will keep trying to read that data for quite a while, until either a time limit is reached or it succeeds. For many RAID controllers, that time limit is so long that the controller will basically lose its patience and just mark the drive as non-functional (since it hasn't heard from the drive in too long a time span). It will then try to grab the data from another source (be that parity data in a RAID5/6/7 array or another drive in a mirror configuration) if one is available. Red drives (or RAID-optimized drives in general) will not keep trying for as long to read data they're having difficulty with, but instead will just tell the controller "Boss, I'm having some issues here, get this stuff from one of the other drives. In general I'm fine though." The controller will then get the data from another source without dropping the troubled HDD from its array. Both approaches make sense in their correct use cases: If you're not running a RAID setup, you probably only have one copy of your data online (although you do hopefully have a backup), so the drive continuing to try to read the data is the sensible approach, because the data can't be gotten from anywhere else (except a backup, which often isn't very convenient). In a RAID array with redundancy, this isn't what we want though, since we have secondary sources for the data, so it doesn't make sense to keep waiting and waiting for that one drive to give it to us if it's much faster to just get it from somewhere else (this doesn't apply to RAID0 arrays of course, where you have no redundancy at all). Depending on your RAID controller, the time limit until it tries to fetch the data from another source can be adjusted, but I haven't personally worked with one of those yet, so I can't really say much on how effective that strategy is when trying to work around dropped drives. I think @MG2R recently had this happen to him, so it can even be an issue with software RAID, not just hardware implementations. So, if you're just running JBOD (no RAID config), the Greens might be perfectly fine for the job; if you're intending to run RAID configs, the Reds are generally considered more suitable. I'm running both Greens and Reds in that config and so far all is good. I'm sure there are more differences between the Greens and the Reds, but that's the main one I usually see thrown around; maybe @Captain_WD has some more input or can correct any erroneous assumptions I've made here.
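     To make that timeout tug-of-war a bit more tangible, here's a toy sketch of the controller's point of view. The numbers are made up purely for illustration (actual TLER settings and controller timeouts vary by drive and controller):

         # Toy model of the drive-drop behaviour described above; all timings invented.
         CONTROLLER_TIMEOUT_S = 8      # hypothetical controller patience per request
         DESKTOP_RETRY_S      = 120    # a desktop drive may retry a bad sector for minutes
         TLER_RETRY_S         = 7      # a NAS/RAID drive gives up early and reports the error

         def controller_verdict(drive_retry_s, controller_timeout_s=CONTROLLER_TIMEOUT_S):
             """What the controller concludes when a drive hits an unreadable sector."""
             if drive_retry_s <= controller_timeout_s:
                 # Drive reports the error in time; data is rebuilt from parity/mirror
                 # and the drive stays in the array.
                 return "error reported in time, data read from redundancy, drive kept"
             # Drive is still silently retrying when the controller's timer runs out.
             return "no answer within timeout, drive marked failed and dropped"

         print("Desktop (Green-style) drive:", controller_verdict(DESKTOP_RETRY_S))
         print("TLER (Red-style) drive:     ", controller_verdict(TLER_RETRY_S))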
  14. Excellent to know, I shall delve into it when I have some time and motivation then; the difficulties you mention might be an excuse to first look at functional programming in Perl, since it's apparently quite powerful in that area while probably not being as finicky as Haskell. Well, for me personally, since there are quite a few similarities between Perl and C (for example, references act rather like pointers, which I find pretty awesome, although it might scare many people), and I like C a lot, I just might have a natural inbuilt weakness for Perl. Python is definitely on my to-do list, so I suppose I'll see which one of the two I prefer when I start getting into it more seriously. Whether I like it or not, it's a language which one ought to know anyway these days. As for Ruby, I'm not really sure how fond I'd be of that one, to be honest. I know a lot of people seem to be really into it, but from what I've been shown so far... not sure yet. Anyway, as said, my attachment to Perl probably primarily stems from my attachment to C and from Perl's philosophy, both of which are admittedly just personal preference and not really any objective reasons of superiority. Probably quite a few of Perl's idiosyncrasies which make other people dislike it are what attract me to it (same with C, really). In any case, I wish Perl 6 would finally come around, I'm really curious to see what that'll bring.
  15. So you're saying Haskell is worth looking into? I have been feeling the itch to do a Haskell project for a while, but haven't really had the time to properly acquaint myself with the language. Haha, yeah, funny story actually (well, sort of). I originally started looking into Perl for sysadmin purposes (Linux/UNIX guy), and I noticed that I rather liked it. At around the same time I had the idea to start developing a file organising tool, so I did that in Perl as well. And the more I worked with it, the more I started to really like it. Not really because of any objective reason of superiority, but it somehow just clicked with me, so to speak (hence why the 10 TB+ sorting and stats script is in Perl). I also still have a very interesting book to work through about functional programming in Perl. Not really sure how great of an idea it is, but I'm just somehow drawn to the language. Probably because you can do all kinds of really unreasonable stuff with it, sort of like C (which I also really like). My primary problem with learning new languages at the moment is that I just don't really have the time. But in the long run, there shall be more added to my repertoire, I'm sure. Oh, right, I forgot about Matlab and Mathematica, I kinda need to use those for college actually.
  16. Since it hasn't been mentioned yet, here's a study Google did a few years ago; temperatures well into the 40 C range are not an issue, and if you check out HDD manufacturers' spec sheets, most HDDs are spec'd up to ~55 C or so. https://research.google.com/archive/disk_failures.pdf The most prevalent killer of HDDs, once they've survived an initial timeframe, is simply old age according to that study.
  17. Damn, those RAM sticks look so beast. Yeah, I paid ~40 USD per CPU when I bought my L5630s in spring (before shipping and VAT). Of course, they're not exactly the most high-end of the line (I paid about 900 USD per CPU for my X5680s back in fall of 2012 IIRC), but still easily enough for a SOHO server. Yeah dude, you've been slacking... Not bad, not bad at all. Welcome to the forum, and the list! Damn, the past week has been great for this list. EDIT: Oh, we've also passed 300,000 views:
  18. Great to see the command line getting some love. Some do, it depends a bit. I use xterm, where you can access the cut/copy buffer via <shift><insert>, but other terminals might have <Ctrl><v> as well. But the way buffers are organized in X is a bit weird anyway IMO. But yeah, great to see Windows getting some better command line stuff. A great command line interface can be an excellent asset.
  19. It already is a competition, first of all about who has the most storage, and secondly about how much combined capacity we can muster up. Indeed. Considering the entire list is only at roughly 1.7 PB at the moment, I think we're still a few years off from a single (private) system having that much storage. Uhm, we live all over the world, so I don't see how that's doable. Also we kind of need our storage actually, it's not just for shits and giggles. You are of course free to build a 1 PB system, or contribute to the list with more modest numbers...
  20. Yeah, sounds like you'd need to send it to a data recovery service. Be aware though that such an undertaking is usually pretty expensive, so it's only really worth it if you really need that data.
  21. If you want to know more about the health of your HDD, try running a S.M.A.R.T. diagnostics tool on it, like CrystalDiskInfo or some such. A list of the most common SMART attributes can be found in this Wikipedia article.
  22. Yeah, when I first read about this, it just sounded so completely ludicrous I wanted to see if I could get some numbers to see just how ludicrous it actually is. I would not be surprised if that were indeed the case. Definitely curious about what is actually going on with this. Well, just collecting and processing existing data created by others, even if it's all the data created by everybody, is still much less demanding than what this company is claiming to do, but it would indeed be interesting to know just how much data they have. Found some more interesting numbers in the Wikipedia articles on the exabyte (10^18 bytes) and the zettabyte (10^21 bytes). (Note: 330 exabytes in 2 TB HDDs would be 165 million HDDs, just as a frame of reference.) So yeah... the more research one does, the less plausible these claims become.
  23. Not necessarily. I mean, the website looks pretty elaborate for a drunken hoax IMHO. I think it's either part of an awareness campaign as @MG2R has guessed, or part of an extortion scheme (write to a few thousand people telling them that they've violated copyright; a few will pay because they're afraid of going to court, even if it would never hold up in court). Or it could also be a scam to trick investors into investing in a non-existent company and run off with their money. Or something else, who knows. In any case, I see no way in which this is legit.
  24. As said, it's not real, and the US copyright office clearly states that works created by non-humans can't be copyrighted anyway (see the PDF in my previous post). And also, as laid out in previous posts, it's just not physically possible to do what they're claiming to do; the amount of data creation, processing and storage required would break the laws of nature as far as I can tell.