
Alphy13

Reputation Activity

  1. Informative
    Alphy13 reacted to CWP in Why Separate Boot, Home, and Var Partition in Linux?   
    /boot: For legacy reasons. In the olden days, the BIOS could not reference any sector located beyond the 1024th cylinder. Forcing /boot to be its own partition, and the first one created, ensures that the necessary files sit below the 1024-cylinder limit. The Linux kernel and the initrd/initramfs image live here so that everything needed can be loaded into memory to start the actual storage drivers for direct access, getting rid of the slow and limited BIOS routines for reading data. I am not sure if UEFI solved this issue though...
     
    /home, /var, /usr, ...: Compartmentalize the storage space. For example, if you have an errant user or process that fills up the /home partition, it does not affect /var, where the log files are stored. You can then review the logs in order to investigate what happened. Also, take a look at the /etc/fstab file. Each line has two numbers at the end. One is for dump (an old, old backup program), and the other is to tell the system to fsck on boot (file system check).
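    As a small illustration (the device and mount point here are invented, not from any real fstab), this is where those two trailing numbers sit in a line:

```shell
# A hypothetical /etc/fstab line; fields are whitespace-separated:
# device, mount point, fs type, options, dump flag, fsck pass number.
line='/dev/sda3  /var  ext4  defaults  0  2'

# Field 5 is the dump flag (0 = never dump), field 6 is the fsck
# order (0 = skip, 1 = root fs, 2 = everything else).
dump=$(echo "$line" | awk '{print $5}')
pass=$(echo "$line" | awk '{print $6}')
echo "dump=$dump pass=$pass"
```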
     
    It also enables security and/or different file system strategies. For the security example, /boot and /usr are mostly static (except while you are updating); /home and /var, not so much. In other words, you can configure the system to mount these partitions as read-only on startup, and only remount them read-write before you update.
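    A minimal sketch of that read-only strategy, assuming a made-up device name:

```text
# Hypothetical /etc/fstab line: mount /usr read-only at boot.
/dev/sda3  /usr  ext4  defaults,ro  0  2
```

    Before an update you would run `mount -o remount,rw /usr` as root, then remount it read-only afterwards with `mount -o remount,ro /usr`.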
    A useful mount flag is noexec, where you are indicating to the system that it should not honour any binary programs that are marked executable. For example, if a web-uploaded file is stored in /tmp, and there just so happens to be an exploit that remotely triggers the ability to start a program, mounting /tmp with the noexec flag would help keep that from happening. Another example would be if you do not allow your users to run any programs that they download or compile themselves: mount /home with noexec. If there is no reason why a certain mount point should contain runnable programs, it should be mounted with noexec.
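    As a sketch, fstab entries for that might look like this (device names are hypothetical; nosuid and nodev are commonly paired with noexec, and are my addition rather than part of the post):

```text
# <device>   <mount>  <type>  <options>                       <dump> <pass>
/dev/sdb1    /tmp     ext4    defaults,noexec,nosuid,nodev    0      2
/dev/sdb2    /home    ext4    defaults,noexec,nodev           0      2
```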
     
    For different file system strategies: Gentoo uses Portage for its package management system, which stores an enormous number of small files in /usr/portage. You can format that mount point's partition with a file system that is more efficient at storing small files. Of course, this recommendation was made before SSDs were available or affordable.
    On the other hand, the f2fs file system is supposed to be more efficient for flash-based storage (it minimizes writes), but not every bootloader supports f2fs, so you may have to use ext4 for /boot and f2fs for the rest.
    /tmp is supposed to be volatile. You cannot expect any file stored there to survive a reboot. Some distributions will deliberately wipe it on startup. Some security auditing scripts (CSF, some profiles in RHEL/CentOS 7's OpenSCAP, etc.) recommend mounting a ramdisk there, again for compartmentalizing, but there is also next to no slowdown in "wiping" that directory.
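    A ramdisk /tmp is a one-line fstab entry; the size cap here is a made-up example, not a recommendation from the post:

```text
tmpfs  /tmp  tmpfs  defaults,noexec,nosuid,size=2G  0  0
```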
     
    Of course, most of the above would not apply to you. These are some examples I have encountered that either require, or benefit from, using different mount points. Personally, I have not been following the separate-mount-points strategy, instead opting for whatever the distribution decides (which is usually a separate /boot and a separate /home, if space allows), although I may take extra security precautions if I am setting up an Internet-facing server.
     
    TL;DR: Personal preference. Some are for legacy reasons, some for technical reasons. Some were suggested long ago but do not apply now. Take extra precautions if you are setting up a Linux-based server that has direct access to the Internet.
  2. Agree
    Alphy13 reacted to Scotter97 in We're Building a Gaming LAN Center!!   
    @GabenJr
    Saw that Linus was getting the new Dell switch set up to the point of web management, very cool.  One word of caution: don't allow the switch IP address to communicate with the internet.  Best security is to not route that IP (create a management VLAN, maybe?).  It doesn't matter who the vendor is, built-in web UIs always have known vulnerabilities.
     
    Thanks for your time.
  3. Informative
    Alphy13 reacted to AntVenom in Multi-computer Water Cooling   
    Just don't mix metals, and use the right amount of PT-Nuke for the volume of water you have, and you'll be good. Nickel and copper are fine, just keep aluminum out of ALL parts of the loop. (Coming from the guy who did basement water cooling w/ it still working)

    And when it comes to pumps, it's MOSTLY all about head pressure. I use a pump meant for a computer water cooling setup, but it has 20 feet of head pressure, and it has no problem comfortably pushing about 6 feet of water vertically. It churns along day in and day out. I wouldn't use the pump inside of my computer, because it's uncomfortably loud for that, but it's FAR quieter than something overkill like a Little Giant pump, which I used in the past. Little Giant pumps work, but are awful. Avoid them if you can. PURELY because they begin to audibly grind after the 2nd week, and never stop.
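    To put rough numbers on that: a column of water exerts about 0.433 psi per foot of height (a standard approximation, not from the post), so the pump specs above can be sketched as:

```shell
# Static head of water converts to pressure at roughly 0.433 psi per foot.
head_ft=20   # the pump's rated maximum head, from the post
lift_ft=6    # the vertical rise the loop actually needs

awk -v h="$head_ft" -v l="$lift_ft" 'BEGIN {
  printf "rated head as pressure: %.1f psi\n", h * 0.433
  printf "head left to overcome loop restriction: %d ft\n", h - l
}'
```

    The leftover head after the vertical lift is what the pump has available to fight the flow restriction of blocks, radiators, and tubing.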
  4. Informative
    Alphy13 reacted to ninbura in Multi-computer Water Cooling   
    Thanks!

    I've got around 80ft of tubing in the system as a whole so systems are around 40 feet from the radiators.

    I've had a few issues: 
    With 3 pumps at full speed, considerable pressure, and cheap tubing from Newegg, I found that the tubing downstairs was expanding. I had some leaking with the brass joints at first, so I decided to use tape and caulk to seal them, and some of the caulk got into the system, gumming up my blocks. One of the radiators sprang a leak while running the loop with only one system attached.
    After adding speed controllers to the pumps and replacing the tubing downstairs with some thicker stuff from Home Depot, nothing is expanding. After dismantling each PC, water block, and reservoir, and cleaning them out / flushing the system several times, there is no more blockage. And after removing one of the pumps from the system and RMA'ing the defective radiator (which really shouldn't have leaked regardless), I haven't had any other issues; flow rate is still great.
  5. Informative
    Alphy13 reacted to AnonymousGuy in Multi-computer Water Cooling   
    I have a whole-house multi-system loop deployed right now.  You want to use heat exchangers from a cost and maintenance perspective.  7 gallons of Mayhems X1 (which would be a small reservoir) is going to cost nearly $200 and is a pain in the ass to mix.  It gets contaminated and you're going to cry.  Also leaks... a water leak is a problem; a coolant leak becomes a disaster of sticky, staining mess.
     
    PCs also don't need any airflow inside of them; natural convection works.
     
    By far the biggest problem is keeping the main reservoir from going to shit.  One of my biggest errors was using an open-air (error .... air...haha) reservoir.  Evaporation is a significant issue, it gets contaminated with algae all the fucking time, and occasionally some bugs find their way into it too.  Sure, I have filtration and UV sterilization, but they're solutions to a problem that shouldn't exist.  Version 29 of my loop is in progress with a sealed reservoir, built by me.
     
    Also, this whole thing is expensive as fuck to implement.  You're basically dabbling in an area where no one else goes outside of commercial environments.
  6. Agree
    Alphy13 got a reaction from handymanshandle in Where has lack of net neutrality taken us? (right forum this time)   
    My concern with NN is that in attempting to open up the internet, we will actually restrict it. Right now, it is partially outside the domain of the gov. NN would be trading corporate regulation and throttling for government regulation and throttling. If one ISP starts restricting you in ways that you don't like, you can choose another. The NN bill as it was would need to be enforced, and it would have allowed the gov to do so. How do you enforce it? The gov could easily require ISPs to put up hardware that tracks all traffic and makes sure everything is "neutral", but where does that stop? As it is, the main factors keeping new ISPs from introducing new technologies and more competitive services are all of the existing restrictions and regulations. Adding more regulation is going to put more power in the hands of the huge corporations.
  7. Funny
    Alphy13 reacted to Enderman in Multi-computer Water Cooling   
    Hopefully you don't put any fish in that aquarium..
  8. Like
    Alphy13 reacted to manikyath in Multi-computer Water Cooling   
    there's an entire DC in Canada that's watercooled, the Linus man even visited them:
     
  9. Like
    Alphy13 reacted to bowrilla in What's next for computers   
    General purpose quantum computers won't have an advantage over classic binary-based deterministic computers. That is just not how it works. Quantum computing only has advantages in very specific applications; browsing, games, and writing text for work are not among them. On top of that: you would have to throw away ALL of your current software, since nothing is compatible. So guys, please stop dreaming of quantum computers. They're NOT what you're thinking they are.
     
    And considering AI – we do have AI. It's already all around us. What you guys are thinking of (I guess) is general purpose AI. That is a very dangerous thing, and there's nothing like an AI processor, unless you're thinking of the Titan V as an AI processor. Still, all neural networks are just being projected onto regular binary-based structures. Add more cores and you improve performance.
     
    The near future will bring slowed development. We're close to the physical limit of silicon. A new material will be introduced within the next ~10-20 years. In the meantime we'll see further parallelization and rising core counts. Carbon nanotubes might be a possibility, but the thing is: they aren't new and not even difficult to manufacture (heck, you can make them yourself at home with some tape and a pencil), yet we don't see them in many applications. Nanotubes have been around for about 30 years, and yet only in 2012 did we see the first demonstration of a 9 nm nanotube transistor outperforming a silicon-based equivalent. There was a "computer" about a year later, but that's kind of it. Nanotubes are not a miracle material. They might only push the limit for a couple of years, maybe one or two decades, but then we're at the same point again and quantum mechanics is knocking on the door again.

    Next thing is the human interface. How come people aren't surprised that the way we interact with our computers hasn't changed much in decades? Keyboards haven't really changed. In fact, the good old mechanical keyboards are more and more in demand again, and the oldest ones seem to develop a mythical aura. Or what about mice? The basic principle hasn't changed; the mechanical balls got replaced by lasers, but that's about it. The next ~30 years will bring a development of more and more "bionic" technology and a more direct way of interacting with electronic devices. I'm not talking about thought control (people usually have no idea how complicated that would actually be – ever tried to control your brain waves on command? Guess what, it's really tricky!). Think of implanted input devices, maybe even microphones for telecommunication, ear implants, and maybe even the first eye implants. We already have some very basic implants (think of cochlear implants) – the medical development will give (some) blind people sight again and some deaf people hearing. Those implants will probably be even better than the biological parts they're replacing. By the end of the century, electronically improved/modified people will be around. It's not unlikely that people will have their personal computers/smartphones implanted.
     
    In the mean time: smart glasses. Google had its early take on smart glasses – for now they're done, yes. But they will come back. Augmented reality will most likely be the norm. 

    And lastly: the merging of devices. We're already seeing the beginning of this development. Think of TV sets 15-20 years ago. The first LCD and plasma screens were around, offering 720p or even 1080i for the first time – yet those sets were just plain displays with some speakers. They didn't have an OS running media content, streaming apps and so on – just displays. Today there's basically no point in having an AppleTV or a similar device anymore; you can just have one device do all of it. Or think of MP3 players, Walkmans/Discmans/MD players. You don't have those anymore; it's all in your phone. The merging of devices is obvious. This trend won't stop. I expect smartphones to become more and more powerful and to replace your regular computer at some point. We've already seen first attempts to achieve this, and 2-in-1 laptops (or detachables) are on the rise. Combine that with powerful wireless connections and smart devices around you, and you can just send your video signal straight to the nearest big display, with wireless power/charging and probably projected input devices on every surface.
  10. Agree
    Alphy13 reacted to AngryBeaver in Where has lack of net neutrality taken us? (right forum this time)   
    The biggest problem here is that these big ISPs spend tons of money funding politicians, which in turn helps them push their agendas. Look at the Michael Cohen ordeal with AT&T, Verizon, Comcast, etc. Then there is the fact that they are able to spend millions of dollars annually to block small startup and municipal ISPs. These are companies that took massive billion-dollar government grants to build out their infrastructure, yet aren't required to share those same pathways with other companies?
     
    In my area they have blocked an extremely good fiber company (10GBe for $249/m). They have basically halted its ability to expand across a park area and also across the river. These are areas where Comcast either controls the poles or conduits and is blocking the company's access to them.
  11. Agree
    Alphy13 reacted to Maug in We're Building a Gaming LAN Center!!   
    Exactly what I was thinking about.
    This is it, the perfect timing, the perfect project.
    Reborn without initial issues, dedicated to the craziest LAN center project ever... Perfect for LTT spirit !!!