Posts posted by .:MARK:.

  1. 56 minutes ago, JohnBRoark said:

    Well it'd be a really cool idea, but it doesn't really work like that from what I understand about the way video games work. A game is going to need to pull memory and data from your physical hard drive to run properly. Now I'm not entirely sure if this will work over a network- a game just might not understand how to run properly over the network and I know there are plenty of operating systems out there that won't let you run applications at all over network shared stuff. 

     

    The cost and time it takes to implement 10gb networking just to put your games on a different computer feels like a complete waste. For the money you're going to spend on decent NICs you could just buy another hard drive.

     

    56 minutes ago, Scruffy90 said:

You may run into a few issues: registry entries created when the game is installed, random access latency, and other things that I'm not entirely sure the 10 gigabit connection will handle. Some people have found success with less demanding titles. Don't know of anyone that tried it with anything more recent. The most common complaint though is load time. Seems to have increased quite a lot.

     

    Most of this is false.

    A game doesn't "pull memory" from a hard drive. And issues with registry are unlikely if the setup is done properly.

     

To execute a game, the operating system loads the binary into RAM. To do that, the operating system fetches it through its own layers of abstraction, which allow it to treat network filesystems like local ones, especially with solutions like iSCSI. When an iSCSI disk is mounted, it appears to every program on the computer like any other disk, accessible through the same filesystem API as local disks.

     

The game binary can read resources from the mounted filesystem because it goes through the operating system's filesystem API, so everything works as normal. Registry issues should not occur as long as paths remain constant.

     

As @leadeater mentioned, a good 10Gbit NIC is nice to have; I prefer Intel ones such as the X520-DA2. And the nicest protocol for this is iSCSI, as it works more like a true disk rather than a mapped network drive. You'll likely see faster speeds than a SATA 3 SSD, since the link can exceed SATA 3's 6Gbit/s.
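To illustrate what "appears as any other disk" means in practice, here is a minimal sketch of attaching an iSCSI target on a Linux client using open-iscsi. The NAS address and target IQN are hypothetical; Windows has an equivalent built-in iSCSI Initiator.

```shell
# Assumed: a NAS at 192.168.1.10 exporting an iSCSI target (address and IQN are made up).
# Discover the targets the NAS offers:
sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.10

# Log in to the target; the kernel then exposes it as a regular block device (e.g. /dev/sdX):
sudo iscsiadm -m node -T iqn.2017-01.lan.nas:games -p 192.168.1.10 --login

# From here it can be partitioned, formatted, and mounted like any local disk:
lsblk
```

Once logged in, the game installer and the game itself never see a network share, only an ordinary block device.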

  2. @leadeater @dalekphalm I'm curious since I am helping someone build a ZFS array right now. I have a few questions.

    First of all, I might add I don't care about ECC or non ECC, to me it makes no sense to argue over it.

    But I am curious about the behaviour of the filesystem itself, how it can be bottlenecked by certain types of loads and how it will deal with those bottlenecks. If someone can provide or link to a good explanation of different kinds of loads on the filesystem and how it stresses resources, then a mythical rule of thumb won't be necessary.

     

    So if one of you can briefly give an example of a load, and how the filesystem deals with it and then what resources it uses for that, then maybe we can get a better idea of how the lack of a certain resource will impact performance.

     

For example: I have no deduplication or compression enabled on a zpool with 2 vdevs of 8x2TB drives in Z2, and no ZIL or L2ARC. If I do a write over iSCSI on a 10Gb connection, how does the filesystem handle the writes, and what resources will it use?
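For reference, the pool layout described above could be created like this (device names and the pool name "tank" are hypothetical):

```shell
# One pool, two raidz2 (Z2) vdevs of eight 2TB drives each, no SLOG/L2ARC devices:
sudo zpool create tank \
  raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh \
  raidz2 /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp

# Dedup and compression off, matching the scenario in the question:
sudo zfs set dedup=off tank
sudo zfs set compression=off tank

# Watch per-vdev throughput while running the iSCSI write test:
zpool iostat -v tank 1
```

`zpool iostat -v` is handy here because it shows whether writes are spread evenly across both vdevs or one is bottlenecking.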

  3. If you want your traffic to be encrypted so that your ISP cannot view it, you need to control a node outside of your ISP network and make a secure connection to that.

    Then give people the address to that node (a VPS for example) and ask them to connect to that instead.

    What you should do is forward all the traffic on those ports on the node to your home through a VPN connection.

That way, all traffic from you would go through that node before it leaves for anywhere else on the internet, and all traffic from the internet would go to that node and travel through an encrypted link before it hits your home.
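A minimal sketch of the forwarding step on the VPS, assuming a Linux box with an established VPN tunnel where the home end has the tunnel address 10.8.0.2 and the service listens on TCP 8080 (both values are made up for illustration):

```shell
# Allow the VPS kernel to forward packets between interfaces:
sudo sysctl -w net.ipv4.ip_forward=1

# Rewrite the destination of incoming traffic on port 8080 to the home machine
# on the other end of the VPN tunnel:
sudo iptables -t nat -A PREROUTING -p tcp --dport 8080 \
  -j DNAT --to-destination 10.8.0.2:8080

# Source-NAT the forwarded traffic so replies come back through the VPS:
sudo iptables -t nat -A POSTROUTING -d 10.8.0.2 -p tcp --dport 8080 \
  -j MASQUERADE
```

People connecting to the VPS address on that port then reach your home service without ever learning your home IP, and your ISP only sees the encrypted tunnel.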

  4. On 1/11/2017 at 7:26 PM, tt2468 said:

A buddy of mine has been testing Sophos Home. Apparently fq_codel works super well with 500Mbps+. I think it could be a good alternative.

Pretty much anything running on x86 will handle fq_codel better; the EdgeRouters have a very low-power MIPS chip in them.
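On a generic x86 Linux router, enabling fq_codel is a one-liner with `tc` (the interface name `eth0` is an assumption; use whatever your WAN interface is called):

```shell
# Replace the root queueing discipline on the interface with fq_codel:
sudo tc qdisc replace dev eth0 root fq_codel

# Verify the active qdisc:
tc qdisc show dev eth0
```

On EdgeOS itself this is wrapped in the smart-queue feature rather than raw `tc` commands, which is part of why QoS there costs so much performance.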

  5. 6 minutes ago, Windspeed36 said:

    To further note, the ERL is also not ideal if you're on gigabit and want to run QoS. Without QoS it can handle 1000mbit routing without a problem but QoS really takes a toll. 

To add to this, enabling QoS basically removes the ability to use hardware offloading, since it manages queues in software; you'll get roughly 80-100Mbit/s with QoS on.

    Though I really doubt that you'll need any kind of QoS on a gigabit line.

  6. 13 hours ago, Remix said:

    I almost always recommend the ER-X to my clients, because it's essentially the same performance (with a few exceptions) for a bit less than the ER-Lite. Both have advantages, such as the QoS on the ER-X or the larger flash storage on the ER-Lite.

    The larger flash is not just an advantage, but a requirement. The ER-X can't even update successfully because of how tiny that flash is.

  7. 6 minutes ago, Davin. said:

I accidentally clicked on "format", not realising I wiped all my data.

But never mind... I just wanna use the PC again but I can't proceed with the specified error.

I watched some youtube vids but I'm still not sure...

Click each partition and click Delete. You will then have one item left (called "Unallocated Space" or something similar); just select it and click Next.

  8. 41 minutes ago, Jonny said:

    I run couchpotato, sonarr, transmissions, emby, Jackett, gitlab, nagios and more on mine.

You must have good connection speeds between you and looney

    :D

    The storage server is designed to be functional standalone and also somewhat portable (as portable as several Us of rackmount gear can be).

    The virtualisation servers run a whole lot more VMs running many services.

The connection between me and looney isn't great, but the most important services have HA and failover.

I'll work on a network diagram for me and @looney when we're done with most of the work.

  9. 2 hours ago, babadoctor said:

    http://askubuntu.com/questions/53822/how-do-you-run-ubuntu-server-with-a-gui

What is the most lean/lightweight GUI environment for an Ubuntu 16.04 32bit server?

    The computer is quite slow, so, it would benefit me a lot to install something lightweight. (512MB of vram with a 12 year old GPU from the dump)

    I saw this on that article:

    
    sudo aptitude install --without-recommends ubuntu-desktop

    Is that the most lightweight I can get in order to not affect performance?

     

    Can I ask why you want to run a GUI on a server?

  10. 2 hours ago, Jonny said:

    What do you run on these servers?

That's a lot of $$$£££

    He runs things... xD

     

    But seriously, the servers are used for a variety of things.

    The new storage server will run a hypervisor which will run the storage host and also some VMs for rtorrent, sonarr, couchpotato.

    The server will also be used to store CCTV footage a few times a year.

     

I should also point out that I know this because @looney and I share an RFC1918 block and have an IPsec tunnel between us, and our servers are joined and managed together.

  11. A server serves. It's a term used so loosely, I feel that is the best way to describe it.

You'll see hardware called servers but also software called servers, and some more unusual things like display servers: a display server isn't a server in the conventional sense, but it serves the graphical display output on a computer.

If you want to know what a specific type of server is (for example, if people you talk to frequently mention "steam servers" or "xbox servers"), you are likely looking at game servers or web servers. These are usually servers in the sense of a software program running on a remote machine that serves something to you or others.

The fact that the people above may or may not agree on a definition indicates that the precise meaning of the word is relatively unimportant; the general idea is what matters.

Honestly, having servers myself and working on a bunch of others, IPMI is not very useful. I only ever use it for mounting an ISO and going through the install process of a hypervisor. After that, the server is either online or rebooting (once or twice a year), and there is no need for IPMI. I really wish people would stop treating it and the equivalents from other manufacturers as an amazing feature for remote access, because it's a security nightmare and nowhere near as good as OS built-in remote tools like SSH or RDP.
