.:MARK:.

Member
  • Posts

    567
  • Joined

  • Last visited

Everything posted by .:MARK:.

  1. I like this thread, because I like to do the same sort of thing, but I may also have some alternatives to some of that software. I would use something like Emby instead of Plex, though I just watch stuff directly from SMB shares. I would definitely use Sonarr instead of SickRage. And for VPN you could use the TurnKey Linux OpenVPN appliance: https://www.turnkelinux.org/openvpn (or you could look into Pritunl). And essentially the possibilities are endless with VMs.
  2. The servers running these are just crazy; I remember seeing some pictures of an Intel server with 12TB of RAM and 8 x v3 CPUs.
  3. Just assign an IP address to the interface on the second PC, then RDP to that IP.
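For a direct PC-to-PC link like this, the one thing that has to be right is that both statically assigned addresses sit in the same subnet. A minimal sketch with Python's stdlib `ipaddress` module (the 192.168.50.0/24 range and both host addresses are hypothetical examples, not from the post):

```python
import ipaddress

# Hypothetical private range for a direct link between two PCs
network = ipaddress.ip_network('192.168.50.0/24')
pc1 = ipaddress.ip_address('192.168.50.1')  # first PC's interface
pc2 = ipaddress.ip_address('192.168.50.2')  # second PC's interface

# RDP between them only works if both IPs are in the same subnet
same_subnet = pc1 in network and pc2 in network
print(f'RDP from {pc1} to {pc2}: same subnet = {same_subnet}')
```

With both interfaces configured this way, pointing the RDP client at 192.168.50.2 from the first PC is all that's left.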
  4. I completely agree, I wouldn't stream over 720p if I was going to stream at all. But my point was that my CPU is not much of a factor when it comes to 4K recording on my system, as my GPU can handle that with the encoder built into it.
  5. 10Mbit/s is what I use currently, which is enough for my purposes, but I can do higher with no issues.
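To put the 10 Mbit/s figure in perspective, the data-rate arithmetic is simple enough to sketch (the hourly total below is just bitrate converted to bytes, nothing more):

```python
# Rough data-rate arithmetic for a 10 Mbit/s stream
bitrate_mbps = 10
bytes_per_hour = bitrate_mbps * 1_000_000 / 8 * 3600  # bits/s -> bytes/hour
gb_per_hour = bytes_per_hour / 1e9
print(f'{bitrate_mbps} Mbit/s is about {gb_per_hour:.1f} GB per hour')
```

So a 10 Mbit/s stream moves roughly 4.5 GB per hour of upload, which is why upstream bandwidth, not the CPU, is usually the first limit people hit.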
  6. I was talking about the hardware side of things. I don't stream, but I record my desktop for stuff, and I messed around with OBS's config until I got 4K with no lag.
  7. That is simply down to the display output: that CPU can't drive a 4K monitor at 60Hz.
  8. The CPU is not really the factor to consider here; what GPU do you have? I am able to stream 4K with OBS with little to no CPU load because I use the encoder on my 980Ti, which lets me record flawless 4K 60 and compress on the fly.
  9. As @legopc says, the CPU is not totally taxed during gaming, which means you will likely get similar performance in real use because your GPU will be the bottleneck. But what is nice is that you have more cores to mess with, and if that is what you like, go for it!
  10. I wouldn't run an HP in my home tbh, for home labs I'd do the supermicro thing :D.
  11. lol, yeah. The thing is that without curiosity nobody would get anywhere; if I wasn't curious I wouldn't know anything I know now, and there are assholes out there that mistake curiosity for 13-year-olds being 13-year-olds.
  12. You couldn't be more wrong, besides are you not a script kiddie anyway? You sound like one.
  13. Yeah, G.INP improves the reliability of VDSL2, and G.fast I believe is just a higher frequency using a similar system to VDSL2. The problem here is that Openreach (the people who own the infrastructure in the UK) consider themselves and this country the "digital leader" when it comes to broadband, and they do anything to avoid rolling out FTTP. To show their lackadaisical approach to speeds here: there was an EU broadband meeting a few months ago where the countries revealed their goals for coverage and speeds, stating things like "100Mbit/s to 95% by 2017". But the good old UK just says "Superfast by 2020". Like anyone knows or cares what Superfast means... To sum it up, we won't have over 10% FTTP by 2030 here.
  14. Technologies such as G.INP and G.Fast greatly improve the performance of FTTC over short distances, giving more time to plan infrastructure upgrades to fibre.
  15. The issue here is that everyone needs fibre, but if you think you don't need it because it's expensive, then you'll never get it and the infrastructure of your country will not progress. Fibre WILL improve the internet of your country; it just won't be cheap at first... Nothing is. Also, if your ISPs want to be wallet-sucking, they will do what most others do: milk the current old infrastructure for all the money they can and present no alternative. Fibre may be expensive, but at least it's available.
  16. What? And waste time? What do you think is the point of Kali? Why do you assume OP is uneducated? What is the point in anything you've said in this thread?
  17. Well if you're running your own DNS with AD, why would you set your DNS servers to Google?
  18. I have all the learning resources I need, so I would go for a larger SSD so I can actually install my development environments. XD
  19. Will you use any co-processors or compute-acceleration components? Perhaps you could look into workstation boards and an E5 Xeon, and maybe ditch the high-end cards to afford a higher core count. The reason I suggested a larger SSD is the huge IDEs these days, plus all the emulators for various platforms and virtualization needs.
  20. You don't need a 6-core for dev work TBH, and you don't need socket 2011 either. A small SSD as the C: drive is terrible for dev work; get something much bigger. A high-end video card like that is just not needed.
  21. You need to set both the sharing permissions and the NTFS permissions so that Everyone has read (and maybe write) access.
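The reason both sets of permissions matter is that effective access over a share is the most restrictive combination of the two: anything missing from either side is denied. A tiny sketch of that rule (the permission sets are hypothetical examples):

```python
# Effective SMB access = intersection of share permissions and NTFS
# permissions; the most restrictive side wins. Hypothetical sets:
share_perms = {'read', 'write'}  # Sharing tab: Everyone -> Read/Write
ntfs_perms = {'read'}            # Security tab: Everyone -> Read only

effective = share_perms & ntfs_perms
print(sorted(effective))
```

So granting Everyone write access on the Sharing tab does nothing until the NTFS side grants it too, which is the usual cause of mysterious "access denied" on shares.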
  22. When/if you get fiber, you will get an ONT fitted by Google, won't you? If that's the case, then you won't have to buy another modem again, and I think you can sell the modem later. I would pay the $50 for the ability to get faster speeds myself.
  23. Decided to do some Python for a bit of automation stuff on servers, and I made a library for multi-segmented downloading from FTP servers. I don't really know Python, so let me know what I'm doing wrong. https://github.com/MarkPash/pash-ftp/blob/master/pash_ftp.py

      import ftplib
      import os
      import shutil
      import threading


      class Downloader:
          def __init__(self, ftp_server='', ftp_user='', ftp_password=''):
              if ftp_server != '':  # '!=' not 'is not': compare value, not identity
                  self.connect(ftp_server, ftp_user, ftp_password)

          def connect(self, ftp_server, ftp_user, ftp_password=''):
              self.ftp_server = ftp_server
              self.ftp_user = ftp_user
              self.ftp_password = ftp_password
              self.ftp = ftplib.FTP(self.ftp_server, self.ftp_user, self.ftp_password)

          def download(self, ftp_file_path, threads):
              self.ftp_file_path = ftp_file_path
              self.parts = threads
              self.ftp_file_size = self.ftp.size(self.ftp_file_path)
              # Integer division: a float chunk size would break the byte offsets
              self.chunk_size = self.ftp_file_size // self.parts
              self.last_chunk_size = self.ftp_file_size - (self.chunk_size * (self.parts - 1))
              partdownloaders = []
              for part in range(self.parts):
                  if part == (self.parts - 1):
                      this_chunk_size = self.last_chunk_size
                  else:
                      this_chunk_size = self.chunk_size
                  # Each part gets its own control connection
                  ftp = ftplib.FTP(self.ftp_server, self.ftp_user, self.ftp_password)
                  partdownloaders.append(DownloadPart(ftp, self.ftp_file_path, part,
                                                      self.chunk_size * part, this_chunk_size))
              for part in partdownloaders:
                  part.thread.join()
              # Stitch the completed parts together, then clean up
              with open(os.path.basename(self.ftp_file_path), 'w+b') as f:
                  for downloader in partdownloaders:
                      with open(downloader.part_name, 'rb') as part_file:
                          shutil.copyfileobj(part_file, f)
              for part in partdownloaders:
                  os.remove(part.part_name)


      class DownloadPart:
          def __init__(self, ftp, ftp_file_path, part_number, part_start, part_size):
              self.ftp = ftp
              self.ftp_file_path = ftp_file_path
              self.ftp_file = os.path.basename(self.ftp_file_path)
              self.part_number = part_number
              self.part_start = part_start
              self.part_size = part_size
              self.part_name = self.ftp_file + '.part' + str(self.part_number)
              self.thread = threading.Thread(target=self.receive_thread)
              self.thread.start()

          def receive_thread(self):
              try:
                  # rest= starts the transfer at this part's byte offset
                  self.ftp.retrbinary('RETR ' + self.ftp_file_path, self.on_data,
                                      100000, self.part_start)
              except ftplib.all_errors:
                  # on_data raises deliberately to abort once the part is complete
                  pass

          def on_data(self, data):
              with open(self.part_name, 'a+b') as f:
                  f.write(data)
              if os.path.getsize(self.part_name) >= self.part_size:
                  with open(self.part_name, 'r+b') as f:
                      f.truncate(self.part_size)
                  raise EOFError  # stop retrbinary for this part
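The chunk-splitting arithmetic the downloader above relies on can be checked in isolation: every part gets an equal share, the remainder is folded into the last part, and the offsets tile the file exactly. A quick sketch with made-up numbers (10 MB plus one byte, four threads):

```python
# How the downloader splits a file: equal chunks, remainder in the last part
file_size = 10_000_001  # hypothetical file size in bytes
parts = 4               # hypothetical thread count

chunk_size = file_size // parts                    # 2_500_000
last_chunk = file_size - chunk_size * (parts - 1)  # 2_500_001 (takes the remainder)
offsets = [chunk_size * p for p in range(parts)]   # REST offset for each part
sizes = [chunk_size] * (parts - 1) + [last_chunk]

# Every byte is covered exactly once, with no gaps or overlaps
assert sum(sizes) == file_size
print(offsets, sizes)
```

This is also why the integer division matters: with float division the offsets drift and the parts no longer tile the file.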
  24. First you need to think about how you will get to your home network. Do you have a dynamic IP address? If so, you may need to sign up with a Dynamic DNS provider. This gives you an easy-to-remember address that points to your home network (like example.no-ip.org) and keeps working when your external IP changes. Secondly, you need to know what you wish to access. I personally like to access everything, so I run a VPN server on my LAN. This lets me be a part of my LAN from anywhere in the world, and with this method I can reach any computer in my LAN. Now that I can access everything in my LAN, I just need to configure computers to accept remote connections via Remote Desktop or SMB shares.
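Once the VPN tunnel is up you are effectively "on the LAN", so the last step above amounts to checking that the usual service ports answer. A minimal sketch with the stdlib `socket` module; the ports are the standard Remote Desktop and SMB defaults, but the LAN address is a hypothetical example:

```python
import socket

# Standard default ports for the two services mentioned in the post
SERVICES = {'RDP': 3389, 'SMB': 445}

def reachable(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical LAN address of a machine behind the VPN
for name, port in SERVICES.items():
    print(name, 'open' if reachable('192.168.1.20', port) else 'closed')
```

If a port shows closed over the tunnel, the machine-side firewall or the "allow remote connections" setting is usually the culprit rather than the VPN itself.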