
Fabricio20

Member
  • Posts

    15
  • Joined

  • Last visited

Awards

This user doesn't have any awards

About Fabricio20

  • Birthday July 28

Contact Methods

  • Discord
    Fabricio20 #5913
  • Steam
    adm_fabricio20
  • Origin
    DaddyFabby
  • Twitter
    adm_fabricio20
Profile Information

  • Gender
    Male
  • Location
    Joinville, Brazil

System

  • CPU
    i7-8700k @ 3.70GHz
  • Motherboard
    ASUS ROG STRIX Z370-G
  • RAM
    16GB DDR4
  • GPU
    EVGA GeForce GTX 1060
  • Storage
250GB M.2 + 500GB SSD + 4TB across multiple HDDs
  • Operating System
    Windows 10

Recent Profile Visitors

714 profile views

Fabricio20's Achievements

  1. Hey guys, I'm currently working on a project where we'll be assembling a SAN for several compute nodes to use, and I'm looking for some tech tips. We'll be working with 3 nodes to start with, each node having two 8TB drives (6x 8TB [48TB] total for starters). These nodes will be used to run basically anything, from CDNs to media servers to seedboxes, so speed is important for us, but reliability is even more so. We are looking into getting some NVMe SSDs to use as caches, for both reads and writes (see below on reliability). All nodes have high-speed networking between them (>10GBit), so the network is not going to be a bottleneck (at least for now; if it becomes one, we'll work on it).

     The current plan is to have all nodes running GlusterFS, in a mode where one full node can fail (2 operational + 1 redundant). We were initially planning on using Gluster Tiering to mount our SSDs as caches, but sadly that feature was discontinued without a replacement. Because of this, we are now at a loss as to how to add proper caches to our boxes. We are now looking into ZFS volumes in non-raid mode (below Gluster), where we add the SSDs as cache devices in our zpool configuration, with an extra disk attached as a spare in the near future (allowing for a quick local rebuild in the event of a zpool disk failure). Is ZFS recommended for this? In this configuration? Are there alternatives (lvm cache?)? A rough sketch of the layout we have in mind is below.

     A note on reliability for write caches: we understand write caches are quite dangerous, since if the cache fails during writes we could have severe data loss (anything that wasn't flushed to disk). However, since we plan on running Gluster on top of it, I believe (without testing yet) that this won't be an issue, as data replication happens at the network level; if a node fails because of that, the data was being written to at least one extra node at the same time. With this logic in mind, we are OK with considering a node fully offline if a single disk has failed (since we only have two per node so far anyway...), as the network-level "raid" will take care of the net(data?)split.

     Edit: Specs of the current stack:
     - 3x 24-drive chassis (4U)
     - 6x 8TB HGST drives [32GB ECC DDR3 - 2x Xeon E5-2620]
     - Networking (a lot of it)

     So, keeping in mind that:
     - Reliability is a must (read: the least possible downtime in the event of a failure).
     - Performance comes right after reliability.
     - Obviously we want to maximize usable storage, but only after the above points.
     - We are open to any software-stack suggestions and hardware layout configurations. (We tried Ceph before Gluster, but it seemed quite hard for starters... maybe someone can link me some nice docs to quickly set up a test env?)
     - We have not yet purchased the SSDs for this (so if you have other suggestions like Optane [quite expensive, though], shout!).
     - If the software stack can handle it, we are OK with a single disk taking a whole node offline.

     Do any of you, experienced hoarders, have any suggestions?
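     For illustration, here's a minimal sketch of the per-node layout we're considering. Device names (sda, sdb, nvme0n1, sdc), the pool name "tank", and the volume name "sanvol" are all placeholders I've made up, not a tested config:

     ```
     # Per-node zpool: plain striped (non-raid) pool; Gluster handles redundancy.
     zpool create tank /dev/sda /dev/sdb

     # NVMe SSD as an L2ARC read cache:
     zpool add tank cache /dev/nvme0n1

     # A second NVMe (or a partition) as a SLOG to absorb synchronous writes:
     # zpool add tank log /dev/nvme1n1

     # Future hot spare. Caveat: ZFS only resilvers onto spares from redundant
     # vdevs (mirror/raidz), so in a plain stripe a spare alone can't rebuild --
     # the node would go offline and Gluster would handle recovery, as planned.
     zpool add tank spare /dev/sdc

     # On top of the three per-node pools, a dispersed Gluster volume with
     # one node of redundancy (the "2 op + 1 redundant" layout):
     gluster volume create sanvol disperse 3 redundancy 1 \
         node1:/tank/brick node2:/tank/brick node3:/tank/brick
     ```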
  2. Hey peeps, I'm currently having an issue with my Raspberry Pi 3 (Model B). If I plug the Pi 3 into my network (cable or WiFi), after a while (an hour? 30 minutes?) everything starts to slow down. Packet loss goes way up, and after some hours everything is dead (100% packet loss); every device connected to both WiFi and cable just dies out. This is exclusive to the Raspberry Pi 3; no other device on the entire network causes this. It doesn't seem to be related to the OS, as I've seen this happen on Raspbian and Ubuntu (raw) as well as HomeAssistant. I'm running an ISP-provided modem and a mesh WiFi router (Tenda Nova). Any tips on debugging this? I really want my Pi 3 working! In case it helps, below is the kind of capture I can run to gather data.
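     A sketch of where I'd start collecting evidence once the slowdown begins (interface name eth0 and the MAC address are placeholders; adjust for the actual setup):

     ```
     # On another Linux machine on the same LAN, watch what the Pi is sending
     # once things slow down -- e.g. to spot ARP/multicast flooding:
     sudo tcpdump -i eth0 -n ether host b8:27:eb:00:00:00

     # On the Pi itself, look for TX/RX errors and drops on the interface:
     ip -s link show eth0

     # And check the kernel log for NIC or undervoltage complaints:
     dmesg | tail -n 50
     ```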
  3. Can confirm, that is more expensive than what I would pay for a new one here in Brazil with all taxes included.
  4. I would go and upgrade the graphics cards, due to them having more than 3GB of VRAM; 3GB is not enough for a lot of games if you want to run at 1080p (with at least a constant 60fps).

     As a personal note: I currently have an i5 7400 along with an EVGA 1060 6GB, and I feel somewhat bottlenecked by this CPU. I haven't run any tests to confirm this, but overall system performance looks bad, along with game performance, even when the graphics card is barely being used. CPU usage is always in the high 40s (%) even when all I have running is Windows and Discord (and it's not like either is doing anything, as far as the process list tells me).
    1. Canada EH

       Whereabouts in the Philippines?

    2. Kawaii Besu

       Hmm?

       Oh, Quezon City. :T

  5. Good lord, fixed, thanks for informing me of my stupidity.
  6. As spotted by grsecurity on Twitter: Intel seems to have released a microcode update yesterday (the 8th) that covers almost all of their range of processors, including a range of server processors like the 64-bit Intel® Xeons. It's still unclear exactly what this update addresses (confirmed via multiple news sources), but one can speculate, based on TechCrunch's citation of Intel's CEO, Brian Krzanich (quoted below), that these updates address the previously disclosed CVE-2017-5753 and CVE-2017-5715 (Spectre variants 1 and 2) and CVE-2017-5754 (Meltdown). This microcode update comes as a handy tool for system administrators managing the Linux platform, as it features an after-boot update mechanism (a sketch of the usual flow is below).

     Update (11th): Several performance-impact reports have been made so far about this and other kernel/BIOS updates released across the board; most of them show a decrease of up to 6% in performance on newer Intel hardware (6th gen and up) and some bigger impacts on older platforms (5th gen and below).
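     For reference, the Linux "late load" path usually looks roughly like this (Debian/Ubuntu package name assumed; other distros differ):

     ```
     # Install the updated microcode blobs:
     sudo apt install intel-microcode

     # Trigger a late reload without rebooting (requires root):
     echo 1 | sudo tee /sys/devices/system/cpu/microcode/reload

     # Verify the microcode revision now reported by the CPU:
     grep microcode /proc/cpuinfo | sort -u
     ```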
  7. At least here in Brazil, R6S is still played a lot; it's one of the most played games on Steam (globally), if I'm not mistaken. There's some stuff to catch up on, yes, but it's not much, and the pace of seasons is slow enough to learn everything. On that note, do not buy the Starter Edition; it's not worth it: all operators are set to cost 25k credits (where normally only DLC operators would). The Standard Edition is only a bit more expensive and is easier to get going with, since the pricing drops down to (at most) 2k credits per (non-DLC) operator.
  8. Freedom of the media/press, that is. I guess "instead" of fixing their issue, they decided to fix their public image?
  9. Tavis Ormandy, a member of Google's Project Zero, just tweeted about a lawsuit filed by Keeper Security (makers of the Keeper password manager) against Ars Technica over "misleading claims" in their "Windows Bundled App" article.

     In the original article, Ars Technica describes how Tavis Ormandy (re)discovered a flaw in Keeper after noticing that Windows 10 was bundling the app after installation. According to Tavis, the flaw was exactly the same as one he had previously disclosed (Issue 917), which had been patched by Keeper (back in August 2016). The article also indicates that this issue could have been avoided if Windows wasn't downloading the app without the user's consent, as it should only affect users who agreed to use the app by trusting it with their passwords.

     Today, however, Tavis Ormandy tweeted about the lawsuit that Keeper Security has filed against Ars Technica; this was followed by a link to the public case and later by the case's document.

     Original tweet: https://twitter.com/taviso/status/943532596012638208
     Case: https://www.pacermonitor.com/public/case/23282996/Keeper_Security,_Incv_Goodin_et_al
     Document: https://www.documentcloud.org/documents/4333677-Keeper-Security-Inc-v-Goodin-et-al.html

     The document reads:

     It follows with a claim of defamation, after acknowledging that Ars Technica had corrected some of the "misleading claims" in their article at least twice.