
ig8uh8m8

Member
  • Posts

    21
  • Joined

  • Last visited

Awards

This user doesn't have any awards

About ig8uh8m8

  • Birthday May 07, 1993

Profile Information

  • Gender
    Male
  • Location
    Wanganui, New Zealand
  • Interests
    Rebuilding consumer-level computers with occasionally top-shelf parts. Racing anything with 4-8 cylinders and a turbocharger. A particular finesse for vinyl work.
  • Biography
    Nissan R-series Skylines and C-series Laurels are in my DNA, the same with all Holden Commodore-related cars (rebadged as the Toyota Lexcen), and HP hardware is in there as well.
    I'm a man of extravagant yet humble hardware, having run a dual-monitor system relentlessly over the years. Hoping to expand to a third in the next few years.
  • Occupation
    Freelance Media Creator

System

  • CPU
    Rig build in progress
  • Motherboard
    Rig build in progress
  • RAM
    Rig build in progress
  • GPU
    Rig build in progress
  • Case
    Rig build in progress
  • Storage
    Rig build in progress
  • PSU
    Rig build in progress
  • Display(s)
    Rig build in progress
  • Cooling
    Rig build in progress
  • Keyboard
    Rig build in progress
  • Mouse
    Rig build in progress
  • Sound
    Rig build in progress
  • Operating System
    Rig build in progress
  • Laptop
    Everis e2037 (Geo 240 Motherboard LN539ZS08497A0 compatible), HP 15-ba032AU

Recent Profile Visitors

807 profile views

ig8uh8m8's Achievements

  1. Also worth noting that idgaf about noise level. As for the pressure, I doubt I would see positive pressure except in low-load situations. The two-splitter solution has been actioned, bought and paid for.
  2. Overkill on a budget. Where I live, multi-SSD rigs are scarce and tend to be fawned over. With a couple of Crucial P3 or Kingston NV2 NVMe SSDs and a couple of Crucial BX500 SATA drives, the 5700X I mentioned, 64GB of RAM and a 12GB 3060 OC, and given that this PC will be hooked up to a 21" (1920x1080) TV and an Xbox controller, I don't think there will be any trouble. Hence the opinion of it being overkill. The intention is to crap on any console I come across.
  3. In my circle of friends I have a reputation for overkill. My aim with the fan setup, given the case's ability to breathe fairly easily, was to move as much air as possible.
  4. I would hope that depends on the settings involved, but considering the case is basically mesh on the top, bottom, front and rear, neutral pressure would be a non-issue. I was thinking both the front and rear fans would run off one header, and the remaining three along the bottom off another. I'm aware fan splitters are a thing; however, I'm thinking of grabbing a Corsair fan controller to set custom fan curves for different use-case scenarios with the system up and running, instead of fiddling with the curve settings in the BIOS. That way, if temps fire up, I'll have the fans slam on high. I'm in New Zealand, so during the summer, depending on where you are, things get HOT, so I felt it important to keep air moving over the components being used. I decided to switch CPUs and go for the 5800/X, depending on cashflow in April.
  5. Not enough connectors on the motherboard to drive 8 fans, and I don't want my office sounding like a f***ing tornado lol (talking shit about my i7 3770 build involving a Corsair Hydro Series H5 SF).
  6. So I'm jumping ship between Intel and AMD on my main gaming rig and am looking to slap together something packing an 8-core CPU. I was glazing over the packaging contents of the MSI MAG M360 AIO and noticed it didn't come with a controller, and given my choice of case (Jonsbo D31, mesh screen w/LCD panel), that obviously didn't come with one either.
My questions: What would be a suitable controller for this machine? Should I be running two controllers? Should I be running one taking signal for the case from the motherboard and the other driving the pump/fans on the radiator? Should I be changing any of the currently planned orientations so that I draw more air in?
I'll be running the radiator (3 fans) in push/exhaust config, bottom (3 fans) as intake, rear (1 fan) as exhaust, and front (1 fan) as intake at present, so I am of the mind that I would need two controllers: one driving the case fans, the other driving the 3 mounted to the radiator. This is my first serious watercooling system, so I'm completely lost as to how I should be handling this. Send help. I'd have solved this problem with a Razer controller in one fell swoop, but NOPE; as those old folks be sayin', "they don't make 'em like they used to".
This build's specs for nerds:
Motherboard: MSI MAG B550M PRO-VDH WIFI mATX w/MS-4136 TPM 2.0 module
CPU: yet to be decided, 8 cores minimum (taking recommendations, leaning towards the 5700X)
Cooling (CPU): MSI MAG CoreLiquid M360 AIO 360mm
Cooling (Case): Silverstone VISTA 120 120mm PWM (5x)
RAM: G.SKILL Ripjaws V Series 32GB DDR4 desktop kit, black (2x 16GB)
SSDs: 2x Crucial P3 500GB NVMe M.2 (dual boot, Ubuntu/Windows 11), Crucial BX500 1TB 2.5" (game storage)
GPU: yet to be decided, no less than 8GB VRAM (taking recommendations, leaning towards the RTX 3060)
PSU: Cooler Master MWE Gold 850W 80+ Gold
  7. I'm not running TrueNAS because I found that running a simple install of Linux Mint and editing a conf file was far easier for someone like myself looking for direct, uncomplicated instructions to follow (there's a rough sketch of what that conf edit boils down to at the end of this page). I have also spent quite a long time working with Linux and am quite familiar with how things work on the desktop flavours, so again, simplicity took the vote. As for the parts choices, I do it because I can; the capability is there to run what I am putting in, and should the machine be in running condition when I choose to upgrade, I can drop a few new drives in there for the sake of running a couple of VMs to tinker around with. Yes, I probably shouldn't be blowing what amounts to around $1000 NZD on a simple server build over 3 years, but I don't have an obscene drug or alcohol habit to feed, so I've gotta find fun somehow. What else can I say, though: I hit the power button and get the desired effect of the shares hosted within being online in under a minute. Minimal if any dips in performance; the only bottleneck now is the ethernet card.
  8. It's worth knowing I chose a setup with integrated graphics to ensure I had the ability to at least keep an eye on how things are running via gnome-disks, baobab, hardinfo (for temps) and gnome-system-monitor, as well as a network widget that brings up current network activity in a terminal window (a rough sketch of that kind of terminal readout is at the end of this page).
  9. I was considering going the external HDD route for both shares, 1TB & 4TB respectively. I am aware of the issues with QLC, but the shares are only live from Friday 8:30am until 11:59pm Sunday weekly, unless I get bored enough to want to do maintenance revolving around the OS, so a backup per share will not go amiss. Just the 32GB of RAM and 2.5GbE; the CPU is due for delivery today (Dec 5). In the spirit of idgaf, there's not much else to spend money on (believe me, I've thought long and hard about whether I could and whether I should). Going max RAM & CPU, throwing money at the SSDs & slapping in 2.5GbE is all I could come up with under an "if it can do, it will do" rule. For the desired lack of interaction with this particular computer, I would say I've done as much as I can do. I don't see a point going beyond maximum capability; chasing shorter boot times was never the goal. This specific build was a boredom buster to pass time, with the long-term goal of reliability in mind: being able to access whatever I have stored from any machine that I own without carting a crappy USB drive of some sort around. I'm keeping my eyes on the HP ProDesk 600 G3 SFF machines for the meantime, watching where their prices go and watching for hardware problems; maybe in a couple of years I'll pull the drives and do a turn-key upgrade with the added bonus of an NVMe-based backup, a newer i7 and the same amount of RAM as I'm chasing now (32GB), purely to troll certain folks who tell me that going beyond an i5 for a humble Samba server rig is pointless. Even now... the attached meme is the face I'm wearing as I come to the conclusion of this build, while folks around me scramble to free up iCloud, Google Drive or other local storage.
  10. TL;DR: accidentally bought an OptiPlex, decided to pull a Charlie Sheen and send it for a server build, found myself asking "now wut?" Over the course of 2023 I slapped together a build with the goal of consolidating two Samba shares into one machine for a few hundred bucks, and now that I'm nearing the desired specs, I'm wondering if I've done it right, whether there are areas I could improve on, etc.
This all started purely by accidentally bidding on this machine on TradeMe (NZ local, Canadian/US eBay equivalent) for $25 USD around February-March, then slowly piecing things together using parts sourced domestically (local to New Zealand) and internationally, like eBay (for the proprietary dual 2.5" adapter). I originally sourced the RAM (an 8GB kit from Facebook Marketplace, then a 16GB replacement kit from TradeMe), drive adapters (optical drive to SSD adapter from TradeMe, proprietary HDD/SSD adapter from eBay), and then a boot drive and a 4TB Samsung QVO drive from PB Tech (US Micro Center equivalent).
Over time, as I acquired parts, I couldn't help but get things put together while waiting to generate funds to complete the build. I had installed Linux Mint to test the responsiveness (boot time to desktop, for example) and was massively impressed with it being well under a minute. I then installed Samba and waited for the final drive (the 4TB SSD) to be bought and paid for so I could get the shares live and running. I just whipped up a Mint USB install, ran it, installed and basically set up Samba, and was off to the races once data was transferred from the old 4TB Seagate Barracuda I was using to temporarily hold the media I had on hand.
After that I got curious and started overthinking the RAM consumption, thinking I had a fault within the software/hardware config, but it turns out weak phone internals were the issue. The curiosity remained; I jumped on the chance to upgrade the RAM to 16GB (installed 24/11/23) and 2.5GbE (buying this next month), and now I'm wondering... do I jump on the chance to bag a 32GB kit? Are there any additional benefits to feeding this machine the most the motherboard and CPU will allow? The bigger question is, for the sake of home usage, is what I have put together a good long-term solution for myself plus my senior neighbor to use for media playback, while I also use it to store personal files on a separate 1TB SSD? Would the RAM help in this situation? So far I've seen fewer dips in performance while retrieving and sending data to this machine, so the answer may be yes, but is there a point going beyond what I have already?
I'm on the cusp of slapping an i7 2600 in there for good measure because it literally has the same TDP as the i5 2400 already inside, though I'm aware this move is redundant to the purpose of the build given the average utilization of the i5 barely goes beyond 10%; this move and my concern are extended by the NIC upgrade to 2.5GbE (mainly so that my access to it is unfettered by my neighbor). My question here is simple: is 1GbE and the i5 enough, or does anyone reckon that the switch to 2.5GbE and the i7 is better for streaming over Samba to multiple end clients? (an Ubuntu-based HP t630 thin client for my neighbor over wireless, a Windows 11 laptop for myself using a USB-C 2.5GbE adapter). If I spot a good enough 32GB kit, I'm very likely to pounce on the opportunity to bag it; i7 2600 CPUs average $50 NZD here and I bagged one for $35 from an international source, but I think for the meantime I'll sit on that and the 16GB currently inside.
The final question: did I do this right? I've been building personal Samba shares for local network access this way since 2012, and it's been a reliable method for me ever since the first conception of the idea, as well as a great way to make use of old hardware that would otherwise be sitting around my house. If I could improve on this, please let me know. The machine does nothing else but sit connected to the network, turn on automatically at 8:30am, sit idle waiting for requests, update automatically as updates roll out, and shut down at 12AM during dark hours. I occasionally run BleachBit and a bunch of other apt commands, then fstrim, to clean things up each month, but other than that, zero direct interaction because of my own discomfort around the idea of SSH access for maintenance.
Specs for nerds:
RAM: 16GB DDR3 1333MHz (soon to be 32GB?)
CPU: Intel i5 2400 3.1GHz (3.4GHz turbo) (soon to be i7 2600 3.4GHz, 3.8GHz turbo)
Boot: 250GB Samsung 870 EVO SSD
Personal storage: 1TB Samsung QVO SSD
Media storage: 4TB Samsung QVO SSD
NIC: Broadcom GbE (bought for the sake of link aggregation, turned to s*** so 2.5GbE here I come)
  11. As it turns out, it was inactive RAM otherwise held in reserve, no dirty cache. My concerns have been eased.
  12. Under System Monitor, I notice it when I'm sending data to or receiving data from it; how much is used depends on the amount of data moving about. This is also reflected in the applet I have going in the panel bar at the bottom of the screen. I will look into the cat command when I have completed the backup of my data from the Jelly Star, which I am using to test the performance, having only just upgraded the RAM as of Friday (NZ time; it's Sunday 6:07PM here now).
  13. Linux Mint for this server rig. I did let it rest for an hour with the data still held in cache.
  14. I read somewhere that the TPM requirement was a must (possibly Reddit fodder), and am otherwise clueless as to the background operations this computer does aside from having it automatically turn on, update automatically during the day and shut down. As for the RAM comment, I'm again clueless, but from what I can gather, it's read cache, and straight to RAM. This also applies to write operations, and I'm assuming it holds data in RAM so that it's reducing seek times for data requests. If I'm being honest, for years I have used Ubuntu primarily for basic web browsing and day-to-day communications, and in this case Linux Mint on account of it being resource-friendly for the sake of handling file server operations. This is the 3rd machine I have put together to handle all my personal and media storage needs and it works absolutely brilliantly, but the RAM usage makes me a bit worried when I'm hauling a** reading data and occasionally dumping video to it. What I can say is I have noticed fewer dips in read/write performance since changing machines & doubling the RAM, so it must at some point mid-transfer dump data to disk if not committed already, but why the need to keep it in the RAM if it hasn't been requested for some time?
  15. I am running a Dell OptiPlex 790 SFF, i5 2400 (soon to be i7 2600 for the lols?), 8GB RAM (now 16GB), on an 870 EVO 250GB boot drive, with 1TB & 4TB 870 QVO SSDs for storage and Linux Mint as the base OS. I'm worried about RAM failing, although there are no signs of trouble; I can see the cache build up to about 13.5GB, around 80% of available RAM, during a 14GB transfer. Is this a safe setup? Are there alternatives I should consider? I'm not using ZFS on account of the new requirements being that I need TPM 2.0. Just wanna see if there are any other little tweaks I should look into.
All this computer does is store and serve data. Very rarely do I send data to it, but when I do, it's in the multi-gigabyte range in any instance. I did previously hit a drop in connection while transferring a large amount of data to the server, and I put this down to the quality of the hardware in the client device (Unihertz Jelly Star) being too weak to sustain the transmission of data over an extended period of time; the rest of the client-end devices I use handle transfers without a sweat (previously a Galaxy S10e and MANY Windows and Linux based machines). On the Jelly Star, it was moving data at around 2MB/second, hence the suspicion of weak hardware combined with a slower connection speed/protocol over wifi. This was confirmed with a bulk transfer test via a Galaxy A03s. I doubled the RAM to 16GB (soon to be 32GB) to reduce the chances of any performance issues. Read/write dips were noticeable in previous iterations of the setup on Shuttle XS35 (v1 & v3) machines, but no drops in connection to my knowledge, as highlighted above; I suspect this was down to the limitations of the Atom CPU and RAM as a whole, hence the switch and consolidation into a Dell rig.
My real concern here is that while it appears the computer in the server role is fine holding the cached data in RAM, I want some kind of reassurance that it is safe in doing so. If not, I would absolutely love to find a solution that clears the cache a set time after transfers complete, rather than having a cron job do it automatically, say, every 30 minutes; instead, having the computer say to itself: "okay, no clients have asked for this data as of 45 minutes ago, time to clear it out of the RAM" (there's a rough sketch of that idle-then-drop idea at the end of this page). The concern is becoming greater with the eventual NIC upgrade to 2.5GbE; however, I don't see it becoming an issue. I'm just wanting to slap together a reliable and very responsive setup that will absolutely slap the s*** out of USB-based network shares, and so far this has already been achieved on a 1GbE NIC, but I'm clearly not done there, hence the desire for a faster network connection between client and server. I apologize for the poor spelling and punctuation; I am part Australian (25%) / New Zealand (remainder), so language can be crude at the best of times.
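A couple of the posts above (7 and 10) boil the server setup down to "install Linux Mint, install Samba, edit a conf file" without showing what that edit looks like. Below is a minimal sketch of the idea, not the author's actual config: the [media] share name, the /srv/media path and the guest-access settings are illustrative assumptions, and on the real box this would normally just be a hand edit of /etc/samba/smb.conf followed by a service restart.

```python
#!/usr/bin/env python3
"""Sketch: append a hypothetical read-only media share to smb.conf.

The [media] stanza, /srv/media path and guest access are illustrative
assumptions, not the author's real settings. Needs root to run.
"""
import subprocess
from pathlib import Path

SMB_CONF = Path("/etc/samba/smb.conf")

SHARE_STANZA = """
[media]
   path = /srv/media
   browseable = yes
   read only = yes
   guest ok = yes
"""

def add_share() -> None:
    conf = SMB_CONF.read_text()
    if "[media]" in conf:
        print("share already defined, nothing to do")
        return
    SMB_CONF.write_text(conf + SHARE_STANZA)
    subprocess.run(["testparm", "-s"], check=True)                 # sanity-check the config
    subprocess.run(["systemctl", "restart", "smbd"], check=True)   # pick up the change

if __name__ == "__main__":
    add_share()
```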
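Post 8 mentions a terminal widget that shows current network activity but doesn't say what it actually runs. As a stand-in, here is a minimal readout of the same idea using the third-party psutil library (`pip install psutil`); the one-second sample interval and MB/s units are arbitrary choices, not anything from the original setup.

```python
#!/usr/bin/env python3
"""Sketch: print current network throughput once a second.

Uses psutil's machine-wide NIC counters; an illustrative stand-in for
whatever widget the post actually refers to.
"""
import time
import psutil

def main() -> None:
    prev = psutil.net_io_counters()
    while True:
        time.sleep(1)
        now = psutil.net_io_counters()
        rx = (now.bytes_recv - prev.bytes_recv) / 1e6   # MB received this second
        tx = (now.bytes_sent - prev.bytes_sent) / 1e6   # MB sent this second
        print(f"rx {rx:6.2f} MB/s   tx {tx:6.2f} MB/s")
        prev = now

if __name__ == "__main__":
    main()
```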
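Posts 14 and 15 ask for a way to have the machine drop cached data once nothing has asked for it for a while, rather than running a cron job on a fixed schedule. For what it's worth, the Linux page cache is reclaimed automatically when memory is needed, so this is optional housekeeping; but if the explicit behaviour is wanted, a rough sketch of the idle-then-drop idea looks like the following. The 45-minute window, the one-minute polling interval and the choice of "3" (page cache plus dentries/inodes) are all assumptions, and the script must run as root.

```python
#!/usr/bin/env python3
"""Sketch: drop the page cache after a period with no disk activity.

Polls /proc/diskstats once a minute; when the sectors-read/written
counters have not moved for IDLE_MINUTES, it syncs and writes "3" to
/proc/sys/vm/drop_caches. Thresholds are illustrative. Run as root.
"""
import os
import time

IDLE_MINUTES = 45      # "no clients have asked for this data as of 45 minutes ago"
CHECK_SECONDS = 60

def disk_activity() -> int:
    """Sum of sectors read/written across all block devices."""
    total = 0
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            # field 6 = sectors read, field 10 = sectors written (1-indexed)
            total += int(fields[5]) + int(fields[9])
    return total

def drop_caches() -> None:
    os.sync()
    with open("/proc/sys/vm/drop_caches", "w") as f:
        f.write("3\n")

def main() -> None:
    last = disk_activity()
    idle = 0
    while True:
        time.sleep(CHECK_SECONDS)
        current = disk_activity()
        if current == last:
            idle += CHECK_SECONDS
        else:
            idle, last = 0, current
        if idle >= IDLE_MINUTES * 60:
            drop_caches()
            idle = 0

if __name__ == "__main__":
    main()
```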