NPDPdev

Everything posted by NPDPdev

  1. I am building my own local server hosting company, and I need advice on which servers to stick with for the long haul. I don't have much money, but I am starting a new job soon and don't have many expenses, so I will be able to put most of my income into this. Still, my budget is limited, so I am mostly looking into older-generation servers. I specifically want to go with Dell PowerEdge servers because of the features of their iDRAC management system. I have decided on PowerEdge R430s, as they are pretty affordable for their power: I can source them for $200 for a barebones system, or $400-500 for a system with one or two low-end CPUs and a small amount of RAM. I plan on mostly maxing out these servers (40 cores with about 192-256GB of RAM), so the parts they ship with will likely be sold immediately or used only until I can afford to upgrade them. Again, I am starting this from scratch, so I don't need insane computing power until a specific customer requires it and is willing to pay the premium. I have a few questions:

     - Is going with servers this old a good idea? It allows me to begin at a time that I otherwise couldn't, and I believe they have a comparable amount of compute power and capability to a slightly newer server.

     - At how many servers should I buy a rack? A full-size rack costs about as much as an upgraded R430, so for now the servers will sit on the floor in a well-ventilated area. I was thinking before I bought my 4th one, or by the time I needed to buy a large switch.

     - I am planning on ordering a rackmount switch, a PDU, an APC UPS, and maybe a cheap KVM switch (Dell branded, with adapters that carry USB and VGA over Ethernet cable). Do I need any other equipment for now? (I have a modem and router, of course.)

     - At what point should I buy a dedicated backup server, and would it be okay if it was cheaper and a bit older? Until then, I think I will run two separate arrays: one with high-capacity HDDs, the other with mid-capacity SSDs. The servers have ten 2.5" drive bays, which would allow me to run something like eight 2TB SSDs and two 8TB HDDs.

     - I am planning on upgrading each server to 40 cores with 192-256GB of RAM. Should I instead buy more servers with more modest specs (like two with 20 cores and 128GB of RAM, as opposed to one with 40 cores and 256GB)? This would also save costs, as I can buy about three 10-core CPUs for every 20-core, and lower-capacity DIMMs would save a little money as well.

     I know this is a rambly post, but finally, please don't say something like "other people are better at this" or "this will never take off". I want to do this, it is my money, and hearing that would likely make me less willing to admit failure if it happens anyway.
  2. Thank you for the insight. I am fully planning on running a minimum of three near-identical PowerEdge R430s. I will probably use one of them as a personal server without anything important on it, so that if needed I can migrate clients to it as a failsafe. I suppose I will keep the DIMMs; however, I am planning on using 32GB ones, as that is the only way to achieve max capacity on the server, and I would prefer to keep it uniform even in the event of a failure. I will likely keep the old sticks around for troubleshooting, though, as buying spare 32GB 2133MHz DDR4 DIMMs is a bit out of my price range lol.
  3. How old is your SSD, and how big is it? It sounds far-fetched, but maybe your SSD has reached the end of its lifespan and is showing signs like this as a result of damaged cells. This would be even more likely if it is a very low-capacity SSD and you install or delete things extremely frequently, though I have had my SSDs for as long as five years and they are all fine. To test this, you could run a command or tool that checks the integrity of system files and tells you if they are corrupted. If it is the same file every time, even on different builds of Win10, maybe it is not your SSD but a poorly written program. You could try using your computer without those programs, or run them in a VM, and see where that gets you.
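     For the integrity check, Windows 10 ships with two built-in tools that do roughly this (run from an elevated Command Prompt; exact repair behavior varies by build):

         sfc /scannow
         DISM /Online /Cleanup-Image /ScanHealth

     The first scans protected system files and tries to repair any that are corrupted; the second checks the Windows component store for corruption. Neither tests the SSD itself, but corruption that keeps coming back in the same file after repairs would point toward hardware rather than software.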
  4. I agree; however, shipping alone will take 1-2 weeks (next Friday or later), so I would really like to have the new CPU ordered and shipped by then to avoid having to wait even longer.
  5. I have recently purchased a PowerEdge R430 and have a few questions. The configuration I ordered contains one E5-2630 v4, but I would like to upgrade it upon arrival. Should I just get a second E5-2630 v4 for $120 (bringing the server to 20 cores spread across two CPUs at 2.2GHz, 3.1GHz boost), or should I sell the one that is in there and buy an E5-2698 v4 for $400-500, with 20 cores clocked at 2.2GHz (3.6GHz boost)? My instinct tells me that the single 20-core is best, especially since I will eventually be able to upgrade to two of them (and that is what I eventually want regardless of what I do now). I also believe it is better to have all of the cores on one CPU, since less performance is lost to the two CPUs communicating with each other and the chipset. I am planning to use this server to offer virtualized computing to customers, so I need the core count to be higher than ten.

     Also, if I am running something like ESXi or Linux with Docker, how many system resources should I allocate to the host OS/hypervisor? I was thinking about reserving two cores and 8GB of RAM, or does ESXi do this on its own and not really need dedicated processor cores to manage these instances?

     Edit: The server also contains 32GB of RAM, I believe DDR4 @ 2133MHz. I plan on offering package deals of 4GB of RAM per core, so at 20 cores I would need to upgrade to 80GB (probably going to get 96GB because it is an even multiple of the 32GB that is already in the system). If I run only one CPU, that gives me just six RAM slots, meaning I would have to purchase 16GB sticks and possibly sell the RAM that is already in the system, since these cheap eBay listers usually just stuff these servers full of small-capacity sticks. I suppose that regardless, I should buy higher-capacity DIMMs if I am planning on eventually upgrading the server to 40 cores with 192GB of RAM or more.
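     For context, on the Docker side I know the usual pattern is the inverse of reserving cores for the host: you cap each container, and whatever is left stays available to the host OS. Something like this (the container name and image here are just placeholders):

         # Cap one customer's container at 2 CPUs and 8GB of RAM
         docker run -d --name customer1 --cpus=2 --memory=8g ubuntu:18.04

     I'm mainly unsure whether ESXi needs a similar manual reservation, or whether it carves out its own overhead since it runs on bare metal.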
  6. Using the x:Name attribute lets me address a specific XAML tag from within the C# file; I can then use that name as if the tag were an object and interact with it however I want. I did not end up using this solution, though. Instead, I remembered that Grid row/column sizes support pixel counts OR proportional (star) factors. I simply entered the factors that I wanted, and everything now displays as it should. Here is the changed code:

         <Grid.RowDefinitions>
             <RowDefinition/>
             <RowDefinition x:Name="ContentArea" Height="12*"/>
         </Grid.RowDefinitions>
         <Grid.ColumnDefinitions>
             <ColumnDefinition x:Name="Sidebar" Width="0.125*"/>
             <ColumnDefinition/>
         </Grid.ColumnDefinitions>

     Here is the correct screen displaying:
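     For reference, here is roughly what the x:Name route would have looked like: a code-behind sketch (assuming the ContentArea name from the XAML above; GridLength and GridUnitType live in Windows.UI.Xaml):

         public MainPage()
         {
             this.InitializeComponent();
             // Equivalent to Height="12*" in XAML: a star (proportional) height set from C#
             ContentArea.Height = new GridLength(12, GridUnitType.Star);
         }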
  7. I am working on the beginnings of my first UWP app, and am a little confused. One of the key benefits of UWP apps is supposed to be their automatic scalability, and developers are encouraged to set the sizes of UI elements in multiples of four (for easier scaling across platforms). I am using a Raspberry Pi 3 (running Win10 IoT Core) and a screen with an 800x480 resolution. In VS2019, I get a XAML preview for my UI that shows me what it would look like on certain device presets; the closest preview preset to my screen is 1024x768. Here is my code to create sections in the UI and set their colors (so I can see the "zones" I have created):

         <Grid>
             <Grid.RowDefinitions>
                 <RowDefinition/>
                 <RowDefinition Height="704"/>
             </Grid.RowDefinitions>
             <Grid.ColumnDefinitions>
                 <ColumnDefinition Width="128"/>
                 <ColumnDefinition/>
             </Grid.ColumnDefinitions>
             <Rectangle Fill="Coral" Width="128"/>
             <Rectangle Fill="Gray" Grid.Column="1" Grid.Row="0"/>
             <Rectangle Fill="Red" Grid.Row="1"/>
             <Grid Grid.Column="2" Grid.Row="1">
             </Grid>
         </Grid>

     This is what I would like the UI to look like across all platforms. Vs. what actually displays on my prototype screen. The final system will have a different display than the one I am using to test, so the row and column values must scale with the display parameters. Here is the specific part that establishes the row/column height/width:

         <Grid.RowDefinitions>
             <RowDefinition/>
             <RowDefinition Height="704"/>
         </Grid.RowDefinitions>
         <Grid.ColumnDefinitions>
             <ColumnDefinition Width="128"/>
             <ColumnDefinition/>
         </Grid.ColumnDefinitions>

     I have found that from C# I could call SystemParameters.FullPrimaryScreenHeight to get the height of the display/window, meaning I could use the formula FullPrimaryScreenHeight - (FullPrimaryScreenHeight / 12) to find the properly scaled height for the content row. Unfortunately, I cannot do arithmetic operations in XAML alone. I know that there is some way to use XAML in conjunction with C#, but this is my first time developing on Microsoft's UWP platform. I'm sure that the combination of XAML and C# is a fairly common practice, like HTML and CSS or HTML and JS, but I am just unsure how to do it. Sorry for the word-vomit, but I couldn't think of a simpler way to explain this.
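     To make the question concrete, this is the kind of code-behind I am imagining (a sketch only; it assumes the 704px row is given x:Name="ContentArea", and it uses UWP's Window.Current.Bounds since SystemParameters is WPF-only):

         public MainPage()
         {
             this.InitializeComponent();
             // Current window size in effective pixels
             var bounds = Window.Current.Bounds;
             // Give the content row everything except the top 1/12 of the screen
             // (768 - 768/12 = 704, matching the hard-coded value above)
             ContentArea.Height = new GridLength(bounds.Height - bounds.Height / 12);
         }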
  8. With quarantine and all, I'm having a LOT of extra time to spend on my PC at home. Long story short, I can't decide if I want to save up a little to get the Valve Index or a nicer monitor (I have a Rift CV1). I currently have a dual-monitor setup, but both are 1080p 60Hz monitors. That would be fine for most, but I have a 1080 Ti (I know, don't yell at me for that horrible combo), and there's pretty much no reason for me not to have a monitor with a higher resolution or refresh rate. I'm probably going to go for a 1440p 144Hz monitor, or maybe a 144Hz one with G-SYNC. My price point for that is around $300-$500, but that's enough to knock me out of getting an Index for quite some time. Tl;dr: I have a 1080 Ti with a 1080p 60Hz monitor, and a Rift CV1. Should I get a better monitor, or a Valve Index?
  9. May I ask what computer you used to run the server? I've been thinking about purchasing something like an old PowerEdge 1950 and putting it to use as a full-time MC server.
  10. Yeah, I may actually have to up that if more than six-ish people play at once.
  11. I've been looking to add some more players to my small FTB Revelation server. Until now, it has mostly been a group of friends, but we're looking to build more of a player base and get a server economy of goods/services going. Since the server has six gigabytes of RAM and Revelation is one of FTB's largest packs, there is a whitelist to avoid lag from many random users being on at once. (The server is based in Virginia, USA.) If you're interested in playing on the server, just hit me up on Discord and I will gladly add you to the server's Discord and whitelist your username. Discord: AWestVirJahnian#3247 Edit: This goes without saying, but mods are a Java Edition exclusive, so Bedrock Edition players will not be able to join this server.
  12. I've been rocking a 1070 Founders Edition for just over a year now, and it's been treating me very well. I want to get into the 1440p ~90-120Hz gaming zone, and am torn between a used 1080 Ti and a used 2080 non-Super. The prices of these cards are very similar, but the 2080 is slightly more expensive, averaging around $550-600, whereas the 1080 Ti is around $500-550. I plan on upgrading my current VR setup to include a Valve Index, so I am not looking to wait for a future AMD release, since Nvidia cards are supposedly better for VR (although I haven't been able to find much testing to back that up). I also know that at some point ray tracing will be a desirable feature, but it is not my concern right now, so it would not play into my decision at all. I may also decide to run SLI with the chosen card at some point, but again, I'm not too concerned about that right now (especially because VR has abysmal support for SLI). Which card would you guys recommend?
  13. I have solved the problem. The motherboard's supported Thunderbolt 3 AIC has two Mini DisplayPort inputs. These inputs inject video data into the Thunderbolt 3 signal, allowing for a connection to TB3 monitors, or to TB2 monitors with a converter.
  14. I can hook it up using a Thunderbolt 2 port, but it would just be using the CPU's iGPU. I'm wondering if I can inject the discrete GPU's display output into a Thunderbolt 2 connection.
  15. I am planning on building an extremely custom Hackintosh inside of an old Mac Pro case. I'll spare you all the details, but basically I'm planning on using the highest-end Apple Thunderbolt 2 display. This display is non-negotiable: I would ditch the Vega 56 that I plan to put in the computer before I would ditch the Cinema Display. I know I could use Mini DisplayPort, but the display has certain IO ports in the back (like an iMac), and I have to use a Thunderbolt connection to utilize them. Obviously, even if I adapt the Vega 56 output to Mini DP, I would only get use of the screen and not the ports.

      Is there any way that I could force the GPU output through a standard Thunderbolt connection via software? I know this is available on Windows 10, but I don't think there is any similar feature on macOS. Is there any hardware peripheral (like a reverse splitter) that I could use to force the GPU output through the Thunderbolt connection? I'm very worried that there is no other way to do this. I don't want to have to modify the display in any way (I already have to splice enough wires to keep the original IO shield), but apparently the logic board within the display converts the signal to Mini DP anyway, so I may be able to splice a regular Mini DP cable into the input from the logic board and drill a second hole to run both cables. Any other suggestions are appreciated.
  16. Yeah, you can get relatively good performance in Fortnite. Apex is going to be a bit slow, and you'll probably have to turn the settings down very low (I'm inexperienced with how this game performs because I have not played it yet).
  17. Although upgradability on the CPU side is pretty low, that equipment will probably be fine. I say probably because it has an R9 270X. If I were you, I'd separately purchase something like a GTX 1050 Ti, or maybe even a 6GB model of the GTX 1060 if you have the budget. All in all, though, that's a nice price.
  18. Can you list the full specs of the computer, or link me to its product page? Usually we on this (& other tech forums) discourage buying prebuilt systems, but I think it's acceptable for you, since you may not be too skilled with assembly. (I'm pretty sure you meant you're buying a custom PC from someone who makes them.)
  19. As well as what @fasauceome and @LukeTheCoder05 said, you should probably know that RAM has diminishing returns on gaming performance. Doing creative tasks or running TONS of programs at once (that's why servers have a ton of RAM) would be the only reasons you would need anything over about 16GB. 16GB seems to be the standard for most gamers.
  20. When I built my computer, I decided to go with a modest Ryzen 5 2600 and 16GB of 3000MHz Trident Z RGB RAM. I didn't worry about getting 3200 or 3400MHz RAM because I've always known RAM speed to leave performance relatively unaffected. I knew that Ryzen plays well with higher-clocked memory and that it makes a bigger difference there, but I only recently learned the extent of that. I am planning on getting an X570 mobo and then (a month or two later) buying a shiny new Ryzen 9 (assuming they are actually revealed to consumers). I obviously cannot purchase ALL of this at once, and am thinking of buying new, faster RAM right now (a decent stopping point that I can afford). My question is: would it really be beneficial enough to purchase 3200 or (most likely) 3400MHz RAM, or should I just save towards the other parts?
  21. Actually, unless you are going with the STRIX mobo for appearance purposes, you could likely get the same features for cheaper. You may also want to invest in a 550 or 600 watt power supply (don't worry too much about the efficiency rating; as long as it's 80 Plus Bronze or better, it's fine). A non-modular power supply would be *slightly* cheaper, but I would stick with semi-modular because it is worth the little bit extra. As for the motherboard, I personally use a $99 Gigabyte B450 Aorus Elite, although it does not have built-in WiFi. It's not a huge difference, and personally, I think your Strix mobo looks fantastic and is worth every penny.
  22. This seems to be an alright build. All the cables you'll need should be included with the items you purchase.
  23. ...so you're trying to use it like a splitter, with one 6-pin for motherboard power and the other for a GPU? I can't say that sounds like a good idea, but as long as the pin voltages are the same, it could work. I highly suggest you not do this, however, because if the ports require different voltages on different pins, it could fry the mobo or the video card.
  24. I would pick the 2060; however, a decently priced used 1070 is not a bad deal. In most games (or at least the few that support ray tracing :->), you will likely leave RTX turned off if you want every other setting to remain on ultra.
  25. Welp, I guess my be quiet! Pure Base 600 is out of the running (forgive the bad photography).