
PBH

Member
  • Posts

    20
  • Joined

  • Last visited

Awards

This user doesn't have any awards

About PBH

  • Birthday Feb 20, 1993

Profile Information

  • Gender
    Male
  • Location
    Sri Lanka
  • Occupation
    Research student

System

  • CPU
    5900X
  • Motherboard
    Gigabyte X570 Aorus Pro Wifi
  • RAM
(2x16GB) Crucial Ballistix 3600MHz C16
  • GPU
    RTX 3080
  • Case
    Carbide 275R
  • Storage
Corsair 250GB NVMe
  • PSU
    Silverstone ST85F-GS
  • Display(s)
    Dell U2419H
  • Cooling
    BeQuiet Dark Rock Pro 4
  • Keyboard
    Logitech G613
  • Mouse
    Logitech MX Anywhere 2S
  • Sound
    Xiaomi Bluetooth Speaker
  • Operating System
CentOS Stream 8, Windows 11
  • Laptop
    MSI Modern 15-A11M
  • Phone
    Oneplus 7T
  1. Thanks for the great insight. I checked with some students and staff members. Most people using CFD, MATLAB, molecular dynamics, or other materials modelling tools simply start their simulations and leave the workstations, since there is very little need for GUI interaction in each case. On occasion, there is also a need for a significant amount of memory, well over the 128GB limit of consumer platforms. Wouldn't this, coupled with the possibility of scaling the system up by adding more nodes in the future, make a central server more future-proof? I am still having trouble wrapping my head around the argument that individual workstations would serve better in the long run. Because of this, I checked with some of my contacts, and I might be able to get a VMware vSphere license without too much trouble. So if this reduces the software costs mentioned earlier, it might actually become feasible to build an expandable system that would hold up for much longer while also supporting additional features such as ECC memory...
  2. So I checked the list of software licenses available to the university, and it seems a Windows Hyper-V Server 2019 license is available, as well as Windows Server licenses. In this case, what additional software would I require? Would VMware vSphere perform significantly better than Hyper-V? Also, am I even on the right track? Thanks
  3. Yes, that is true. But given what was mentioned earlier, the cost of VDI licensing seems to make a server unrealistic for small requirements. However, I shall check with some local vendors and determine the exact difference. Your idea of BYOD also seems really good; the university already has multiple computer labs to cover anyone who forgets to bring a device. Thanks
  4. So does that mean a server is not cost-effective even in the long run? My goal is that, sometime in the future, it would be possible to use the combined capability of all the cores, keeping the system useful for as long as possible. Additional nodes could also be added later with new budget allocations, right? The other issue is that I do not know which hardware components need to be matched with one another, so I cannot yet ask a vendor for a quote.
  5. Assuming the currently available computer labs can be designated for 3D work, I have personally used everything else, such as computational fluid dynamics (CFD) and MATLAB, remotely. There are also several students venturing into molecular dynamics and other material modelling methods. These students are working off a single workstation and are accessing it remotely 100% of the time (see the remote batch-run sketch after this list). So I'm sure that most of the required work can be handled remotely, and the rest can be divided among the available resources. Both CPU and GPU computations are currently in use.
  6. Thanks for the advice. The thing is that this is a small university and the infrastructure is still underdeveloped. I have already contacted IT support, and they said they work mostly on networking, web page maintenance, and things like that. Since this is something completely new, most people are reluctant and unwilling to take the risk. However, I believe the risk is worth it, since there is the possibility of expanding something like this, right? I was thinking it would be possible to get several nodes interconnected, which would lead the way to a future expansion. Regarding the software: since a server can be expanded later, and given that the full capacity of 50 PCs isn't even required in the near future, it would be acceptable to absorb the software costs by reducing the hardware performance for the time being. That would allow extra hardware nodes to be added in the future without spending much extra on software, right? The systems are mostly used for computer-aided modelling such as AutoCAD, SolidWorks, and CATIA, along with a significant amount of computational fluid dynamics and MATLAB. We currently have Microsoft volume licensing for Windows Server and similar software. What other kinds of software would be needed? I am not very familiar with this area.
  7. My university is trying to allocate a significant portion of the budget to buy around 50 workstation computers (probably Ryzen 7 or Core i7 with 16GB of RAM each, running Windows) for students. As someone who uses such individual PCs for research work because the university has no server, I am trying to justify the purchase of a single server instead of these individual systems. If the computers are bought, the chances are they would be used for only around 20-30% (at best) of their usable lifetime, which is what is happening with the existing computer labs. People requiring large core counts or large amounts of memory would also have no way to get their work done without paying extra for AWS (which is what is happening at the moment). Would it be possible to suggest a reasonable specification for a server (one that can effectively replace 50 individual systems) so that I can contact a local supplier and get an idea of the cost difference? Would it be possible to use something such as thin clients (https://en.wikipedia.org/wiki/Thin_client) or another alternative (such as the existing labs) to let the students log into the server and do their work? I am experienced in building and maintaining my own personal computers, but not with server hardware, so any guidance would be extremely helpful. Thanks in advance
  8. A long time back (when Razer first announced their eGPU enclosure) I remember reading that eGPUs work only with notebooks that have discrete GPUs. I haven't kept up with these since then, which is why I was concerned. Also, I failed to mention that I use Linux for most of my work. Would that be a problem?
  9. I was looking for a Thunderbolt dock (to expand the USB port selection and to get a display output) for my notebook when I came across eGPU enclosures. Enclosures such as the Cooler Master MasterCase EG200 seem to tick all the boxes I need from a Thunderbolt dock while also supporting an external GPU and storage, although at a higher premium. What I need to know is whether this would work with my notebook, since it has no dedicated GPU. Notebook: MSI Modern 15 A11M (Specification Modern 15 - A11S | MSI Global), CPU: Intel Core i5-1135G7, GPU: Intel Iris Xe, Thunderbolt 4 capable. Note: the additional cost of a GPU is of no concern, as I already own a spare 2070 Super from my desktop, and I am not hoping to utilize the GPU's full potential since I know the performance of these enclosures is lower than in a desktop.
  10. Did you go through with the upgrade? Did it work? I have a Toshiba Satellite L50-A with a 4200M and thought of giving this a try (just for fun). I've been using this laptop since January 2014 and so far haven't run into any trouble. I upgraded the RAM from 4GB to 6GB and replaced the slow HDD with an SSD, and I'm getting better performance than newer i3 models. Maybe a CPU upgrade can pull this through another year or two... Why not reduce e-waste and save some money at the same time?
  11. Thank you. I think I now have all the information I needed. Very much appreciate the helpful comments.
  12. Thanks for all the helpful tips, guys. I found a used Tesla K40 on Amazon for a quarter of the price of an RTX 4000. I'm also looking into the services offered by AWS and BOINC. Just a few small concerns, though. If I buy and install a K40 in my system, would the mounting be any different from a regular desktop graphics card (say a Quadro P620 or an RTX 2080)? Also, what would be considered safe temperatures for a K40 to run under? During the night and on weekends, all air conditioning in the university is shut down and the ambient temperature is usually around 33-35°C, while I plan to run the system 24x7 (see the temperature-logging sketch after this list). I am hoping either to attach an exhaust fan to the rear of the card to pull air through the heatsink (but I have no idea what flow rate is required) or to remove the plastic shroud and attach two small fans blowing directly onto the heatsink, as done on gaming GPUs. Which would be the more reliable and recommendable option? I have never physically seen a server-grade system, so I am not familiar with the mounting mechanisms or thermal solutions beyond knowing that such cards are typically passively cooled. Thank you
  13. Thank you, I shall look into that. Really appreciate it, guys; I had not considered that. Thanks. There is no one that I know of as of yet, except for AWS, as mentioned by Stewie in the previous reply.
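
Several posts above (post 5 in particular) describe students starting long CFD or MATLAB runs on a shared workstation and then only ever accessing it remotely. Below is a minimal sketch of how such a headless batch run could be kicked off over SSH, assuming a Python client with the paramiko library installed and MATLAB R2019a or later on the remote machine; the hostname, username, key path, and script name are purely hypothetical placeholders, and this illustrates the general remote workflow rather than the university's actual setup.

```python
# Minimal sketch: launch a headless MATLAB run on a shared workstation over SSH.
# Hostname, username, key path, and script name are hypothetical placeholders.
import paramiko

HOST = "workstation.example.edu"   # hypothetical shared workstation
USER = "student"                   # hypothetical account
KEY = "/home/student/.ssh/id_rsa"  # hypothetical private key

# 'run_cfd' stands in for any MATLAB script already on the workstation.
# nohup + '&' detaches the job so it keeps running after the SSH session closes.
COMMAND = 'nohup matlab -batch "run_cfd" > run_cfd.log 2>&1 & echo $!'

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, key_filename=KEY)

_, stdout, _ = client.exec_command(COMMAND)
pid = stdout.read().decode().strip()
print(f"Job started on {HOST} with PID {pid}; output is being written to run_cfd.log")

client.close()
```

The same pattern works for any command-line solver; only the COMMAND string changes.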
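Post 12 asks what counts as a safe K40 temperature when running 24x7 in a 33-35°C room. One low-effort way to answer that empirically is to log the GPU temperature over a full day or weekend and look for sustained high readings. Here is a minimal sketch that polls the standard nvidia-smi query interface; the polling interval and log file name are arbitrary choices, and it assumes the NVIDIA driver (and therefore nvidia-smi) is already installed on the machine.

```python
# Minimal sketch: poll nvidia-smi once a minute and append the GPU temperature to a CSV log.
# Interval and log file name are arbitrary; adjust to taste.
import subprocess
import time
from datetime import datetime

LOG_FILE = "gpu_temp_log.csv"   # hypothetical output file
INTERVAL_SECONDS = 60           # one sample per minute

def read_gpu_temperature() -> str:
    """Return the current GPU temperature in Celsius as reported by nvidia-smi."""
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=temperature.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    with open(LOG_FILE, "a") as log:
        while True:
            timestamp = datetime.now().isoformat(timespec="seconds")
            log.write(f"{timestamp},{read_gpu_temperature()}\n")
            log.flush()
            time.sleep(INTERVAL_SECONDS)
```

Plotting the resulting CSV after a hot weekend should show whether the passive heatsink plus whichever fan arrangement is chosen keeps the card within its rated limits.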