
PBH

Everything posted by PBH

  1. Thanks for the great insight. I checked with some students and staff members. Most people using CFD, MATLAB, molecular dynamics or other matter-modelling tools simply start their simulations and leave the workstations, since there is very little requirement for GUI interaction in each case. Also, on limited occasions there are requirements to utilize a significant amount of memory, reaching well over the 128GB limit on consumer platforms. Wouldn't this, coupled with the possibility of scaling up the system by adding more nodes in the future, make a central server more future-proof? I am still having trouble wrapping my head around the argument that individual workstations would serve better in the long run. Because of this, I checked with some of my contacts, and I might be able to get a VMware vSphere license without too much trouble. So hopefully, if this reduces the software costs mentioned earlier, it might actually become feasible to build an expandable system which would hold on for much longer while also supporting additional features such as ECC memory...
  2. So I checked the list of software licenses available for the university, and it seems that a Windows Hyper-V Server 2019 license is available, as well as Windows Server licenses. In this case, what additional software would I require? Would VMware vSphere perform significantly better than Windows Hyper-V? Also, am I even on the right track? Thanks
  3. Yes, that is true. But given what was mentioned earlier, the cost of VDI seems to make a server unrealistic for small requirements. However, I shall check with some local vendors and determine the exact difference. Also, your idea of BYOD seems really good. The university already has multiple computer labs to supplement anyone who forgets to bring a device as well. Thanks
  4. So does that mean a server is not cost-effective even in the long run? My target is that, sometime in the future, there would be the possibility of using the combined capability of all the cores, so as to keep the system useful for as long as possible. Also, additional nodes can be added in the future with new budget allocations, right? The other thing is, I do not know what type of hardware needs to be matched together, so I can't tell whether an agent would be able to provide a quote.
  5. Assuming that the currently available computer labs can be designated for 3D work, I have personally used everything else, such as Computational Fluid Dynamics (CFD) and MATLAB, remotely. There are also several students venturing into molecular dynamics and other material-modelling methods. These students are working off of one workstation computer and are accessing it remotely 100% of the time. So I'm sure that most of the required work can be handled remotely, and the rest can be divided among the available resources. Both CPU and GPU computations are currently being used.
  6. Thanks for the advice. The thing is that this is a small university and the infrastructure is still underdeveloped. I have already contacted IT support, and they said that they work mostly on networking, web page maintenance and things like that. Also, since this is something completely new, most people are reluctant and unwilling to take a risk. However, I believe the risk is worth it, since there is the possibility of expanding something like this, right? I was in any case thinking that it would be possible to get several nodes interconnected, which would lead the way for a future expansion. Regarding the software: since the possibility exists to expand a server in the future, and given that the full capacity of 50 PCs isn't even required in the university for the near future, it would be acceptable to absorb these costs by reducing the hardware performance for the time being. That would allow for the addition of extra hardware nodes in the future without spending that much extra on software, right? The systems are mostly used for computer-aided modelling, such as AutoCAD, SolidWorks and CATIA. A significant amount of computational fluid dynamics and MATLAB work would also be done. We currently have Microsoft volume licensing for Windows Server and similar software. What other kind of software would be needed? I am not very fluent in this area.
  7. My university is trying to allocate a significant portion of the budget to buying around 50 workstation computers (probably Ryzen 7 or i7 with 16GB RAM in each, running Windows) for students. Being someone who uses such individual PCs for research work due to the unavailability of a server at the university, I am trying to justify the purchase of a single server instead of such individual systems. If the computers are bought, the chances are that they would be used for only around 20-30% (at best) of their usable lifetime (which is what is happening with the existing computer labs). Also, people requiring large core counts or memory would not have any chance of getting their work done without paying extra for AWS (which is what is happening at the moment). Would it be possible to suggest a reasonable specification for a server (which can effectively replace 50 individual systems) so that I can contact a local supplier and get an idea of the cost difference? Would it be possible to use something such as thin clients (https://en.wikipedia.org/wiki/Thin_client) or any other alternative (such as the existing labs) to allow the students to log into the server and do their work? I am experienced in building and maintaining my own personal computers, but not in server hardware. So if I can get some guidance, that would be extremely helpful. Thanks in advance
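As a rough sanity check before asking a vendor, the numbers in the post above (50 seats, 8-core R7/i7-class CPUs, 16GB RAM each, roughly 20-30% of seats busy at once) can be turned into a back-of-the-envelope server size. The utilization and headroom figures here are assumptions for illustration, not a vendor spec:

```python
# Back-of-the-envelope sizing for one server time-shared by N workstation
# seats. All inputs are assumptions taken from the thread (50 PCs, 8-core
# CPUs, 16GB RAM each, ~20-30% of seats active at any moment).

def server_sizing(n_seats=50, cores_per_seat=8, ram_per_seat_gb=16,
                  utilization=0.30, headroom=1.5):
    """Estimate physical cores and RAM needed when seats are time-shared.

    utilization: fraction of seats busy simultaneously (thread: 20-30%)
    headroom:    safety factor for usage peaks
    """
    concurrent = n_seats * utilization          # seats active at once
    cores = concurrent * cores_per_seat * headroom
    ram_gb = concurrent * ram_per_seat_gb * headroom
    return round(cores), round(ram_gb)

cores, ram = server_sizing()
print(cores, ram)  # prints: 180 360
```

At 30% utilization with 1.5x headroom, that is roughly 180 cores and 360 GB of RAM, i.e. a few dual-socket nodes rather than 50 separate boxes, which is why the low observed utilization is the key argument.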
  8. A long time back (when Razer first announced their eGPU enclosure), I remember reading that eGPUs work only with notebooks that have discrete GPUs. Since then I haven't kept up with any of this, which is why I was concerned. Also, I failed to mention that I use Linux for most of my work. Would that be a problem?
  9. I had been looking for a Thunderbolt dock (to expand the USB port selection and to get a display output) for my notebook when I came across eGPU enclosures. It seems these enclosures, such as the Cooler Master MasterCase EG200, tick all the boxes I need from a Thunderbolt dock while also supporting an external GPU and storage, albeit at a higher premium. What I need to know is whether this would work with my notebook, since it comes with no dedicated GPU. Notebook: MSI Modern 15 A11M (Specification Modern 15 - A11S | MSI Global). CPU: Intel i5-1135G7. GPU: Intel Iris Xe. Thunderbolt 4 capable. Note: the additional cost of a GPU is of no concern, as I already own a spare 2070 Super from my desktop, and I am not hoping to utilize the complete potential of the GPU, as I know that the performance of these enclosures is lower than in a desktop.
  10. Did you go through with the upgrade? Did it work? I have a Toshiba Satellite L50-A with a 4200M and thought of giving this a try (just for fun). I've been using this laptop since January 2014 and so far haven't run into any trouble. I upgraded the RAM from 4GB to 6GB and put in an SSD replacing the slow HDD, and am getting better performance compared to newer i3 models. Maybe a CPU upgrade can pull this through another year or two... Why not reduce e-waste and save some money at the same time?
  11. Thank you. I think I now have all the information I needed. Very much appreciate the helpful comments.
  12. Thanks for all the helpful tips, guys. I found a used Tesla K40 on Amazon for a quarter of the price of an RTX 4000. I'm also looking into the services offered by AWS and BOINC. Just a few small concerns, though. If I do buy and install a K40 in my system, would the mounting be any different from that of a regular desktop graphics card (say, a Quadro P620 or RTX 2080)? Also, what would be considered safe temperatures for a K40 to run under? During the night and at weekends, all air conditioning in the university is shut down and the ambient temperature is usually close to 33-35 Celsius, while I plan to use the system 24x7. I am hoping either to attach an exhaust fan to the rear end of the card to pull air through the heatsink (but I have no idea of the required flow rate), or to remove the plastic housing and attach two small fans blowing directly onto the heatsink, as done in gaming GPUs. Which would be the more reliable and recommended option? I have never physically seen a server-grade system, so I am not familiar with the mounting mechanisms or thermal solutions, except for the knowledge that such cards are typically passively cooled. Thank you
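For the "required flow rate" question above, a first-order estimate can be made from the card's power draw and the allowed air temperature rise through the heatsink, using the heat capacity of air. The 235 W figure (the K40's rated board power) and the 15 K temperature rise are assumptions for illustration; a ducted fan should be specified with margin above this, since real fans lose flow against the heatsink's resistance:

```python
# Rough airflow estimate for force-cooling a passively-cooled card with a
# ducted fan. Assumed inputs: ~235 W board power, and letting the air warm
# by delta_t_k kelvin as it passes through the heatsink.

RHO_AIR = 1.2        # kg/m^3, air density at ~30 C
CP_AIR = 1005.0      # J/(kg*K), specific heat of air
M3H_PER_CFM = 1.699  # cubic metres per hour in one CFM

def required_airflow_cfm(power_w: float, delta_t_k: float) -> float:
    """Volumetric airflow (CFM) needed to carry away power_w watts
    with a delta_t_k rise in air temperature."""
    mass_flow = power_w / (CP_AIR * delta_t_k)  # kg/s of air
    vol_flow_m3s = mass_flow / RHO_AIR          # m^3/s
    return vol_flow_m3s * 3600.0 / M3H_PER_CFM

print(round(required_airflow_cfm(235, 15), 1))  # prints: 27.5
```

So on these assumptions the duct needs on the order of 30 CFM actually moving through the fins, which in a 33-35 C room argues for a strong fan with plenty of static pressure rather than a quiet case fan.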
  13. Thank you, I shall look into that. Really appreciate it, guys. I had not considered that. Thanks. There is no one that I know of as of yet, except for AWS, as mentioned by Stewie in the previous reply.
  14. Hi, thank you for the reply. Currently I am using mixed precision, where both FP64 and FP32 are used. Yes, it would be ideal to go for FP64; however, with the current resources I have been using the mixed option, where the computations are reasonably accurate since my systems are homogeneous. Hence I am looking more towards FP32 performance.
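The FP32 vs FP64 trade-off above comes down to significant digits: FP32 carries about 7, FP64 about 16, which matters when accumulating millions of small contributions (e.g. per-step energies in MD). A minimal sketch, not taken from any MD package, showing the drift:

```python
import numpy as np

# Illustration of FP32 vs FP64 accumulation error. Adding 1e-3 a million
# times should give exactly 1000.0; single precision drifts visibly,
# double precision stays essentially exact.

acc32 = np.float32(0.0)
acc64 = 0.0                    # Python float is an IEEE 754 double (FP64)
term = 1e-3
for _ in range(1_000_000):
    acc32 += np.float32(term)  # single-precision accumulation
    acc64 += term              # double-precision accumulation

print(acc32)  # noticeably off from 1000.0
print(acc64)  # equal to 1000.0 to ~9+ significant digits
```

This is also why mixed-precision schemes typically keep long-running accumulators (energies, virials) in FP64 while doing the per-pair force arithmetic in FP32.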
  15. Hi guys, I'm sorry if this is not a relevant topic for this forum, as I am new here. I'll explain my problem as follows. I'm working on my PhD right now, which is partially funded through a government grant, and am having difficulty buying computational hardware. 1. I am using classical molecular dynamics (mainly) to study one-dimensional semiconductor nano-materials for thermoelectric applications. I am mostly looking into Si nanowires and thermal conductivity suppression through diameter modulation. I'll provide details on exactly what I'm doing if required. 2. I am in Sri Lanka and have just started the PhD (I am the first student studying this field in the department, although my supervisor followed the same field and is guiding me through the process). 3. Currently, I am using a workstation available in the department, an Intel Xeon W-2155 (32GB RAM) with a Quadro P620 graphics card, and also a PC which I built at my own expense (i9-9900K without a GPU). 4. I am currently being paid a stipend sufficient to cover my living costs through a grant. 5. However, as I keep working on the simulations, I now understand that I am seriously limited by the available computational power. 6. I am also saving money to buy a Quadro RTX 4000 (which is the maximum I can go for), but would probably have to wait another 5-6 months until I can afford it. 7. Would anyone please consider whether it would be possible to provide me with some sort of assistance in obtaining better computational capabilities (CPU-based or GPU-based both work)? Either by providing remote access to perform the simulations, or by providing me with some decent equipment which may be getting discarded (I am willing to pay a reasonable amount along with shipping, and wouldn't mind trying to repair a broken graphics card if any are available to be spared). 8. I am open to suggestions as well, and would be able to provide a written document explaining the situation through the head of the department if required. 9. I am also open to acknowledging any assistance in research articles which may be published. I would very much appreciate it if someone considers assisting me on this to some extent. Also, I understand that excess computational power is not something that is freely available, and that what I am asking for may be way too much. Thank you
  16. The Razer Blade, definitely... I could really use an upgrade from my outdated i5-4500M to run OpenFOAM flow simulations.