SiverJohn17

Member
  • Posts

    47
  • Joined

  • Last visited

About SiverJohn17

  • Birthday Nov 09, 1994

Profile Information

  • Gender
    Male
  • Location
    Tennessee
  • Interests
    I have an interest in most subjects and things, honestly... Anime, music, science, tech, and gaming being the major ones. Mostly I listen to music, and I don't game as much as I used to...
  • Biography
    I really don't know what to put here. Just ask if you really care.
  • Occupation
    Student

System

  • CPU
    i7-4500U
  • Motherboard
    Laptop Mobo?
  • RAM
    8GB
  • GPU
    GTX 745M
  • Case
    Laptop
  • Storage
    1TB Seagate 5400 RPM
  • PSU
    Battery
  • Display(s)
    1080P (Tien?)
  • Cooling
    Again laptop.
  • Keyboard
    Das Ultimate S Ultimate (and of course laptop)
  • Mouse
    Corsair M95
  • Sound
    HD8 DJ
  • Operating System
    Windows 8.1 (Possibly Arch soon)
  • PCPartPicker URL

Recent Profile Visitors

1,019 profile views

SiverJohn17's Achievements

  1. I don't know if this is off topic, but it seemed the correct place to put it, and excuse the memey title. Tuesday I defended my PhD dissertation, one of the last steps before you become a PhD. I had intended to record it but forgot, so sadly there's no video evidence (or way for family/friends to view it after the fact). However, right before I got to questions, I took a drink from my water bottle and said "LTTStore.com". I just wanted to thank the LTT staff for being a source of entertainment for a large section of my college journey (I began watching in earnest when I broke my leg in 2014) and for actually being incredibly useful in my research (I built five computers and spec'd two more during the parts shortage, when it was easier to buy from system integrators for my lab, as well as all the troubleshooting I had to do). This even led to one of those builds and my PCPartPicker list being referenced in a research paper. I've never regretted what I considered my "initial investment" into LTT, as signified by my rare forum badge. For the general people potentially reading this: my field could be charitably called computational biophysics, and feel free to ask me any questions.
  2. I have been debating this for a while, and while there are alternatives from both ONYX and Sony, the pricing is similar. The main reason I haven't bought one is that, thinking back to one of my original Kindles from years (a decade, even?) ago, I never preferred it over a physical book. I even gave it to my grandmother. Maybe the technology has improved greatly since then, but I doubt it. Granted, I'd mainly use one for reading research documents, so YMMV. Especially because the way I read constantly requires flipping pages (though that wasn't true back in the day when I was reading novels...).
  3. No, I respect that, and worst case scenario it starts a discussion internally if they hadn't thought of it. Not a bad idea to cover all bases.
  4. I'm assuming LMG (Floatplane?), if they do a price increase right out of the box, will at some point have a sunset clause for those who have the better rate. Or, taking your troubles into account, they'll have something like the first week of people getting the lower rate. Linus/Luke don't seem the type to be unreasonable, and I'm sure they're very aware of this issue (their bitching about the payment system heavily implies that).
  5. I reinstall any time I feel like I have too much junk on my computer. I'm thinking about doing a fresh install on the laptop I'm currently using, and I did the most recent install ~6 months ago. I run an Arch build, so it takes seconds to rebuild though.
  6. Depends. The minimum I'd say is about 30; the ceiling is generally around 70. Though if we're talking across all devices I regularly use, I'd say the current number is well above 100. I mainly have several Chrome windows open with various projects in them and will close out of them as I complete them. Or give up on them.
  7. Sorry I missed this. Interestingly my weakest subject is probably chemistry. My undergraduate was in physics and biology. But yeah, I know what you mean. I thankfully knew where I was going from the start of undergraduate.
  8. I am actually just a first-year PhD student doing a rotation in a computational chemistry group. However, I have done some self-study on this stuff over the years, as I have kept a constant interest in hardware. So much so that even though I am the newest member of the lab, I have become the de facto tech guru. It's a fun job, though my rotation is ending this week. Edited: For derp.
  9. It depends on how your initial code is written, because thread parallelization isn't how you squeeze most of the performance out of a Phi. You also have to take vectorization into consideration, and if your code isn't written with that in mind, it can be just as much of a pain as rewriting parts in CUDA. Granted, if you've written your code to be nice and modular, it shouldn't be as terrible. The same is true for the Phi: if you noticed, in the video Linus talked about the new AVX-512 instruction set, and if you write code optimized for that, it won't work on the older CPUs (it'll of course work on the other Xeons, though I don't know about AMD support). However, basic code on either platform will work on any other; basic SAXPY code written for a Fermi card will work on Pascal. The trouble begins when you start to get architecture-specific, but since these were mainly built for the world of HPC, leaving that off the table isn't an option. So that'll always be a problem no matter what you use.
  10. So, two problems with that. One, you are correct that the suite is CUDA-only. It is an out-of-house software solution, though even if it were in-house I'd probably develop in CUDA for some of the extra luxuries, and (correct me if I'm wrong, because it has been a few years since I've looked into it) even with OpenCL you still have to code for the specific GPU architecture you want to use. The second problem, and personally more aggravating to me, is that the code can only be run on a single GPU, so this would get us less performance than a Titan XP for more cost.
  11. Actually, and I didn't realize this when I made my first comment, it most certainly can. According to Wikipedia at least (I had trouble finding this on the F@H website), Folding@home utilizes GROMACS, which is known for its strong scalability. As well as its optimization, so much so that it gets relatively minor improvements going from CPU to GPU, making it a perfect application if someone has a couple grand they want to spend just for the hell of it.
  12. It would definitely be beneficial; how beneficial depends on exactly how that code is implemented, something I am ashamed to say I am ignorant of (considering it is close to my field of interest).
  13. I apologize if this sounds condescending, as I am unaware of your knowledge base. However, while the statement that the code doesn't have to be significantly modified to run on it is true, if you wanted to get the most performance out of it you'd definitely have to rewrite it to be more vectorizable, which has its own challenges. Of course, if you've already written your code for that purpose, then sure, slapping one of these in versus a GPU is a huge difference. That being said, I agree with your comment that it's interesting they made it a primary processor instead of a coprocessor. Honestly, that is most of the reason I've lost interest. Also, the comment from Linus about it being good for neural networks is a bit strange, considering Intel announced the Nervana chip to handle that end of the market.
  14. To give the standard answer: it depends on what your workload is. If we're talking about gaming or something that needs high single-core performance, then yes, there are huge drawbacks, because you basically have four threads competing for one core. However, for certain applications, if you are intelligent about how you do your threading, you could get some respectable speedups. Edit: Forgot to answer the last part of the question. No, I don't see this coming to consumer any time soon; it'd be unnecessary for most applications I can think of.
  15. So, fun fact: I am actually the customer for products like these. And I straight up don't care about Xeon Phis. They were interesting a while ago, but from my perspective there is no reason to use one of those chips over your standard GPU. In fact, my lab uses off-the-shelf Titan XPs for most of our simulations. For most of our purposes these cards are the best option, with only the new Tesla V100s being more powerful. Standard disclaimer: our simulations only utilize single-precision floating point, so pure FLOPS are fine. That being said, I (being the odd man out in our lab) want to get a V100 for a personal project. TL;DR: even for HPC applications, Xeon Phis are mainly irrelevant.