BillytheSkids

Member
  • Posts

    102
  • Joined

  • Last visited

Awards

This user doesn't have any awards

1 Follower

  1. It seems so clear in hindsight that WinRM was unable to process the request on the client side, not the server side. I ran winrm quickconfig on the server so many times that I couldn't understand why it wouldn't process my request. I'm glad to know the issue was on my end, though, as I was considering rolling back the OS to prevent any other issues going forward. Thank you so much; I guess this is my first learning experience in Hyper-V, and I hope the rest aren't as frustrating.
  2. I've been using ESXi and agree that overall I like their software better; it is probably also far more stable, as they have been doing this longer than anyone. I am transitioning my lab for my own knowledge and understanding, as I'm finding that more and more companies want to enter the cloud era, and their vehicle for doing so is Hyper-V. I believe the cheap cost of entry is the primary reason I encounter it so much, but I think the cost of support is where it will get them in the end.
  3. Hello all, I've recently been encountering more and more businesses running Hyper-V, and with my desire to become more intimately familiar with it I decided to change my lab over from ESXi to Hyper-V 2012 R2. I am using old retired hardware for bare-metal machines in a Workgroup configuration, as I have a multi-platform lab. I must say the plug-and-play nature of Hyper-V over ESXi was definitely nice on this hardware, but I immediately hit a wall. After my initial attempt at configuration I received an error. I began researching online and tried running hvremote and numerous other methods of configuring the connection to the server with Hyper-V Manager on my recently installed Windows 10 laptop. The error states "An error occurred while attempting to connect to server" and "The WinRM client cannot process the request." The error suggests configuring TrustedHosts, and when I attempt to force-add the server to TrustedHosts it states WinRM cannot process the request. Just for clarification, I have the server listed both in my DNS server and in the laptop's Hosts file. I have enabled Anonymous Logon in Component Services and opened the correct ports on the firewall. I have also launched a Windows 8.1 VM on the same laptop and been able to access the server with no issue. I have gone through the MSDN documentation on what's new for Hyper-V in Windows 10 and found no source of this problem; it even lists Windows Server 2012 R2 as compatible. Is there something I am missing in Windows 10 in a Workgroup that is causing this? While I appreciate any help, I was more curious whether this is something anyone else has encountered. There appears to be simultaneously no documentation of this issue and no instruction or documentation of this configuration post-release.
  4. PeggersXtreme, I'm operating purely from memory on this, so pardon me if I'm a bit off, but I believe current M.2 production maxes out at PCIe x1, which tops out at around 1 Gbps. Taking away about 20% overhead for communications traffic would put you in the 800 Mbps range. Like I said, I could be mistaken, but this would mean their rated speeds are way off and not possible. It could also be a factor that your board's M.2 slot is only one lane while the SSD is built for four lanes (I'm not sure if this is adopted or standardized yet). Just some thoughts that might put you on the right path for diagnosing your problem. Hope this helps.
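The back-of-the-envelope math in the post above can be sketched as a one-liner. Note that the 1 Gbps link speed and the 20% protocol overhead are the poster's own rough assumptions, not measured or specified values:

```python
# Sketch of the bandwidth estimate above. The link speed and overhead
# fraction are the post's illustrative assumptions, not spec values.

def effective_throughput_mbps(link_gbps: float, overhead_fraction: float) -> float:
    """Usable throughput in Mbps after subtracting protocol overhead."""
    return link_gbps * 1000 * (1 - overhead_fraction)

print(effective_throughput_mbps(1.0, 0.20))  # 800.0 Mbps, matching the estimate
```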
  5. blabla21, I don't know from personal experience, but from what I have read many times on this forum, there is no difference and the 7.1 is poor quality. Most recommend buying the Cloud 1s over the 2s, as the $10 price difference is unnecessary. I'd say keep what you have unless you want to upgrade to higher-quality gear for a lot more money. Hope this helps.
  6. ^ <- Nailed it! Trololololol
  7. cdsboy2000, This question is a matter of personal preference. Xeons can use ECC RAM and therefore should maintain reliable transcoding, removing the 0.1% chance (an exaggeration to illustrate my point) of having to re-encode. However, if you aren't using ECC RAM then this additional reliability goes unused, and you lose the ability to overclock for faster encodes. A simplified way to explain the core-to-speed relationship is this: your base speed determines the time to encode a single frame, while additional cores allow you to render multiple frames at once. So, for example, if a 2.0 GHz processor encodes a frame in 20 seconds, a 4.0 GHz processor will encode it in 10 seconds (an approximation for educational purposes). With these estimations, a 4-core at 4.0 GHz will theoretically process 8 frames in 20 seconds, the same as an 8-core at 2.0 GHz. Many factors determine whether additional speed matters and by how much, but the i7 at base speeds will be about 20% faster than the Xeon 1231, so a 10-minute Xeon render will take roughly 8 minutes on the i7. Whether 2 theoretical minutes is worth the money to you is a personal choice. Personally, I overclock with EXTENSIVE long-term reliability testing on every piece of hardware and then go one extra notch down for peace of mind: after I verify a 4.5 GHz overclock as 48-hour stable, I take the speed down to 4.4 GHz. My process is probably more obsessive than necessary, but to me one bad transcode doubles transcoding time more than any overclock will save. Hope this helps.
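The frame-throughput arithmetic in the post above can be sketched as a small helper. The function name and the numbers are illustrative only, and it inherits the post's simplifying assumptions (encode time scales perfectly with clock, each core works on its own frame):

```python
def frames_per_interval(clock_ghz: float, cores: int, interval_s: float,
                        base_clock_ghz: float = 2.0,
                        base_frame_s: float = 20.0) -> float:
    """Frames encoded in `interval_s`, assuming encode time scales
    inversely with clock speed and cores encode independent frames
    (the post's simplifying assumptions, not a real benchmark)."""
    frame_time = base_frame_s * (base_clock_ghz / clock_ghz)
    return cores * (interval_s / frame_time)

print(frames_per_interval(4.0, 4, 20))  # 8.0 frames: 4 cores at 4.0 GHz
print(frames_per_interval(2.0, 8, 20))  # 8.0 frames: 8 cores at 2.0 GHz
```

Both configurations come out even, matching the post's 4x4.0 GHz vs 8x2.0 GHz example.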
  8. ICSvortex, It really depends on what your end goal is and what your definition of "real" programming is. Every business and industry has its own requirements and vision, and beyond that some are trying to make early-adopted "old" technologies compatible with newer ones (older databases, etc.). You can learn literally any language and there will be a need for it; there is no one-size-fits-all standard (I know of a government contractor who uses ColdFusion and just upgraded to ESXi 5.0, for example). Hope this helps.
  9. Badarang, To be honest, with only a $100 difference I would wait, save, and get everything that I want. Sacrificing performance in one way or another over a 10% price difference out of impatience does not seem wise. Hope this helps.
  10. terere93, Regardless of this, the graphics card only affects in-application previewing. Rendering and final exporting are 100% processor-based. This means the person you are getting this graphics card for may not see any difference in CAD performance even with a 980 Ti. CUDA, Stream, etc. do not affect this. Hope this helps.
  11. terere93, Dietrichw is absolutely correct. One thing I will point out is that the current GTX line is not on the officially approved or recommended compatibility list. While the GTX 960 may have better performance, the application may not take advantage of it yet. That being said, I would personally go with a GTX 970, as its price-to-performance ratio makes it a great card, especially with applications that can make use of it (Adobe Premiere scrubbing, etc.). Hope this helps.
  12. cdsboy, For video editing and rendering, the additional cores will usually be more relevant to you than an overclockable high-GHz processor. Xeons are not overclockable, so if overclocking is a must for you then the 4790K is what you will want over a Xeon. Hope this helps.
  13. Be sure that using UNetbootin is a supported and recommended way of creating an installation USB, as some Linux distros do not support it. Beyond that, there is something wrong with your configuration for booting from USB. Without going through the BIOS myself it's tough for me to determine, as every manufacturer and model changes their BIOS. Try pressing F12 to choose and force the boot device to USB. If you are sure it is booting from the USB device, check the distro's forum for advice. Sometimes you may have to go with an older version of the distro or a custom kernel if you are using old or unsupported hardware. No clear direction for you, but hope this helps.
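When a distro doesn't support UNetbootin, the usual fallback is a raw image write (what `dd` does on Linux). A minimal sketch of that chunked copy, with hypothetical file paths; on a real device this is destructive, so double-check the target first:

```python
def write_image(src_path: str, dst_path: str,
                chunk_size: int = 4 * 1024 * 1024) -> None:
    """Copy a disk image to a target device (or file) in fixed-size
    chunks, like `dd bs=4M`. WARNING: overwrites the destination."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            chunk = src.read(chunk_size)
            if not chunk:
                break
            dst.write(chunk)

# Example (hypothetical paths -- verify the device node before running):
# write_image("distro.iso", "/dev/sdX")
```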
  14. Fobus, In answer to your question, as I believe someone else pointed out, it doesn't affect it. RAM will typically be the more heavily used resource, and you don't gain a huge advantage going from 30% load on 4 cores to 20% load distributed across 8 threads. You are still underutilizing the processor; computers are not as simple as "double the GHz and double the cores is four times faster." You lose performance to things like distributed-thread efficiency, application coding, and OS process balancing. A simple way to explain the problem is that for every thread you want to use, you have to run another process to manage that additional thread. So in summary, it doesn't matter for multitasking, and there is no benchmark I am aware of that tries to test this. Hope this helps.
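The diminishing-returns point in the post above is the same idea as Amdahl's law. A toy model, with a linear per-thread management cost bolted on to stand in for the coordination overhead the post mentions; the parallel fraction and overhead numbers here are purely illustrative, not from any benchmark:

```python
def speedup(threads: int, parallel_fraction: float,
            per_thread_overhead: float = 0.0) -> float:
    """Amdahl's-law style speedup with a simple linear per-thread
    management cost added (an illustrative model, not a benchmark)."""
    serial = 1.0 - parallel_fraction
    total_time = (serial
                  + parallel_fraction / threads
                  + per_thread_overhead * threads)
    return 1.0 / total_time

print(round(speedup(4, 0.9), 2))                            # ~3.08x
print(round(speedup(8, 0.9, per_thread_overhead=0.01), 2))  # ~3.42x
```

Doubling the thread count here buys only a modest gain once the serial portion and the per-thread cost are accounted for, which is the post's point about more threads not translating into proportional performance.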