About tech.guru

  1. Run mdsched.exe and schedule a memory test on the next reboot, then confirm you don't have any memory errors.
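A quick command sketch of those steps (Windows-only; the event provider name is the standard one the Memory Diagnostic writes its result under):

```shell
:: schedule the Windows Memory Diagnostic on the next reboot
mdsched.exe

:: after the reboot, pull the result out of the System event log
powershell -Command "Get-WinEvent -FilterHashtable @{LogName='System'; ProviderName='Microsoft-Windows-MemoryDiagnostics-Results'} | Select-Object -First 1 | Format-List"
```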
  2. They have the numbers: how many people used the feature and how often. You're missing the point and conflating yourself with the entire user base Google concerns itself with. They have many user groups to consider when designing and applying interface changes, and you're ignoring that fact. I won't deny it may impact your workflow, or others' when trying to find an image; however, the feature must not have been used much by the other 99.99% (or whatever Google saw) of user groups who just wanted to search for an image.
  3. This is your opinion, and it isn't based on fact. Google has to appeal to billions of users, and the big reason is simplicity. The same arguments are made over and over by power users whenever developers streamline a UI; the list goes on: Firefox, Office, Windows. Developers have the hard task of keeping the existing user base happy while making the interface simple and easy to use. The overall industry is going towards minimalism (see https://www.nngroup.com/articles/characteristics-minimalism/), and a banner takes up space that could be used for displaying images. It also introduces an option users may accidentally turn on, get confused by, and find difficult to turn off. You'll have to accept this fact: the service isn't designed just for you, but for billions of other users.
  4. It's not frequently used, or Google wouldn't have removed it. The average user doesn't even understand resolution or size; it was no doubt removed to clean up the interface, and many developers do this to their programs over time. It's still available through advanced search if you need it. Look at the Google homepage and you'll see this fits their philosophy and why they've been so successful: a simple, clean page to just search, with minimal clutter.
  5. Guys, before you blow a gasket, just use the advanced search: https://www.google.ca/advanced_image_search. They just got rid of the drop-downs to clean up the interface.
  6. Here's a link: https://www.citrix.com/lp/try/citrix-networking-vpx-express.html. It has been upgraded to 20 Mbps, but Citrix Gateway was removed from the free edition. You're not using VPN on this appliance and want to deploy inside the network, so this seems like a good fit: Express is basically Standard edition, which includes high availability and all the features needed to load balance DNS. Since it's virtual, it will be easy to upgrade the license and assign more cores and memory if you decide to expand its use to more network services.
  7. You would just put the virtual IP of the DNS load-balancing vserver on the clients' DNS server list. All other traffic would go to the domain controllers as it does today, but you would be able to use this to load balance other network services too: LDAPS, HTTPS, DNS, NTP, RADIUS, Exchange, SQL, etc. For your requirement you could even go with the free VPX, since DNS is very lightweight: https://www.jasonsamuel.com/2011/03/02/citrix-announces-free-5-mbps-vpx-express-and-free-platinum-edition-vpx-developer/. If your requirements grow, then you could consider a more expensive license. Just note that the Citrix management analytics system is a premium feature and isn't available in the free edition.
  8. Not true. The virtual IP is a floating IP that is assigned to the primary device, in the NetScaler's case. If a heartbeat is lost between the pair, a failover event from primary to secondary occurs: the secondary device takes ownership of all services, and a GARP is sent so the ARP tables for the VIP are updated to point at the new device. You can also use vMAC if GARP is too slow or blocked on your switches.
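For the vMAC case, a shared virtual MAC is derived from a virtual router ID bound to the data interface, so the MAC follows the failover instead of relying on GARP. A minimal NetScaler CLI sketch (the node IP, vrID, and interface number are made-up example values):

```shell
add HA node 1 10.0.0.2     # pair this unit with the secondary at that NSIP
add vrID 10                # virtual router ID; derives the shared virtual MAC
bind vrID 10 -ifnum 1/1    # bind the vMAC to the data interface
```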
  9. The benefit of buying a load balancer is that you gain an appliance that can be used for many other things. The NetScaler can also be coupled with the management analytics system to monitor and alert you of issues and report response times. In addition, it supports rate limiting and some advanced features to protect your DNS infrastructure from attacks.
  10. You can use a high-availability pair of NetScalers to load balance the DNS servers; this can be accomplished very easily. In addition, you can load balance LDAPS, which is probably another service that is currently a single point of failure. Get a couple of NetScaler VPXs for this project, or bigger MPXs if you want to load balance more services. It has probes for DNS: you give it a record to query and the expected response, and if the probe fails, the NetScaler marks that server as offline and moves to the next one.
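A minimal NetScaler CLI sketch of that setup (all names, IPs, and the test record are made-up examples; the monitor queries a record and checks the answer, exactly as described above):

```shell
add lb monitor dns_mon DNS -query host.example.com -queryType Address -IPAddress 10.0.1.50
add server dns1 10.0.1.11
add server dns2 10.0.1.12
add service dns1_svc dns1 DNS 53
add service dns2_svc dns2 DNS 53
bind service dns1_svc -monitorName dns_mon
bind service dns2_svc -monitorName dns_mon
add lb vserver dns_vip DNS 10.0.0.53 53    :: this VIP is what goes on the clients' DNS list
bind lb vserver dns_vip dns1_svc
bind lb vserver dns_vip dns2_svc
```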
  11. It provides 540 W on the 12 V rails for the video card and CPU. Your video card requires 215 W and your CPU 65 W, so the answer is yes.
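The headroom math, spelled out (215 W and 65 W are the figures from the post; real draw under load varies, so treat this as a rough check):

```shell
gpu=215            # video card draw in watts
cpu=65             # CPU draw in watts
rail=540           # what the PSU delivers on the 12 V rails
total=$((gpu + cpu))
echo "$total W drawn, $((rail - total)) W of headroom"
# prints: 280 W drawn, 260 W of headroom
```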
  12. It's clear you don't know what you're asking. I think what you want is for your clients to connect to your internal network through your ISP, with internet browsing happening over the VPN connection. You could do this quite easily: host a VPN server, NAT'd or on a public IP from your ISP, and provide an internal subnet on your network for these VPN clients, with no local access to anything else on the network. You should consider a separate router from your home network. You would make the third-party VPN service the internet gateway for that internal subnet. Note: this is a significant risk to you, and I'm not sure it would be allowed under your VPN provider's ToS.
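On a Linux router, that "the VPN subnet gets a different internet gateway" split can be done with policy routing. A sketch, assuming the third-party tunnel is up as tun0 with gateway 10.8.0.1 and the VPN clients land in 10.99.0.0/24 (all made-up example values):

```shell
# give the VPN-client subnet its own routing table
echo "100 vpnclients" >> /etc/iproute2/rt_tables
# anything sourced from that subnet consults the vpnclients table...
ip rule add from 10.99.0.0/24 table vpnclients
# ...whose default route points at the third-party VPN tunnel
ip route add default via 10.8.0.1 dev tun0 table vpnclients
```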
  13. Expensive is relative; compare it to other competing solutions, such as VMware Virtual SAN. The user mentioned 80 TB worth of data: that's a large amount of data not to have a redundant solution for, especially when he mentions concerns around a failure that has already happened. I stand by what I said: go with either a vSAN or a physical SAN solution, based on your tolerance for risk. Microsoft was suggested based on his comments about moving to Windows Server.
  14. You can use the following advice from Windows PE on the installation media: https://docs.microsoft.com/en-us/windows-hardware/manufacture/desktop/capture-and-apply-windows-using-a-single-wim#related-topics. Replace the imagex command with an xcopy command (you will need to find out the source and destination drive letters): xcopy c:\ f:\ /s /e /h /i /c /y. Some freeware utility could be used instead of the above. If you run the diskpart script, make sure disk 0 is actually the new drive, as it will wipe out any data on that drive. You may wish to run the diskpart layout script with the source drive disconnected, to be safe.
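A sketch of that flow from a Windows PE prompt (drive letters, the disk number, and the script name are assumptions you must verify against diskpart's `list disk` output, since `clean` is destructive):

```shell
:: layout.txt -- a diskpart script; ASSUMES disk 0 is the NEW, empty drive
::   select disk 0
::   clean
::   create partition primary
::   format quick fs=ntfs
::   assign letter=f
::   active
diskpart /s layout.txt

:: then copy everything from the source drive (C:) to the new one (F:)
xcopy c:\ f:\ /s /e /h /i /c /y
```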
  15. You will if you implement the technologies built into Windows Server. There are solutions using Scale-Out File Servers: they use a Microsoft cluster with cheap local storage, and you can cluster and replicate at the block level. That would be perfect for file servers for your video editing, where you're working on large files. Newer client operating systems (Windows 8 and above, which speak SMB 3.0) support transparent failover: if one file server went down, the other could take over, and clients would just notice a brief pause but otherwise be uninterrupted. You can also use the built-in Storage Replica to keep both local disks in sync. It's very compelling: it would provide better performance than a single server and eliminate the failures you mention causing an outage. Large data sets such as videos are a perfect fit for an active-active file cluster, which can provide excellent performance and reliability without buying an expensive SAN.
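A Storage Replica sketch in PowerShell, along the lines described above (server names, replication-group names, and volume letters are made-up examples; Storage Replica requires Windows Server Datacenter on both nodes):

```powershell
# validate the proposed topology first; paths and duration are example values
Test-SRTopology -SourceComputerName SRV1 -SourceVolumeName D: -SourceLogVolumeName E: `
    -DestinationComputerName SRV2 -DestinationVolumeName D: -DestinationLogVolumeName E: `
    -DurationInMinutes 30 -ResultPath C:\Temp

# then create the partnership that keeps both D: volumes in sync
New-SRPartnership -SourceComputerName SRV1 -SourceRGName rg01 `
    -SourceVolumeName D: -SourceLogVolumeName E: `
    -DestinationComputerName SRV2 -DestinationRGName rg02 `
    -DestinationVolumeName D: -DestinationLogVolumeName E:
```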