
hellcats

Member
  • Posts

    8
  • Joined

  • Last visited

Awards

This user doesn't have any awards

Profile Information

  • Gender
    Male
  • Location
    Dallas
  • Occupation
    Programmer

Recent Profile Visitors

850 profile views

hellcats's Achievements

  1. I just started seeing this too. It started after updating Office yesterday. Someone at MS accidentally left the subsystem set to "Console" for a daemon process. You can use "secpol.msc" to enable logging of processes when they launch or terminate (even if only for a split second): run "secpol.msc", go to Local Policies -> Audit Policy -> Audit Process Tracking, and enable both "Success" and "Failure". Then open the Event Viewer, go to Windows Logs -> Security, and turn on filtering for "Audit Failure, Audit Success" (there's a command-line sketch of the same steps after this list). Hopefully MS will fix this soon.
  2. I think worrying about automation displacing humans, and worse, setting policy based on it, is a bit like putting the cart before the horse (in keeping with the analogy in the video). First off, horses didn't create the internal combustion engine that displaced them, and I am willing to go out on a limb here and state that no horse ever worried about it either. As the video pointed out (8:13), machine learning (ML) and AI today are all about optimization: finding a function that minimizes some measure or metric (norm, energy, cost, error, etc.). What's happened recently that's different from what came before is that if you scale up to big data and fast computers with lots of memory, you can actually solve some interesting problems (instead of just toy problems) using optimization (the toy sketch after this list shows what that boils down to). But there hasn't been any fundamental theory of thinking or strong AI that would do what the video is claiming, which is that machines will be able to perform (and be better at) all the cognitive tasks that humans are capable of, including creativity. The example of music composed by a computer isn't really an example of machine creativity. A human wrote a program to analyze patterns in music that humans created, chose a set of desired properties, and then performed a mathematical optimization procedure, the result of which is a data-driven system for producing "music" that exhibits those properties. The computer didn't think to start learning about music on its own; it didn't have a desire to create beautiful music - it has no concept of "beauty" and doesn't know why it composes what it does anyway. I think the problem here is primarily one of semantics: we use terms like "machine learning" and "artificial intelligence", which are loaded with all sorts of anthropomorphic baggage. Yes, automobiles that can navigate fairly complex environments and do it more safely on average than human drivers are a thing. Yes, computers can beat humans at chess, poker, go, and many other games. Yes, computers can find ways to route wires in your CPU chip better than any human. This is all just mathematics. Until some theory of thought or cognition is developed and validated, we are just seeing a continuation of the same technological progression that has been going on for thousands of years, ever since that big black monolith showed up on the savanna early one sunny morning.
  3. I considered that; in fact I went round and round between the Skylake 6700K and Haswell-E. The IPC improvement from Haswell to Skylake apparently isn't all that great, and I also have other use cases besides VR and gaming. I thought that X99 would give me the option of upgrading the processor either to Broadwell-E (next month, maybe?) or to a 5960X, which will probably be more affordable once Broadwell-E comes out. Kaby Lake isn't due until late this year sometime, and apparently it includes a new platform (the 200 series). The thing that really kept me up at night was whether I could overclock Haswell-E to compete with stock Skylake frequencies, but according to most people it is easy to get at least a 1 GHz overclock with Haswell-E. So if that's true, then I'm willing to give up ~5% for the added bandwidth, larger L3 cache, and upgrade path. I did fully spec out a Skylake configuration, but it wasn't a whole lot cheaper than the 5820K that I finally decided to go with. But I could very easily have made the wrong choice... keeping my fingers crossed.
  4. I spec'd the wattage with an eye towards adding a Pascal card when they come out. I'd use the 980 Ti for graphics and the Pascal card for compute.
  5. I'm currently running an AMD Phenom II system sporting an Nvidia GTX 470 GPU from about 6 years ago, so the 5820K will be a huge upgrade. The reason I'm in such a hurry to upgrade is that I ordered an Oculus CV1 (which should hopefully arrive later this month), and the Phenom box hardly meets Oculus' minimum system specs! I ordered everything last week, so I should be building my shiny new system this week. I picked up 64 GB of DDR4-3000 CL14 RAM ($419 @ NewEgg). I don't know why - I just wanted it. Chances of actually achieving that memory bandwidth with the CPU overclocked??? No idea, but I'll give it a try. I also got a good deal on a Samsung 950 Pro M.2 SSD. I'm intrigued by RAID0 performance using M.2 and Intel RST (see here), so I'll probably get another one after the system is up and running. Besides gaming and VR, I'll be using the system for C++ programming, so I wanted something I could fire off parallel compiler instances on. I'm not much of a H/W guy, but I've been watching tons of LTT videos and scouring the inter-webs for component reviews etc. This will be by far the most ambitious build I've ever attempted - hope I don't screw the pooch! PCPartPicker Link:
     • Intel Core i7-5820K 3.3GHz 6-Core Processor ($319 at Microcenter)
     • NZXT Kraken X61 106.1 CFM Liquid CPU Cooler
     • Asus X99-DELUXE/U3.1 ATX LGA2011-3 Motherboard
     • G.Skill Ripjaws V Series 64GB (4 x 16GB) DDR4-3000 Memory
     • Samsung 950 PRO 512GB M.2-2280 Solid State Drive ($287 at Microcenter)
     • Samsung 850 EVO-Series 500GB 2.5" Solid State Drive
     • Asus GeForce GTX 980 Ti 6GB STRIX Video Card
     • Phanteks Enthoo Evolv ATX ATX Mid Tower Case
     • EVGA SuperNOVA 1000G2 1000W 80+ Gold PSU
     (P.S. I know that the LHC is the Large Hadron (not Hedron) Collider. Hedron refers to polyhedron, which I use in my work.)
  6. Beautiful machine. I've pretty much decided on the Evolv ATX and X61 myself (though I still can't decide between the 5820K and 6700K, but that's a personal problem :). How are the temps, and how loud is it? Is it annoying at all? Do the Corsair SP fans perform as well as the stock cooler fans (or maybe better)? Do you have the radiator fans blowing out of, or sucking into, the case? If blowing out the top, does the lid of the Evolv impede airflow at all? Have you encountered any pump noise with the X61? Why am I asking so many questions (another personal problem)?
  7. For me it was a weird time, before Open Source existed, when some friends and I were able to take a class compiler project and turn it into a product. You really couldn't do that today, but there are always new opportunities. The LLVM project is changing the rules in so many fields today; I bet LLVM experts can name their own price right now. But I'd say just keep up with technology trends, and do deep dives when you can so you really understand why people are choosing a technology. It's no use just chasing the latest buzzwords; you really have to understand what the problem is and how it is being solved. Don't just recite "best practices" or defer to authority; keep digging until it makes sense to you in your own way. Once you do enough of this, you'll be the one creating the next hot tech!
  8. "Never" and "always" are so final. Integral values can be represented exactly in IEEE format (up to the precision of the mantissa), so you could store bank balances in cents (a.k.a "fixed point" arithmetic). But in general, comparisons are fine as long as you understand what is going on. I've witnessed so many otherwise smart programmers say really stupid things about floating-point (examples: "there is no floating point compare instruction on the CPU" (it's called FCOM by the way), or "floating point numbers are not exact, so they can't be compared"). Is it just math anxiety? I think the problem is conflating computer arithmetic with real world arithmetic. For me, it helps to view computer arithmetic as just a bit-twiddling API independent of actual mathematics. In this API view you can compare values for equality, you can get deterministic results, and you can successfully write numerical code. You just have to understand what the functions do - as you would with any other API. This is definitely an area needing more attention. David Eberly was working on a book covering all aspects of computer arithmetic a few years ago, but his publisher canceled the project fearing there wouldn't be enough demand. How sad.