About Fourthdwarf


  1. The var tag is just a way of delimiting the 'numbers' - if I didn't have some kind of delimiter I'd just be using concatenation without HTML, which is not the point of the exercise. The semantics of the var tag are almost beside the point - all I care about is that the two strings are concatenated, because concatenation is an operation equivalent to addition. Using <var> was just a joke, which I could make because the particular choice of tags was arbitrary. This is clearly not a serious way of doing addition - but it is a technically correct way of doing addition. My point is, any definition of programming language must either include HTML, or exclude domain-specific languages that clearly *are* programming languages.
  2. As others have said, this is a really bad criterion. There are no Turing-complete programming languages (because there are no infinite-tape Turing machines!). Of course it can. Because we're doing programming, let's use the <var> tag to encode numbers. I'm going to encode each number as a string - the value of the numeral is the number of characters in the string. For example, 6 + 7: <var>Hello,</var> <var> World!</var> generates a string 13 characters long, which in my encoding means it is the numeral 13, which is 6 + 7. Therefore you can add numbers by doing "Hello, World!".
  3. No idea. I do know that it's probably written by people who use Linux rather than Windows. I also know that many Linux distributions tend to be less resource-intensive than Windows - some more than others! It also depends on a lot of things. If you want to render on GPUs, it may depend on whether you use AMD or nVidia cards (for cards of equal performance on Windows, AMD will generally outcompete nVidia on Linux for performance/convenience).
  4. So, of the options I mentioned, PyPy (the Python implementation built with RPython) is actually faster than CPython (the normal version of Python), but LLVM is what the Clang C compiler is built on, so it may be the better option.
  5. I'm going to say that this is bad advice! In part, it depends on what you want to do with your language, but it also makes for an unusable language: nobody wants to use a 6502 in 2019. With modern compiler-building tools, targeting something like LLVM would be good for procedural languages, as it provides optimisation, register allocation, and every backend you could want. You would have to learn SSA form, and how to use phi instructions, but it's actually easier than most assembly languages. If the language is very similar to Python, though, you may want to build an interpreter on top of Python rather than a compiler. Then you can use the Futamura projections to collapse the tower of interpreters into a compiler. If you're willing to dive deep into the tools, the PyPy project provides tools for building a JIT compiler this way with the RPython toolchain, though RPython is a restricted subset of Python. This is kind of an arcane way of doing it, and won't teach you much about traditional compilation techniques, but it is also fascinating and fun. +1 on the dragon book(s). Perhaps a little dated, but still good.
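For a sense of what "an interpreter on top of Python" means before any Futamura magic, here's a minimal tree-walking sketch; the AST shape is invented for illustration, not from any particular toolchain.

```python
# Minimal tree-walking interpreter sketch. Nodes are tuples whose
# first element names the construct; this AST shape is made up
# purely for illustration.

def evaluate(node, env):
    kind = node[0]
    if kind == "num":    # ("num", 3) -> literal value
        return node[1]
    if kind == "var":    # ("var", "x") -> look up a binding
        return env[node[1]]
    if kind == "add":    # ("add", lhs, rhs) -> evaluate both sides
        return evaluate(node[1], env) + evaluate(node[2], env)
    if kind == "let":    # ("let", name, value, body) -> extend the env
        return evaluate(node[3], {**env, node[1]: evaluate(node[2], env)})
    raise ValueError(f"unknown node: {kind}")

# (let x 2 (add x 5)) evaluates to 7
ast = ("let", "x", ("num", 2), ("add", ("var", "x"), ("num", 5)))
print(evaluate(ast, {}))  # 7
```

Partially evaluating `evaluate` with respect to a fixed `ast` is exactly the first Futamura projection: the residual program is a compiled version of that AST.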
  6. Apple have basically abandoned industry-standard graphics APIs and gone with their own thing. Unless you specifically want to target macOS, you'll likely find more and better tools on other machines. This isn't going to matter much with basic Unity stuff, but if you end up going into deep graphics magic, you may find Vulkan/DX12 better supported at the bleeding edge than Metal, and if you're developing an indie game, those two will let you reach a wider audience. But unless you're doing some really experimental stuff, that shouldn't matter too much, and you could just go with an older version of OpenGL. Going with macOS, since you're familiar with it, may serve you better in that case.
  7. Puppy Linux is likely your best bet. It loads the OS (~210 MB) entirely into RAM in order to negate the slowness of the USB drive. TinyCore/Core does this as well, but they aren't as full-featured as Puppy.
  8. I'm unfamiliar with any implementation of ray tracing that works purely by operating on an acyclic graph. I can see the argument that you produce a DAG in 3D space, but that's not a graph problem. RTX does use tree-shaped acceleration structures (BVHs), but it has specific hardware to accelerate traversing them, and this only accelerates ray tracing (by culling objects) AFAIK. BSP trees are used to similar effect. But in both cases you have acyclic graphs with relatively few edges, as opposed to well-connected graphs with cycles, which can cause issues. Also, it's only relatively recently that GPUs have outperformed CPUs at ray tracing, partly because detecting intersections is difficult without breaking uniform control flow. So yeah, some graph algorithms do work well on GPUs, but add cycles, backtracking, and other common graph-algorithm issues or techniques, and you have something possibly much less suited to GPUs. GPUs might find a small advantage here, but nothing near what other kinds of problems see.
  9. Are you sure? With graph-theoretic algorithms you might need non-uniform control flow, which is a big no-no on GPUs. Others have brushed past this, but it's hecka important: for efficiency's sake, you need every invocation* in a group to have the same control flow, because the group steps through every path that any invocation takes. * An invocation is like a thread, but per work-item (in a graphical setting these may be vertices or pixels; in GPGPU they can be anything).
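A toy cost model makes the divergence penalty visible. This is my simplified sketch of lockstep (SIMT) execution, not a specification of any real GPU:

```python
# Hypothetical cost model: under SIMT lockstep execution, a group of
# invocations pays for every control-flow path that ANY lane takes,
# with non-participating lanes masked off. Real GPUs are far more
# complex, but this captures why non-uniform control flow hurts.

def simt_cost(branches, path_cost):
    # The whole group executes each distinct path taken by any lane.
    return sum(path_cost[p] for p in set(branches))

def per_thread_cost(branches, path_cost):
    # Idealised CPU-style model: each thread pays only for its own
    # path, so the group is bounded by its slowest member.
    return max(path_cost[p] for p in branches)

path_cost = {"near": 2, "far": 10}
uniform = ["far"] * 32                     # every lane takes the same path
divergent = ["near"] * 16 + ["far"] * 16   # the group splits

print(simt_cost(uniform, path_cost))       # 10: no divergence penalty
print(simt_cost(divergent, path_cost))     # 12: both paths are executed
print(per_thread_cost(divergent, path_cost))  # 10
```

With many distinct paths - as backtracking graph algorithms tend to produce - the SIMT total grows with the number of paths, while the per-thread model does not.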
  10. Take a leaf out of Apple's book and don't start from scratch. Start with, say, L4, which is probably the most successful modern microkernel, and build your own services etcetera. If you use L4Linux as a basis, you can possibly treat L4 like an exokernel, having the features of Linux (though running in userspace) alongside a microkernel API allowing access closer to the metal. You might need to build an arcane X11 setup if you go for this option, but it would be nothing compared to starting from scratch on modern hardware, which is the result of five decades of workarounds. I like the idea though. Personally, I'd like to do a huge refactor of Linux to build a microkernel OS, or split L4Linux into multiple services, but that'd be a full-time job.
  11. More properly: Run Kali in a VM, not directly on hardware! It's primarily a set of tools, and not a useful OS!
  12. So, it turns out that at a small scale it's cheaper to go solar, because it requires less engineering. On my favourite electronic component supply website I can get panels at £3/W, whereas the cheapest motors come in at around £5-£10, and have a built-in gearbox which may help or hinder your project. An AC motor, which produces a nicer waveform, costs at least £30. And then you need the actual turbine blades! While you could build a William Kamkwamba-style turbine, it might not win favour with an HOA, so you'll probably end up 3D printing them. Oh, and a housing. These costs add up. Meanwhile, the solar project is already at the power-conditioning stage. Once again solar comes out on top, since we'll likely be using DC-DC conversion on a relatively stable signal: output 5 V to charge a phone, or whatever. On the wind side we'd need much better power conditioning, as DC motors are electrically noisy, and an AC motor won't produce a useful waveform either, since its frequency varies with wind speed. You'd need to do either AC-DC or AC-AC conversion, both of which are more complicated than DC-DC regulation. AC-DC conversion needs a bridge rectifier plus some smoothing circuitry to ensure a nice flat DC signal. AC-AC conversion is even more difficult, because your input signal sits within some range of frequencies and you need to convert it to one specific frequency. For a hobbyist, perhaps you'd choose a motor-generator set - tripling the cost of the motors (at least)! TL;DR: solar is cheaper to build at small scales, if you have access to the internet.
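As a back-of-envelope tally: the prices below marked with the post's figures come from it; everything marked "guess" is a placeholder I've invented, not a quote.

```python
# Rough bill-of-materials comparison for a ~10 W hobby build.
# Only the £3/W panel and £5-£10 motor figures come from the post;
# every "guess" entry is an invented placeholder for illustration.

watts = 10

solar = {
    "panels @ £3/W (post)": 3 * watts,
    "DC-DC converter (guess)": 5,
}

wind = {
    "DC motor + gearbox (post: £5-£10)": 10,
    "3D-printed blades (guess)": 15,
    "housing (guess)": 10,
    "bridge rectifier + smoothing (guess)": 5,
}

print(sum(solar.values()), sum(wind.values()))  # solar vs wind totals, in £
```

Even with generous guesses for the wind parts, the extra mechanical and power-conditioning hardware is what tips the comparison, which is the post's point.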
  13. SPIR-V - the successor to the earlier SPIR formats - is an important (and cross-platform) assembly-like language for GPUs. It allows the front end of the compilation process to be done ahead of time, with a single front end. Vulkan doesn't consume shader source the way OpenGL does; instead, shaders are compiled to SPIR-V, and these shaders will (in theory) work on any platform with Vulkan support, be it a PC with a Radeon VII, a Nintendo Switch with its nVidia Tegra GPU, or a smartphone with a Mali or Adreno GPU. It can even be used when developing for iOS and macOS, although it will have to be converted into MSL using tools like those in MoltenVK.
  14. Cooling this kind of processor too much will just increase the resistivity of the silicon, slowing the critical path (which sets the length of a clock cycle). They just don't put out enough heat for cooling to be the bottleneck.
  15. Have the game and the file owned by a different user than the actual user, and have the game change the permissions of the file. Let's say we have two users:

      user - the actual intended user
      gamer - the owner of certain files

      and these files:

      -rws--x--x gamer gamer 34927 Jul 30 17:50 snake
      -rw------- gamer gamer  1268 Jul 30 17:48 secret

      The snake binary is setuid gamer, so when it runs, it runs as gamer. This means the snake program can change the permissions of secret (perhaps to -rw-r--r--). However, due to permissions, user cannot change the permissions of secret.
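The permission strings above can be reproduced from raw mode bits with Python's standard library, which also makes the setuid bit (`s` in the owner slot) explicit:

```python
import stat

# 0o100000 marks a regular file; 0o4000 is the setuid bit.
# These are the three modes from the example above.
print(stat.filemode(0o104711))  # -rws--x--x : snake, setuid gamer
print(stat.filemode(0o100600))  # -rw------- : secret, before the game runs
print(stat.filemode(0o100644))  # -rw-r--r-- : secret, after snake chmods it
```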