Profile Information

  • Location
    Brasília, Brazil
  • Interests
    Anything that can give an electric shock and/or can be programmed.

  1. Correct, but not due to the underlying microarchitecture. Exactly: microarchitectures may support different combinations of instruction sets and still share a single assembly language, even though not every instruction is supported by all of them. It is a clear example of what I said. This is the point I still haven't managed to get across. No, they don't. They just need to map their registers and instructions. The only scenario where it is absolutely necessary to create a new assembly language is when your architecture doesn't share the same computational model we are accu
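The "just map their registers and instructions" idea above can be sketched as a toy assembler: the same assembly text targets two different encodings purely by swapping lookup tables. The architecture names, mnemonics, and opcodes below are invented for illustration, not real ISAs:

```python
# Toy example: one assembly syntax, two hypothetical target encodings.
# Only the mapping tables differ; the language itself stays the same.
ENCODINGS = {
    "archA": {"mov": 0x01, "add": 0x02},
    "archB": {"mov": 0x90, "add": 0x91},  # same mnemonics, different opcodes
}
REGISTERS = {
    "archA": {"r0": 0, "r1": 1},
    "archB": {"r0": 8, "r1": 9},
}

def assemble(line: str, target: str) -> bytes:
    """Assemble 'op dst, src' into [opcode, dst, src] for the chosen target."""
    op, args = line.split(maxsplit=1)
    dst, src = [a.strip() for a in args.split(",")]
    regs = REGISTERS[target]
    return bytes([ENCODINGS[target][op], regs[dst], regs[src]])
```

The same source line, e.g. `add r0, r1`, reassembles to different machine code per target without the language itself changing.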
  2. There are communist parties all over the world and they are just as bad... I'm in favor of banning them all.
  3. Are you sure your professor said that? Because this is incorrect. I think one of you misunderstood something, or maybe the nomenclature changed over time. There are different assembly languages (GAS, NASM, MASM, Intel, ...), but not one for each architecture. Instruction changes do not create a new assembly language by themselves. Not really. Some languages can map assembly instructions onto new hardware instructions, requiring the assembly code to be reassembled (just like you can recompile C code for different architectures). NASM also has higher-level pseudo instruction
  4. This is how you end up with alternative standards/certifications that mean basically the same thing.
  5. Oh, that's pretty cool. I looked into the map thing and it seems like it is explicitly mapped onto hardware (not necessarily onto individual drives, though). I guess that's for efficiency in data locality and network traffic, but it kind of defeats the purpose of what I'd imagined. Using torrent you could have an almost random distribution of sparsely used data copies, plus local copies of frequently used data, instead of specifying where to replicate them. It would also depend on how many copies you make to ensure there won't be any data loss. I'm not into it, so I
  6. Interesting. I'm not into server stuff, but data-centric reliability, rebuilding data and scalability just reminded me of torrent. Couldn't it be used to replicate data blocks across a multitude of disks? It could also be used to maintain a distributed map of the blocks, so there's no single point of failure. Resilvering would simply be asking to download a set of specific blocks (which doesn't necessarily need to be tracked, as the number of peers with the blocks can also be used as a heuristic on how many additional copies are required to maintain a certain level of confidence the
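The "peer count as a heuristic for additional copies" idea above can be made concrete: if each copy is assumed to fail independently with probability p per period, all n copies are lost with probability p^n, so the replica count needed for a target confidence is a small log computation. A minimal sketch (the independence model and the numbers are illustrative assumptions, not from the post):

```python
import math

def required_replicas(p_fail: float, confidence: float) -> int:
    """Smallest n such that 1 - p_fail**n >= confidence,
    i.e. the chance that at least one copy survives the period."""
    return math.ceil(math.log(1 - confidence) / math.log(p_fail))

def additional_copies(current_peers: int, p_fail: float, confidence: float) -> int:
    """Heuristic from the post: treat the observed peer count as the
    current replica count and top it up to the required level."""
    return max(0, required_replicas(p_fail, confidence) - current_peers)
```

For example, with a 5% per-period failure probability and a "six nines" survival target, five replicas suffice, so a block seen on only three peers would request two more copies.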
  7. Exactly. Buggy firmware, drivers and delayed Windows updates. It's the exact opposite of Google's Pixel line.
  8. I'm on a 1st-gen Ryzen (1800X and B350). I've already finished most of the research that required that continuous use, the USD->BRL exchange rate is really unfavorable right now, there are supply shortages, etc. I'm waiting for a Threadripper or 5950X successor with DDR5 before thinking of upgrading. If prices remain insane, I'm probably moving development to the cloud and getting a console for gaming...
  9. Yup, usually 30~40GB batches (compressed). My 24GB of RAM is usually 85~90% utilized by the simulations, which are also limited by the core count, which is limited by the mobo. Everything may change when the next-gen Threadripper is released with DDR5 by 2022.
  10. I don't use it for cache, just for writing data in parallel in random patterns. That's the worst-case scenario for HDDs. The last time I used an SSD cache (64GB) for data was in a laptop (1TB 5400RPM), and it was quite fast for office workloads. Cooling was complete garbage on that Samsung ultrabook, so I never bothered trying to game on it.
  11. It would make sense for more modern games, but for retro stuff? Using it as a 2nd-level cache for the HDD would make more sense, but a main-memory cache should suffice for retro games. Sounds pretty fun for development, though. I usually write a few TBs per day when processing stuff and have a sacrificial SSD reserved for that.
  12. As storage? But this memory is volatile. Your emulator could still use it as cache, but the latency is worse than main memory, so it wouldn't make much sense.
  13. How about something like this? It's unoptimized for didactic purposes. You can completely remove the branches, labels and final addi's if you make a few changes. Inputs X and Y should use no more than 31 bits. If you want bigger operands, you will need to deal with the resulting 64-bit number split across the HI and LO registers. Or you can use the floating point unit... which removes all the fun... I remember when a professor asked us to implement the FFT and IFFT using only integers. It was a pain to implement the complex math (imaginary+r
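The 31-bit limit and the HI/LO split mentioned above can be illustrated outside MIPS: multiplying two 31-bit operands by shift-and-add yields up to a 62-bit product, which MIPS-style hardware splits across HI (upper 32 bits) and LO (lower 32 bits). A Python sketch of that idea (illustrative only, not the code the post referred to):

```python
def shift_add_mult(x: int, y: int) -> tuple[int, int]:
    """Multiply two non-negative 31-bit integers by shift-and-add,
    returning the product split MIPS-style into (HI, LO) 32-bit halves."""
    assert 0 <= x < (1 << 31) and 0 <= y < (1 << 31)
    product = 0
    while y:
        if y & 1:            # lowest bit of the multiplier set: add multiplicand
            product += x
        x <<= 1              # shift the multiplicand left one position
        y >>= 1              # consume one bit of the multiplier
    hi = (product >> 32) & 0xFFFFFFFF
    lo = product & 0xFFFFFFFF
    return hi, lo
```

As long as both inputs fit in 31 bits, the product fits in HI:LO with no overflow, which is why the post caps the operand width.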
  14. Are they seriously going to continue this madness? God, there must be a way to do magnetic recording/reading without requiring moving parts close to the disk, and preferably one that could be done in parallel.