straight_stewie
Member · Posts: 2,705

Everything posted by straight_stewie

  1. One that I've always wanted to do, and came close to doing before a family disaster stopped me from doing my hobbies for a couple of years. Be warned that this one is extremely intensive in higher-order maths. Languages definable by parallel string rewriting are proven to be equivalent to languages definable by sequential string rewriting; in a manner of speaking, this means that a generalized L-System can define any language. The project I want to undertake is two-fold:
     - Prove that it is possible to write a useful programming language wherein parsing happens on all tokens in "parallel" rather than sequentially. From some early experiments I conducted with plain string rewriting, execution is actually fairly easy to parallelize in all cases.
     - Derive general rules for actually running a parallel string rewriting algorithm that can handle both context-free and left-right context-sensitive grammars in parallel. This can speed up processing of large inputs by an incredible amount, and it can generate huge amounts of user-defined patterned and possibly stochastic data quickly (enough to crash a 32GB RAM machine in about 5 seconds using just an i7-6700HQ when doing things in-memory).
     A mini-language for object searching and manipulation could also be created, similar in some ways to LINQ or PLINQ, except that, by the very definition of a parallel string rewriting system, the language would allow for "automatic" parallelization of the searching, creation, and manipulation of a collection of objects. L-Systems and parallel rewriting languages are definitely an open area of research in this regard; most existing research has to do with their usefulness in describing biological growth processes or drawing curves. One way to see whether this area is worth exploring further is to convince yourself that the string rewriting rules used in L-Systems are the same as those you would define for a Chomsky grammar in BNF.
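     A minimal sketch of the core idea in Python (the rules and axiom are just the classic algae L-System, chosen for illustration): because each symbol in a context-free system is rewritten independently of its neighbours, an entire generation can be rewritten in parallel.

         # Parallel, generation-at-a-time string rewriting (a simple 0L-system).
         # RULES and AXIOM are illustrative only (Lindenmayer's algae example).
         from concurrent.futures import ProcessPoolExecutor

         RULES = {"A": "AB", "B": "A"}      # production rules
         AXIOM = "A"                        # starting string

         def rewrite_symbol(symbol):
             # Context-free: each symbol rewrites independently of its neighbours,
             # so every position can be processed in parallel.
             return RULES.get(symbol, symbol)

         def step(s, executor):
             # One "generation": rewrite all symbols at once, keep their order.
             return "".join(executor.map(rewrite_symbol, s, chunksize=4096))

         if __name__ == "__main__":
             s = AXIOM
             with ProcessPoolExecutor() as executor:
                 for _ in range(10):
                     s = step(s, executor)
             print(len(s), s[:40])

     Handling left-right context-sensitive rules would additionally require each worker to see its neighbours' symbols, but the generation-at-a-time structure stays the same.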
  2. One big feature that I've never seen in another (free) kit is test suite scripting. That is to say, if I want to compare multiple runs of the same test to see score trends, I currently have to record the information manually. It would be nice to have a tool that lets me write something like:
         for run in range(10):
             result = run_test()
             result.append_to_file(my_path)
     Or whatever it might look like. This helps in finding relatively high, but still very stable, daily-driver overclocks. Results could include everything from benchmark scores to various machine temperatures.
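     A rough sketch of how such scripting might look in practice (run_benchmark and the result fields here are hypothetical placeholders for whatever the kit would expose):

         # Hypothetical scripting hook: run the same benchmark N times and append
         # each run to a CSV so score/temperature trends can be compared later.
         import csv
         import datetime

         def run_benchmark():
             # Placeholder: a real kit would launch the test and return scores,
             # temperatures, clocks, and so on.
             return {"score": 0, "cpu_temp_c": 0.0}

         with open("benchmark_log.csv", "a", newline="") as f:
             writer = csv.writer(f)
             for run in range(10):
                 result = run_benchmark()
                 writer.writerow([datetime.datetime.now().isoformat(), run,
                                  result["score"], result["cpu_temp_c"]])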
  3. Would you like to have control of your RGB and/or fan speed on an individual or group of fans level?
  4. Windows works the same way; it's just hidden better. Everything is always a file, whether it's presented that way or not. Exposing the "everything is a file" interface to the user can have several benefits. For example, it's significantly easier to write an application that pretends to be a mouse: all you have to do is get hold of the mouse's file and write instructions to it. And that's just one example. This is derived from how things work on silicon: all you can really do is write to an address and read from an address, sometimes with side effects (operations). That's it. That applies whether you are writing to a register, memory, or storage: all you can ever do is write to and read from an address. Your argument against hardcoding things like file paths is really a different argument altogether, one which basically everyone agrees with, with one caveat: they agree with it for application development.
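     As a small illustration of the file interface on Linux (the device path and permissions vary by system, and this reads the legacy mouse device rather than creating a virtual one):

         # Read raw packets from the kernel's legacy mouse device file.
         # /dev/input/mice exposes 3-byte PS/2-style packets: button mask, dx, dy.
         # Requires read permission on the device (typically root or the input group).
         with open("/dev/input/mice", "rb") as mouse:
             for _ in range(10):                    # read a handful of packets
                 buttons, dx, dy = mouse.read(3)
                 # dx/dy are signed 8-bit deltas
                 dx = dx - 256 if dx > 127 else dx
                 dy = dy - 256 if dy > 127 else dy
                 print(f"buttons={buttons:08b} dx={dx} dy={dy}")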
  5. Viewing things at 4k resolution on a sanely sized TV isn't what the switch to 4k was supposed to do. A higher resolution enables more accurate edge detection, which improves video smoothing, image sharpening, and certain parts of HDR. It also enables you to watch a larger TV in a smaller space.
  6. こんばんは ("Good evening") - Mini Steam Giveaway 🎉 Giveaway Items: 6…

     I'd like to be entered into the DreadOut pool. I'm not as active as I once was, but I hope I haven't become a ghost member. Or maybe I have; horror is kinda cool.
  7. I mean, you're not wrong, but... That's kind of not the point of "one-liners".
  8. Voice reproduction is a very active area of research. The "tutorial" you would need to do a good job of this (indistinguishable from real life) essentially amounts to graduate or postgraduate study in the field.
  9. Well, one could argue that the GPL is not sane unless you are in the universe of GNU tools.
  10. The easiest is just Python with gpiozero, which comes baked into Raspberry Pi OS. An Arduino is somewhat better suited for many tasks (including this one), but it is also orders of magnitude more difficult for what you are trying to do. Just to be clear, @Deflowerer is right that a Raspberry Pi running a full-blown OS is overkill for what you are likely trying to do. But he's wrong in the sense that the Arduino route sets you up for failure: it doesn't account for the effort required in spinning up new knowledge. Real-life engineering is about balancing costs with desired end results, and having to learn a new language (and a host of other things that go along with WiFi-based networking while running essentially bare metal) is extremely expensive in terms of time and effort. It takes less than 100 lines of Python to make probably one of the nicest commercial-off-the-shelf toaster interfaces in existence. Since OP is working on a publicly announced project that is part of a long-standing tradition, they can't afford to fail or end up only partially complete. So my vote is to take the easiest way, because it costs you nothing here: use a Raspberry Pi loaded with Raspberry Pi OS, and write your code in Python with the gpiozero library.
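     A hedged sketch of how short the gpiozero version can be (the GPIO pin numbers and the relay-driven heating element are assumptions about the build):

         # Minimal toaster control loop using gpiozero on Raspberry Pi OS.
         # Pin numbers and relay wiring are assumptions for illustration.
         from gpiozero import Button, OutputDevice
         from time import sleep

         start_button = Button(17)       # momentary start button on GPIO 17
         heater = OutputDevice(27)       # relay driving the heating element on GPIO 27
         TOAST_SECONDS = 120             # made-up toast time

         while True:
             start_button.wait_for_press()   # block until the button is pressed
             heater.on()
             sleep(TOAST_SECONDS)
             heater.off()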
  11. If I'm following this correctly, that just prints all numbers b <= n <= a where n % b = 0. In plain English, you are looking for all multiples of b between b and a, inclusive. What you should do is find the largest multiple of b that is less than or equal to a. We can do this with integer division: int largestMultiple = (a / b) * b; Then you would just continually subtract b from largestMultiple, printing as you go, until the value drops below b. This is about as efficient as it gets, because it only requires as many iterations as there are multiples of b such that b <= multiple <= a.
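     The same idea sketched in Python for brevity (a and b are example inputs):

         # Print every multiple of b in [b, a], starting from the largest.
         a, b = 100, 7                    # example inputs

         largest_multiple = (a // b) * b  # largest multiple of b that is <= a
         m = largest_multiple
         while m >= b:                    # one iteration per multiple in range
             print(m)
             m -= b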
  12. Can you share your specific testing methodology? There are many factors that could cause significant differences in results, especially if you only ran the test once each.
  13. Those aren't terrible temperatures, but there's room for improvement. If you find it difficult to understand, you can just leave it alone, and if those are really the peak temperatures, you should be fine for many years still.
  14. There were two different allegations: One alleged that two different products from two different companies (admittedly, owned by the same parent company) sometimes accidentally traded with each other. That allegation admits these trades were accounted for and shown to the public. The other allegation is that someone was also manually engaging in wash sales to bolster the numbers. The second one is a serious crime with serious penalties. The first one is a flimsy allegation, and if it holds up, a whole lot of firms are going to have to do some very serious restructuring (why would it be legal for that to happen in forex, but not crypto?). But that's how these court proceedings work: Prosecutors looking to make an example will charge someone with every single little thing that looks like it might stick. Then the first rounds in court are about the defense getting the majority of those charges thrown out, then we finally get to deal with the real and serious accusations, then the defense will start appealing the decision. That's how it works, every single time.
  15. This one is a stretch, mostly because, by the government's own admission, Coinbase did in fact publish the information about it trading with itself. If various traders or agencies were too dumb to take the volume/liquidity figures net of those disclosures, that's not Coinbase's fault. It's also flimsy because, since Coinbase did disclose that information to the public, the government would have to prove that Coinbase took active steps to make its disclosures difficult to find or read, with the express intent of misleading the market. The rest of the accusations, however, are not nearly as flimsy. But, on the other hand, is this really a case of fair treatment under the law, or is it the beginning of the crypto crackdown we've been waiting on? If the government is to begin accepting cryptocurrencies as valid currencies (a requirement of regulating and taxing them), then it will eventually have to start figuring out how our existing body of law applies, and that process usually involves a crackdown.
  16. Facsimile machines have a place. They can be made to be much more secure than email, and automatically generate paper copies at all ends. I hate most driver assist technologies. If the car can't drive itself, I don't want it taking control.
  17. Then this entire discovery is academically interesting, but a non-issue for users. If someone can get your processor to enter debug mode (especially without unfettered physical access), then there's a much bigger problem somewhere else, and the end user is already compromised. The feds can, and have, installed hypervisors that hide themselves in certain hard drive controllers; that was the major capability of the Equation Group's espionage platforms, or so says Kaspersky.
  18. In my personal opinion, the Grid is one of the most overused elements in WPF. You can build this with 6 StackPanels, which will make it easier to add, remove, or change the game elements, without really changing your ability to control the relative sizes of the components. Here's a graphic to help visualize this. As a tip for building a layout this way, docking the StackPanels to simple window locations is useful, and so are the HorizontalAlignment and VerticalAlignment properties available on the StackPanel element.
  19. Thermal paste is like good sushi: You need to clean your palate with 99% isopropyl alcohol first. /sarcasm
  20. "... cannot hold a match to the most powerful application processors." https://kinvolk.io/blog/2019/11/comparative-benchmark-of-ampere-emag-amd-epyc-and-intel-xeon-for-cloud-native-workloads/ If you look at these benchmarks, the ARM CPU outperforms the x86/64 processors in memory accesses. That is a reasonable and expected result, since these processors exist mostly because AWS commissioned them to help run its database and AI offerings, which could really benefit from a lot of in-memory databases (hence high blocking-I/O performance was the main point of optimization). In the benchmarks that are not blocking-I/O limited, the x86/64 offerings from both companies smoke the ARM processor. Keep in mind that the application spool-up/spool-down and semaphore-acquisition/semaphore-release benchmarks are blocking-I/O limited, which is where we already know the ARM chip excels. It's almost like the ARM chip is designed to do one task really well, while the x86/64 offerings are expected to do every task with high performance. Well, it's not "almost" like that, it's exactly like that: The stated purpose of the Ampere ARM chip was to help AWS improve its offerings for blocking-I/O-bound applications (data-intensive applications like databases and AI). The stated purpose of the x86/64 offerings is to build high-performance servers for basically any task you could come up with. It's like a purpose-built drag car versus a street/strip machine. Yeah, the nitro funny car sure gets down and boogies, for a quarter mile of perfectly paved straight road anyway. But the street/strip machine can help you pick up the groceries on the way home from the track. My point, all along, is that there hasn't yet been an ARM chip designed for the high-end consumer space. We simply don't know how such a processor would perform.
  21. Pretty much everyone agrees that it will take around a decade of concerted effort to switch enough consumer applications to ARM for it to really become attractive. I will concede that we've seen the start of that process already. That's true enough, but my intention was to compare top-of-the-line offerings to top-of-the-line offerings. ARM will always have a leg up in the low-power space; that's where very nearly 100% of its development effort (both from the brand and from the individual suppliers) has been directed since the introduction of the brand. At the moment, it's a little difficult to compare a high-end chip to ARM, because no one has made a competitive high-end ARM chip yet, which is my point. Any word on when Apple intends to release an ARM chip for their workstations? My personal suspicion is that if Apple thought ARM was ready for that purpose, the rumor mill would be churning.
  22. What? Have you actually looked at real benchmark scores? The most common M1 chips have only around a quarter of the single- or multithreaded performance, in most non-synthetic workloads, that the big boys (i7s and Threadrippers) have. Just wait a bit on that one. The big limitation for x86/64 chips is particle physics. The big limitation for ARM is that no one has tried big, fast ARM chips before. Let's just see what happens with the benchmarks over the next decade. Apple's ARM chips have HUGE ground to cover to catch up to current x86/64 offerings.
  23. I've been living in Mississippi for a little over a decade now. I am aware of what happens when your city doesn't have any snow plows and it snows almost 2 feet overnight. Within the next day, all of the retail parking lots were cleared, and the major roads had been packed down enough to be passable by virtually any 4+ wheeled vehicle, whether 2WD or 4WD. I'm also aware of what living in an area with excellent snow management is like. Even with top-notch snow management, you still have to drive on ice, packed snow, and sometimes even fresh snow.
  24. Just to set the stage for what I'm about to say, I grew up in Wisconsin but I've been living in Mississippi for about 10 years now. We got the same snow storm that Texas did, mostly to the same extent. It really wasn't that bad. People in the South would just rather use snow as an excuse to not do anything than learn how to drive in it. Literally. The snow, overall, stayed on the ground for only a week (the bulk of it stayed for only three days), and yet every store was empty, suppliers didn't deliver to retailers, and people wouldn't come to work. But I drove a front-wheel-drive Ford Escape to work every day during the storm, without issue. Work absentee problems during winter weather in the South are the result of laziness and poor driving ability, and nothing else. The power outages in Texas were pretty bad. From the research I've seen, there are really two reasons for that: common failures that happen during ice storms (downed lines), but more significantly, Texas relies very heavily on wind (and to a lesser extent solar), and those power sources simply don't work in storms. Common sense would tell you that if an airplane can't take off with ice on its propeller, then a wind turbine won't work well with ice on its blades (which are basically a propeller) either. This issue has been so highly politicized that I don't imagine any reasonable discussion can be had on it, but just think about it for a minute: Does a fan work when covered in ice?
  25. Do not, under any circumstances, cheap out on the Power Supply. A cheap or underpowered power supply can cause myriad issues that are difficult to diagnose as being a weak or bad power supply. If there is any component you should splurge on, it should be the power supply. Do cheap out on a case, on memory, on SSD/HDD, or on a GPU. Those parts are easily replaceable, and won't cause problems with other components if they are unreliable or have a failure. The power supply, on the other hand, is central to everything in a computer, and a bad or poor quality one will really wreak havoc on everything else.