This user doesn't have any awards


About RedRound2

  • Birthday December 22

Contact Methods

  • Steam
  • Xbox Live

Profile Information

  • Interests
    Technology, Gadgets


  • CPU
    Intel Core i7-3770K @ 4.1 GHz
  • Motherboard
    Asus P8Z77-V PRO
  • RAM
    Corsair Vengeance 16GB @ 1600 MHz
  • GPU
    Sapphire Radeon HD 7950 3GB OC
  • Case
    Cooler Master HAF-XM
  • Storage
    Corsair Force GS 240GB SSD, 1TB WD Black
  • PSU
    Corsair HX850
  • Display(s)
    Dell S2240T 21.5" Multitouch Screen
  • Cooling
    Cooler Master Hyper 212 EVO
  • Keyboard
    Logitech G110
  • Mouse
    Logitech G602 Wireless Mouse
  • Sound
    Razer Kraken Headphones
  • Operating System
    Windows 10

Recent Profile Visitors

2,613 profile views
  1. I did mean logging the camera feed, and I'm not talking about Sentry Mode, which indeed uses an external storage medium. I'm not sure if this is still true, but Tesla does log all video footage and sends it to themselves to improve their self-driving NN. Otherwise I can't really think of any possible reason why the entire drive needs to be rewritten every day.
  2. Yeah, but the autopilot feed from all 8 cameras is probably what causes there to be 1.5 write cycles a day. They could maybe use a more efficient format, and increase storage by moving to SSDs, which these days have really good write endurance and are much faster to boot. Or they could use a separate, easily swappable storage medium for the autopilot data, since that is an extremely heavy task running on something that is supposed to be a mainstream product.
  3. They can't seriously argue their 5-6 year claim when the MCU issues prevent the core functionalities mentioned above. In a way, they shot themselves in the foot by shifting everything to the MCU but then not taking extra steps and adding redundancies to make sure that said MCU is as reliable as the ECUs found in most if not all cars today. Swapping the eMMC for a reliable SSD looks to be the solution here, which I'm sure they have in their newer cars, but nonetheless they should've known or had the foresight for this.
  4. I am really not sure why you seem unable to make any connection whatsoever. I can eat an orange, but feel nauseous when I eat grapes. I can travel by train, but feel nauseous when I fly. Does that mean grapes can fly? Or that a train is an orange? Oh, but wait, it's a wrong comparison, because I cannot compare an edible thing with travel? It's not the same; the airline never intended to sell me oranges. I feel nauseous when I fly, like when I eat grapes. This is literally what your argument is: make up random connections…
  5. This is a statement I agree with; I have no issues whatsoever with this argument. I originally replied to the guy who wrote paragraphs about how Apple achieved what they did with software optimizations. As you said now and I said before, things like battery life, snappiness, and big.LITTLE core management can all be attributed to OS optimizations that do indeed make a significant difference in performance. My point is that in hard brute-force tasks, the effect of those optimizations is minimized, given that the OS is mature and has certain basics in place like task priority…
  6. Good god. In what context does "like" mean "the exact same"? I assume I don't have to quote the definition of the word "like". I drew a comparison, and by no means was it a perfect comparison, but it was enough to get the point across, and you yourself don't disagree with the example I gave, so what's the issue? You made an incorrect conclusion. Just deal with it and move on.
  7. Both of you are saying the same thing, yet you either missed or ignored the roughly 5% performance overhead I mentioned earlier that may come from the degree of optimization between each OS (quoted myself below). Yet we don't see a 5% difference in anything here; it's much more than that, which says something about the hardware, not the optimizations. Even if Apple had full control of the hardware stack, they could only make things significantly faster by either using space-age technology in silicon (it's a joke, don't jump on me for this) or by using dedicated accelerators, which doesn't seem to be the case so far.
  8. No, you didn't. I made a clear distinction and drew parallels by bringing up an example: Final Cut used hardware acceleration, Premiere did not. That was the whole point. Whatever else you think I did is exactly what you think I said, and nothing more. So are we to assume that Windows hasn't been optimized for Intel? That Intel Macs were severely nuked in optimization on macOS (apart from thermals, which they did address in later generations and got under control)? Yeah, Windows didn't know how to use CCXs initially, but it's pretty stupid to assume that Intel chips have so much more potential…
  9. Again, you never really understood what I was trying to say. Final Cut on Intel Macs was fast because it made use of Quick Sync; Premiere didn't at the time. Topaz Labs makes use of some Intel accelerator, while on the Mac it didn't make use of the NPU in the M1. Do you get the parallels I was drawing? Sorry that I didn't have an example where the same software was nuked on one OS while working well on the other. Literally nothing productive came out of this conversation, since you just assumed random things rather than asking me if I meant it was similar to comparing it t…
  10. This whole paragraph below goes to great lengths on how software optimizations somehow made things faster on the M1 machine, which is far from true: "Which is completely true. This is exactly how Apple is squeezing so much performance out of their processors (M1 included) in the first place; that tight integration between software and hardware goes a long way toward boosting Apple's position here. Intel's chips, running plain old Windows, don't get to benefit from that level of integration, which does hurt their position when making direct comparisons. As those who have been playing a…"
  11. Both Intel and the M1 have dedicated accelerators for certain applications, and I agree that is an important metric when choosing a platform; that was never the issue. But when Intel decides to use software that specifically took advantage of an accelerator on their processor and did not make use of the NPU in the M1, that's where the issue arises (they could've either omitted it, or found software that made use of both). Then going on screaming about how they achieved some insane performance is disingenuous. Can you give some context on the things you talk about? You don't even rea…
  12. Did you not read my earlier comment? I said that comparing Premiere Pro and Final Cut is an example of a bad comparison, where the former beats the latter just due to its better optimizations. I was drawing parallels to why the Topaz Labs AI test is a bad example, since it makes use of a dedicated hardware accelerator on Intel's chip but doesn't make use of the Neural Engine on the M1. That is Intel being disingenuous at best. Ideally, to showcase their product, they should have chosen benchmarks that force the chips to brute-force the solution and show the raw speed. Anyway…
  13. I still don't get how any suite of benchmarks that Intel ran on the M1 was blessed by software optimization in macOS. A purely mathematical calculation cannot be sped up by the operating system. Where you could argue about OS optimization is something like the real-world speed tests some YouTubers run on phones: booting the phone, opening different applications, closing them, keeping them in memory, etc., where an iPhone with a measly 4/6GB of RAM ends up being better than an 8/12GB Android phone. I don't see how running a PDF extractor, or calculating data in Excel, en…
  14. What? Final Cut runs a lot faster on the M1 compared to last-gen Intel MacBooks, so I don't get your point. What I said was that comparing Final Cut on the Mac and Premiere Pro on Windows is the kind of test that introduces variability from software optimizations, similar to what Intel did in one of their tests. That's not a comparison you use to portray the superiority of a chip, and it's Intel being disingenuous at best. Can you give me a source for this? Because all the articles on Linux running on the M1 talk about the non-standard components of the Mac and the lack of any…
  15. They see Apple as a huge threat. Literally, the new CEO in his first meeting told others that they have to come up with something better than anything "the lifestyle company in Cupertino can produce". Except no one really ever compared something like Final Cut Pro and Adobe Premiere, where the former plays into Apple's hardware/software integration advantage while the latter doesn't. What we've always done with the M1 is compare apples to apples: benchmark scores between CPUs, which don't really have anything to do with their operating system or any integration. It's just a bas…
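The eMMC write-endurance argument in the first few posts can be sanity-checked with rough arithmetic. This is a minimal sketch under assumed numbers; the ~3,000 P/E cycle rating and the write rates below are illustrative figures, not confirmed Tesla specifics (only the "1.5 write cycles a day" comes from the discussion above):

```python
# Rough flash endurance estimate: how long storage lasts if the whole
# drive is rewritten some number of times per day.
# ASSUMPTIONS: ~3,000 rated P/E cycles (typical for consumer MLC eMMC)
# and ideal wear leveling spreading writes evenly across all cells.

def lifespan_years(rated_pe_cycles: float, full_writes_per_day: float) -> float:
    """Years until the rated program/erase cycles are exhausted."""
    return rated_pe_cycles / full_writes_per_day / 365.0

# 1.5 full-drive writes per day, as claimed in the posts above:
emmc = lifespan_years(3000, 1.5)   # ~5.5 years
# A larger SSD holding the same log data fills far less of itself daily:
ssd = lifespan_years(3000, 0.1)    # ~82 years

print(f"eMMC at 1.5 drive writes/day: ~{emmc:.1f} years")
print(f"SSD at 0.1 drive writes/day:  ~{ssd:.1f} years")
```

Notably, the ~5.5-year result under these assumed numbers lines up with the 5-6 year claim disputed in post 3, and it shows why simply moving to a higher-capacity SSD stretches the lifespan: endurance scales inversely with how much of the drive is rewritten per day.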