
igormp

Member
  • Content Count

    1,113
  • Joined

  • Last visited

Awards

This user doesn't have any awards

4 Followers

About igormp

  • Title
    Veteran

Contact Methods

Profile Information

  • Location
    Brazil
  • Gender
    Male
  • Interests
    Embedded systems, computer architecture, machine learning
  • Occupation
    Data Engineer

System

  • CPU
    Ryzen 3700x
  • Motherboard
    AsRock B550 Steel Legend
  • RAM
    2x32GB Corsair LPX 3200MHz
  • GPU
    Gigabyte 2060 Super Windforce
  • Case
    Sun Ultra
  • Storage
    WD Black SN750 500GB + XPG S11 Pro 512GB + random 2.5" SSDs and HDDs
  • PSU
    Corsair CX650m
  • Display(s)
    LG UK6520PSA, 43" 4k60
  • Cooling
    Deepcool Gammax 400 V2
  • Operating System
    Arch Linux

Recent Profile Visitors

1,218 profile views
  1. It is, though: https://www.qt.io/qt-for-python. Qt has bindings for many languages (see the PySide6 sketch after this list). Anyway, the library I mentioned in my first post should hopefully work.
  2. Eh, that makes things a bit more complicated, but this should work: https://github.com/wjwwood/serial. I haven't tested it, though. On Linux it's as easy as reading the serial device as a regular text file, or using pyserial with Python instead of C++ (see the pyserial sketch after this list).
  3. GCP (Google) also has a free tier with no time limit; I use it for some simple bots. Keep in mind that all of those are Linux machines, so you may have issues with your ".bat file" idea.
  4. Yeah, sure thing, at least on Linux. Not sure how it works on Windows, but I can't see why it wouldn't work either. You could try simply running the CUDA detection example while the AMD card is the primary GPU, to check that everything works as it should (see the CUDA check sketch after this list).
  5. Usually anything above ~3000 MHz is considered overclocking; you'll just need to enable the XMP profile to get the rated speed. It's usually really simple, but it depends on what CPU and motherboard you'll be using. The same applies to 3200 MHz, although it's way easier to reach that speed with most setups.
  6. Have you tried another cable, or tested that cable somewhere else to see if it reaches 1 Gbit? If so, then maybe your motherboard has a problem, or the router itself is borked.
  7. I can just press the "Play" button (Ctrl+Alt+N) and it does run. Right-clicking the file in the left panel doesn't give me any option like the "Run Python File in Terminal" one you mentioned, but that may be because I don't have Anaconda installed.
  8. How long will that server be used? If you're planning on using it for more than roughly 500 hours per year, it might be more worthwhile to run it locally instead of on AWS (see the break-even sketch after this list).
  9. LGTM. Maybe more RAM, but you can add another 4x32 GB later on anyway.
  10. No. That's an asshole move IMO. Colab instances are meant to let people without resources study and try stuff out for free; doing that is basically abusing the system. If you want to do it for the greater good, then pay for it. GCE is just a barebones cloud VM. You can use Colab as a front end to any Jupyter server instead of relying on their free machine, even one running on your own PC.
  11. Probably because your CPU is good enough to handle 1080p with software decoding anyway. Even my old FX-6300 or the i5-4210U in my laptop can do so.
  12. It's not really related to how they were uploaded, but to how YouTube handled them. But yeah, there's nothing you can do about it.
  13. Firefox only recently added support for it on Linux. You can build Chromium with VA-API support for hardware decoding, but I prefer to use regular Chrome anyway.
  14. According to https://videocardz.com/newz/nvidia-updates-nvdec-video-decoding-and-nvenc-encoding-matrixes-for-ampere-gpus, Pascal only supports 8K decode with H.265 4:2:0 and 8-bit VP9; anything different from that will fall back to software decoding.
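
The PySide6 sketch referenced in item 1, assuming the PySide6 package (Qt for Python) is installed; it only shows that the bindings work and can open a window, nothing more:

    # Minimal Qt for Python example; requires `pip install PySide6`.
    import sys
    from PySide6.QtWidgets import QApplication, QLabel

    app = QApplication(sys.argv)
    label = QLabel("Hello from Qt for Python")  # a single label as the whole UI
    label.show()
    sys.exit(app.exec())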
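The pyserial sketch referenced in item 2, assuming a hypothetical device at /dev/ttyUSB0 running at 9600 baud; adjust the port and baud rate for the actual hardware:

    # Read one line from a serial device with pyserial (`pip install pyserial`).
    import serial

    with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1) as port:
        line = port.readline()  # one newline-terminated response, as bytes
        print(line.decode(errors="replace").strip())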
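The CUDA check sketch referenced in item 4. This uses PyTorch rather than the CUDA SDK's deviceQuery sample (an assumption on my part); either way, the NVIDIA card should be listed even while the AMD card drives the display:

    # List the CUDA-capable GPUs visible to PyTorch.
    import torch

    print("CUDA available:", torch.cuda.is_available())
    for i in range(torch.cuda.device_count()):
        print(i, torch.cuda.get_device_name(i))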
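The break-even sketch referenced in item 8, with made-up prices; plug in the real instance rate and local hardware cost before drawing any conclusion:

    # Rough break-even point between renting and buying (all numbers are assumptions).
    aws_hourly = 0.50        # USD per hour for the cloud instance
    local_upfront = 1500.0   # USD up front for comparable local hardware
    local_hourly = 0.05      # USD per hour for power etc. on the local box

    break_even_hours = local_upfront / (aws_hourly - local_hourly)
    print(f"Local hardware pays off after ~{break_even_hours:.0f} hours of use")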