
Fourthdwarf

Reputation Activity

  1. Informative
    Fourthdwarf reacted to Nayr438 in Worried about moving to linux with corsair hardware.   
    The Corsair Commander Pro is supported by the Linux Kernel 5.9+.
    The fans can be controlled with:
     
    Fans only:
      • fancontrol, part of lm_sensors (Terminal)
    Fans and LEDs:
      • CoolerControl (GUI)
      • liquidctl (Terminal)
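    For the fancontrol route, a minimal /etc/fancontrol sketch might look like the following. The hwmon index and sensor names are placeholders; run pwmconfig to generate the real values for your hardware.

```
# /etc/fancontrol (sketch; hwmon2/pwm1/temp1 are placeholders)
INTERVAL=10
FCTEMPS=hwmon2/pwm1=hwmon2/temp1_input
FCFANS=hwmon2/pwm1=hwmon2/fan1_input
MINTEMP=hwmon2/pwm1=40
MAXTEMP=hwmon2/pwm1=70
MINSTART=hwmon2/pwm1=150
MINSTOP=hwmon2/pwm1=60
```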
  2. Agree
    Fourthdwarf reacted to Sauron in Changing the installation path of "apt" in Ubuntu 18.04   
    The short answer is that you can't. The long answer is that you can but it's a huge pain and not worth the effort, but if you really want to try here's how: https://askubuntu.com/a/441725
     
    However, the actual binaries often don't take up a lot of space; you can offload a lot of data from your main partition by mounting your other drive as your $HOME.
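    For example, a single line in /etc/fstab will mount the second drive at /home on every boot. This is a sketch: the UUID is a placeholder (find yours with blkid) and the filesystem type may differ; copy your existing home directory over before switching.

```
# /etc/fstab (sketch; replace the UUID with your second drive's)
UUID=<uuid-of-second-drive>  /home  ext4  defaults  0  2
```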
  3. Agree
    Fourthdwarf reacted to theviztastic in Goodbye Anthony, Welcome Emily!   
    trans people are not a hive mind. In this context, mentioning her previous name seems completely relevant and justified. I'm sure if it's a problem it can be sorted; there's no need to get offended on behalf of someone you don't know. Especially since there are actual hateful comments on the video, it seems silly to get upset with someone being positive about the whole thing.
  4. Agree
    Fourthdwarf reacted to nanopone in What is the linux distro with the best touch ui   
    you're not really looking for a distro that's touch optimised, you're more looking for a desktop environment which is touch optimised. in your case, it would probably be gnome. ubuntu ships with gnome, but i would not recommend that you use ubuntu as the snap ecosystem of software is still extremely poor. fedora ships with gnome and is very user friendly and leaves gnome mostly stock. it's also recommended by the gnome project itself
  5. Agree
    Fourthdwarf reacted to RONOTHAN## in What is the linux distro with the best touch ui   
    So most Linux programs aren't really touch optimized, especially since you will need to go into the terminal every once in a while. Some are definitely better than others, but don't expect perfection with any that you go with. 
     
    The most touch optimized out of any of them will be anything Gnome based, with Fedora and Ubuntu being a couple of big names that ship it. Gnome is probably the only desktop environment that puts some thought into touch-only input: there are gestures for the on-screen keyboard and the application switcher, and everything has big, easy-to-hit icons. It's still by no means perfect, but it's probably about as good as you're gonna get. 
     
    I do believe there is Ubuntu Touch as well, but that's more designed for phones and I'm not even sure if there's an x86 build. Might be worth looking into, but don't expect to find much. 
  6. Agree
    Fourthdwarf reacted to Takumidesh in Learning Programming - I'm lost   
    Why? You can work at one of the millions of companies that are craving developers and will actually give you work life balance and not expect you to drink the kool-aid.
     
    Speak for yourself; working at a startup sounds horrible to me.
    If you really want to work at FAANG, you need to actually learn computer science. You should study data structures and algorithms, OOP, Discrete Mathematics, C, Big O notation and how to effectively use it.
     
    You should probably spend a little bit of time with low level stuff like x86-64 ASM (though you can get by with just reading about it) as well as concepts like memory management (and garbage collectors for managed languages).
     
    Things that will be helpful for you are understanding how and why computers work: floating-point numbers, arrays, pointers (including OS-level things like instruction pointers), call stacks, etc.
     
    If you want to find something that will teach you all of that, learn C and then C++. This will get you used to working closer to the hardware, and C++ will get you into the world of OOP.
     
     
    After you are comfortable with that, learn about code deployment, CI/CD, how to work with others in a team (git or other version control systems), building for target architectures other than your dev environment, etc.
     
    OR:
     
    Instead of all that just learn .NET core and go work at a local insurance company. 😄
     
     
     
    If you really, really don't want to work somewhere 'normal', then you can try to do web dev for big tech companies. For that you need to learn the big 3 (HTML, CSS, JS) as well as a few of the big JS frameworks (React, Knockout, Angular, etc.).
     
     
    So much depends on what you want to do though. Google has nearly 20k developers, all working in different teams with different skill sets and specialties.
     
    It sounds like you are younger and still learning, and that is great! But for now, in all reality, I would a) manage expectations and b) just continue practicing. Think about what you want to build with software and read about what you would need to accomplish to build it. If you want to make a movie tracking app, read about databases and web frameworks. If you want to make a video game, learn about graphics APIs and system calls. If you want to make a robot, learn calculus and PLCs.
     
     
  7. Like
    Fourthdwarf got a reaction from Eigenvektor in Algorithms   
    You need to be able to build algorithms using well known strategies, and know when it's appropriate to use them. Strategies like:
      • Greedy algorithms
      • Memoization
      • Dynamic programming
      • Monte Carlo methods
      • Divide and conquer
      • Exhaustive search
  8. Like
    Fourthdwarf got a reaction from Wictorian in Algorithms   
    You need to be able to build algorithms using well known strategies, and know when it's appropriate to use them. Strategies like:
      • Greedy algorithms
      • Memoization
      • Dynamic programming
      • Monte Carlo methods
      • Divide and conquer
      • Exhaustive search
  9. Agree
    Fourthdwarf reacted to FFY00 in Creating a Compiler   
    Just use LLVM.
  10. Agree
    Fourthdwarf got a reaction from PiPi636 in mac or windows for coding   
    Apple have basically abandoned industry standard graphics APIs and gone with their own thing. Unless you specifically want to target macOS, you'll likely find more and better tools on other machines.
     
    This isn't going to be that important for basic Unity stuff, but if you end up going into deep graphics magic, you may find Vulkan/DX12 better supported at the bleeding edge than Metal, and if you're developing an indie game, the first two will let you reach a wider audience.
     
    But unless you're doing some really experimental stuff, that shouldn't matter too much, and you could just go with an older version of OpenGL. In that case, going with macOS, since you're familiar with it, may serve you better.
  11. Agree
    Fourthdwarf reacted to nox_ in Linux distros don't support 4k wtf ?   
    Agree on that. He frequently does this, not just in threads he creates but in those created by others; he always states things as if they were facts, even after you show him that he is wrong.
  12. Informative
    Fourthdwarf reacted to mathijs727 in Programming - GPU intensive things?   
    RTX does not use octrees, it uses Bounding Volume Hierarchies (BVH) which have been the most popular acceleration structure in ray tracing for years. For simple scenes the BVH is a tree hence ray traversal = tree traversal. However when instancing comes into play a BVH node can have multiple parents so it turns into a DAG structure.
     
    Also, GPUs have been outperforming (similarly priced) CPUs for years so I wouldn't call it something recent (before RTX GPUs were already much faster).
     
    Ray traversal also requires backtracking (most commonly using a traversal stack), so that's not an argument. The only real difference between ray tracing and maybe some other graph traversal applications is the amount of computation that has to be done at each visited node (ray / bounding box intersections in the case of ray tracing). And graph traversal itself isn't that branch heavy either. You basically have the same operation (visiting a node) repeated in a while loop. Sure, selecting the next child node contains some branches, but those are one-liners. For example, in the case of ray tracing: if the left child is closer, then push the right child to the stack first; otherwise push the left child first. Computing which child is closest (and whether it is hit at all) is computationally intensive and not very branch heavy. A bigger issue with ray tracing is the lack of memory coherency, which reduces the practical memory bandwidth on the GPU (having to load a cache line for each thread + the ith thread not always accessing the i*4th byte in a cache line).
     
    Nvidia themselves also promote their GPUs as being much faster at graph analysis than CPUs:
    https://devblogs.nvidia.com/gpus-graph-predictive-analytics/
  13. Agree
    Fourthdwarf got a reaction from Nettly_ in Kali Linux WiFi not working :(   
    More properly: Run Kali in a VM, not directly on hardware! It's primarily a set of tools, and not a useful OS!
  14. Agree
    Fourthdwarf got a reaction from Totalschaden1997 in Kali Linux WiFi not working :(   
    More properly: Run Kali in a VM, not directly on hardware! It's primarily a set of tools, and not a useful OS!
  15. Like
    Fourthdwarf got a reaction from bmichaels556 in Renewable Energy... Where Do I Start? :)   
    So, it turns out that at a small scale, it's cheaper to go solar, because it requires less engineering.
     
    On my favourite electronic component supply website I can get panels at £3/W, whereas the cheapest motors come in at around £5-£10, and have a built in gearbox which may help or hinder your project. And an AC motor, which produces a nicer waveform, costs at least £30.
     
    And then, you need the actual turbine blades! While you could build a William Kamkwamba style turbine, it might not win favour with a HOA. You'll possibly end up 3D printing it. Oh, and a housing. These costs add up.
     
    Meanwhile, on the solar project, we're already at the power conditioning stage. Once again, solar comes out on top, since we'll likely be using DC-DC conversion, with a relatively stable signal. Output 5v to charge a phone, or whatever.
     
    On the wind side, we'd need much better power conditioning, as DC motors are electrically noisy, and an AC motor won't produce a directly useful waveform either. You'd need to do either AC-DC conversion or AC-AC conversion, both of which are more complicated than DC-DC regulation.
     
    AC-DC conversion needs a bridge rectifier, and some smoothing circuitry to ensure a nice flat DC signal.
     
    AC-AC conversion is even more difficult. This is because you have your input signal, which is within some range of frequencies, and then you need to convert it into a specific frequency. For a hobbyist, perhaps you'd choose a motor-generator set - tripling the cost of the motors (at least)!
     
    TL;DR: Solar is cheaper to build at small scales, if you have access to the internet.
  16. Informative
    Fourthdwarf reacted to ShredBird in Basic must-have ICs   
    I would highly recommend getting "The Art of Electronics" by Horowitz and Hill.  This book covers how to build very practical analog circuits, and it identifies plenty of off-the-shelf parts that are very well rounded.

    As for op-amps, the cheapest go-to amp for basic use is the LM741.  If you need something a bit better, go for an LF411.  If you need something precise but still inexpensive, go for an OP27.
  17. Informative
    Fourthdwarf got a reaction from Candysandwich99 in Raspberry Pi Model Confusion   
    There's some confusion in this thread. Let's look at the models in chronological order:
     
    Model B: the first version manufactured. 700 MHz single-core ARMv6, with 256 MB of RAM, 2 USB ports, an SD card slot and an Ethernet port. $35
     
    Model B rev. 2: 700 MHz, with 512 MB RAM. Same dual USB + Ethernet. $35
     
    Model A: 700 MHz, 256 MB, one USB, no Ethernet. $25
     
    Model B+: new circuit board with rounded corners and 4 USB ports. Same "form factor" as the Pi 2/Pi 3. Uses a microSD slot. $35
     
    Compute Module: 700 MHz, 512 MB RAM, 4 GB flash. SODIMM form factor. No SD card slot. $30
     
    Pi 2: quad-core 900 MHz ARMv7 processor, 1 GB of RAM. $35
     
    Pi Zero: mini HDMI, microSD and micro USB. Overclocked 1 GHz single-core processor. 512 MB RAM. $5
     
    Pi 3: 1.2 GHz quad core. WiFi & Bluetooth. 1 GB RAM. $35
     
    Pi Compute Module 3: 1.2 GHz quad core, SODIMM. 1 GB RAM, 4 GB flash. $35
     
    Pi Compute Module 3 Lite: 1.2 GHz quad core, 1 GB RAM, microSD slot. $25
     
    Custom Raspberry Pi: Pi 1/2/3/Compute Module with options for touchscreen, more RAM, wireless & PoE. Minimum order quantity 1000, price on application.
  18. Like
    Fourthdwarf got a reaction from robertpartridge in Why is Linux considered to be safer then Windows??   
    There is a lot of misinformation in this thread, but here is a rundown of why linux seems to be safer:
     
    - Anyone can fix it. If you can get past a reasonably competent Linux user, there's a good chance you've used some security flaw in the system. If this becomes known, it'll usually get patched on Linux within a few days. Attacks are more targeted as a result: if you can't use a vulnerability for more than a few days, you need to know where you're going to attack, otherwise you are wasting an opportunity as an attacker.
     
    - Package management keeps libraries up to date. On Windows, software will usually ship with its own copy of needed libraries ('dlls'). As a result, if a vulnerability is found in SecureLibrary.dll, each program using it will need to update it. Package managers (and Steam, iirc), which are used to install software, also install shared libraries and keep them up to date. This means that as soon as the update is available, every program on the system uses the new library.
     
    - Package management helps prevent man-in-the-middle attacks. On Windows, you have to go on the web and download a .exe file. If someone replaces the site and serves you their own .exe, it could be loaded with all sorts of nasties. Package managers, assuming you have a clean operating system to start with, usually have a verification system, only allowing packages signed by certain keys. For example, on Arch Linux, package maintainers have to sign packages with their key, which has to be itself signed by at least 3 of 6 master keys. As a result, it is very difficult to replace their package with yours, and just as difficult to add your key to their system.
     
    - An awful lot of Linux stuff is replaceable. A destructive virus or ransomware doesn't really make sense on Linux, as the system is often an embedded device that can be switched out easily, or a server with good backups. Either way, the system can be replaced or repaired in a matter of hours, certainly quicker than most ransomware gives you to pay up. As a result, ransomware tends to go for Windows machines.
     
    - The users tend to be more competent. (Almost) no linux user would believe the 'microsoft technical support' scams, so nobody bothers. Most likely a larger proportion of linux users use ad blockers, or javascript blockers, or other browser addons/features that help prevent malicious adverts and sites from running. And now that docker is a thing, more and more people put untrusted code into containers, just in case it does something bad.
  19. Like
    Fourthdwarf got a reaction from Kaminishi in Why is Linux considered to be safer then Windows??   
    There is a lot of misinformation in this thread, but here is a rundown of why linux seems to be safer:
     
    - Anyone can fix it. If you can get past a reasonably competent Linux user, there's a good chance you've used some security flaw in the system. If this becomes known, it'll usually get patched on Linux within a few days. Attacks are more targeted as a result: if you can't use a vulnerability for more than a few days, you need to know where you're going to attack, otherwise you are wasting an opportunity as an attacker.
     
    - Package management keeps libraries up to date. On Windows, software will usually ship with its own copy of needed libraries ('dlls'). As a result, if a vulnerability is found in SecureLibrary.dll, each program using it will need to update it. Package managers (and Steam, iirc), which are used to install software, also install shared libraries and keep them up to date. This means that as soon as the update is available, every program on the system uses the new library.
     
    - Package management helps prevent man-in-the-middle attacks. On Windows, you have to go on the web and download a .exe file. If someone replaces the site and serves you their own .exe, it could be loaded with all sorts of nasties. Package managers, assuming you have a clean operating system to start with, usually have a verification system, only allowing packages signed by certain keys. For example, on Arch Linux, package maintainers have to sign packages with their key, which has to be itself signed by at least 3 of 6 master keys. As a result, it is very difficult to replace their package with yours, and just as difficult to add your key to their system.
     
    - An awful lot of Linux stuff is replaceable. A destructive virus or ransomware doesn't really make sense on Linux, as the system is often an embedded device that can be switched out easily, or a server with good backups. Either way, the system can be replaced or repaired in a matter of hours, certainly quicker than most ransomware gives you to pay up. As a result, ransomware tends to go for Windows machines.
     
    - The users tend to be more competent. (Almost) no linux user would believe the 'microsoft technical support' scams, so nobody bothers. Most likely a larger proportion of linux users use ad blockers, or javascript blockers, or other browser addons/features that help prevent malicious adverts and sites from running. And now that docker is a thing, more and more people put untrusted code into containers, just in case it does something bad.
  20. Like
    Fourthdwarf got a reaction from V3t3r4n in A minicomputer for a begginer   
    If you just want to program, any computer you already have will do; there's no need for an Arduino or Pi. But since you mentioned Arduino, it makes me think you might also be interested in electronics, in which case an Arduino is fantastic: you can get clones extremely cheap (less than a fiver for a Nano clone), and it needs only a mini-B cable and whatever components you plan on using for your circuit.
     
    The Arduino can easily communicate over serial if you then want internet connectivity or more computing power than the uC can provide.
  21. Agree
    Fourthdwarf reacted to kasugatei in A minicomputer for a begginer   
    +1 for the full-sized one. With the Zero you will have to spend a lot more time connecting all of the dongles, and you'll need lots of adapters. Just buy a regular RPi and that's it. Zeros are more intended for installations where you have limited space.
  22. Like
    Fourthdwarf reacted to Ronda in Portable Linux   
    You can write the installer ISO to another flash drive, boot from that and then do a full, proper install on the 128GB model as you would on an internal drive.
  23. Funny
    Fourthdwarf reacted to BurningSmile in What Command Line Text Editor Do You Use Most?   
    alias nano='vim'
    alias vi='vim'
    alias emacs='vim'
  24. Informative
    Fourthdwarf got a reaction from ContraHacker in Webpage Security and DBMS   
    It's not one password across multiple sites: it's one login.
     
    Basically, in this scenario, you are a relying party (RP). The user chooses an identity provider (IdP), which is the only site they log in to.
     
    Ideally, they will choose an IdP that is more secure than you could provide.
     
    The IdP then sends you a certificate with all the necessary information to verify the identity of the end user.
     
    The user only ever logs into the IdP, but you get a certificate of identity that allows you to associate data with that person. The certificate is specific to your website.
     
    Because the user only logs into the IdP, there is only one password.
     
    This means that if a site that uses OpenID gets hacked and the certificates are released, users can't log in to other sites, even those associated with the same OpenID, because each certificate is unique to the IdP/RP/end-user combination.
  25. Agree
    Fourthdwarf got a reaction from ContraHacker in Webpage Security and DBMS   
    OAuth keys aren't susceptible to dictionary attacks or password reuse, whereas passwords are.
     
    After that, it's all about the authentication process, rather than storing the information.
     
    - You let the user decide who they trust to authenticate them. For all they know you might be harvesting passwords. (Not that I'm saying you are)
     
    - You let someone else handle the authentication infrastructure. 2 factor authentication isn't often provided on smaller websites, because it's a pain to set up. Also, who can handle logins more securely: you or google?
     
    - You spend less time setting up accounts. No need to collect an email address and password (and confirm password). Not strictly security related, but it's a bonus.