
Unimportant

Member
  • Posts

    1,230
  • Joined

  • Last visited

Reputation Activity

  1. Agree
    Unimportant got a reaction from Hackentosher in 3 Port 45W USB-C Charger (Formarly: pcie USB card.)   
    A 'scope with the bandwidth to make eye diagrams at PCIe frequencies (to check signal integrity for your layout) costs as much as a nice car... 
  2. Agree
    Unimportant reacted to Kilrah in 3 Port 45W USB-C Charger (Formarly: pcie USB card.)   
    How could you not buy one but have no problem sourcing all the components for making your own?
     
    You'll also need to count on a couple of months actually learning how to design high-frequency PCBs...
     
    If you're not already equipped it'll cost much more than $500 by the time you buy 5 sets of components to cover for mistakes, get all the tools needed to actually populate the board and debug why it doesn't work, reorder PCBs once or twice after making corrections...
  3. Agree
    Unimportant got a reaction from Minimalist Manta Ray in What is the maximum current input on an Arduino?   
    That's somewhat of an overly broad blanket statement. MOSFETs have a gate capacitance which must be charged each time you want to turn them on and discharged each time you want to turn them off. Depending on the MOSFET in question, this gate capacitance can be significant and require large currents to switch the MOSFET on and off quickly. Gate drivers are used to mitigate this problem.
     
    So you can't just choose any MOSFET. You need one with a low enough gate capacitance so that the arduino can drive it fast enough for PWM.
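    To put a rough number on it, here is a minimal sketch of the underlying arithmetic. The gate charge and switching time below are assumed example values (typical for a power MOSFET), not figures from the original post:

    ```cpp
    #include <iostream>

    // Average current needed to (dis)charge a gate charge qg (coulombs)
    // within a switching time t (seconds): I = Qg / t.
    double requiredGateCurrent(double qg, double t) {
        return qg / t;
    }

    int main() {
        const double qg = 25e-9;  // assumed total gate charge: 25 nC
        const double t  = 100e-9; // assumed desired switching time: 100 ns
        std::cout << requiredGateCurrent(qg, t) << " A\n"; // 0.25 A average
        // An Arduino pin can source roughly 20-40 mA, an order of magnitude
        // short, which is why a gate driver (or a MOSFET with much lower
        // gate charge) is needed for fast PWM.
        return 0;
    }
    ```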
  4. Agree
    Unimportant got a reaction from AMD A10-9600P in Anyone know why AMD's old laptop CPUs are so inefficient?   
    And let's not forget, if that laptop has some age that battery's health isn't getting any better.
  5. Agree
    Unimportant got a reaction from Hackentosher in Anyone got any idea how to factory reset this thing?   
    The top horizontal IC (ULN2003) is a transistor array. Can't read the bottom one, but seeing it goes to some connector pads it's probably some kind of transceiver. The vertical chip in between is the microcontroller, likely with integrated EEPROM data memory.
  6. Funny
    Unimportant got a reaction from shadow_ray in First programming launguage   
    No, "Dat Guy".
  7. Funny
    Unimportant got a reaction from shadow_ray in First programming launguage   
    He's brainf*cking us
  8. Agree
    Unimportant got a reaction from Sauron in Several C++ questions. (Pointers & 2D arrays)   
    No, in C++ a struct and a class are the same, except that struct members and inheritance are public by default, whereas they are private by default for a class.
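    A minimal illustration of that default-access difference (hypothetical types, just for demonstration):

    ```cpp
    #include <cassert>

    struct S { int x = 1; };  // members public by default

    class C {
        int x = 1;            // private by default
    public:
        int get() const { return x; }
    };

    int main() {
        S s;
        assert(s.x == 1);     // fine: x is public in a struct
        C c;
        // c.x = 2;           // error: x is private in a class
        assert(c.get() == 1); // access only through the public interface
        return 0;
    }
    ```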
     
    Correct. As per the given example you can include the header of the forward declared class in the CPP file and then you can use it completely.
     
    The dumb pointer simply refers to an object that is owned elsewhere, so it can be accessed, but the owner is responsible for its deletion. In the given smart pointer example the memory is owned by std::unique_ptr pThing in function SomeFunction. A dumb pointer to the managed object is passed to function DoSomething so that DoSomething can access the object, but the ownership and responsibility remain with pThing in SomeFunction. If DoSomething throws, the stack will be unwound until someone catches the exception. It's not being caught in DoSomething, so that function will exit. It's not being caught in SomeFunction either, so that function will exit as well, but because pThing is an automatic variable in SomeFunction it will go out of scope once that function exits and its destructor will be called, which will delete the managed memory.
     
    You can't even do that. If DoSomething throws the stack will be unwound until the exception is caught. SomeFunction will exit immediately and execution will resume at some lower level where the exception is caught.
     
    A default constructed std::unique_ptr is empty. You can manually reset it, and it has an overload of operator bool to check whether it's currently managing something:

    std::unique_ptr<Thing> pThing; //default constructed unique_ptr contains nullptr by default

    if (pThing) //operator bool
    {
        //will not execute, pThing contains nullptr
    }

    pThing.reset(new Thing); //Create new Thing and have pThing manage it (don't do this, use std::make_unique)

    if (pThing) //operator bool
    {
        //will execute, pThing does not contain nullptr
    }

    pThing.reset(); //Delete the managed object, pThing empty again.
                    //(Note: release() would hand back the raw pointer *without* deleting the object.)

    HOWEVER, the point of the story was that smart pointers are only to be used when you need to manage dynamically allocated memory. In your case that doesn't seem to be needed. If your array of Tiles has a size that is known at compile time, just make it a std::array (or your own Array2D that uses std::array under the hood). If the size is unknown at compile time, then use std::vector (or your own Vector2D that uses std::vector under the hood).
     
    A C style array decays to a pointer:

    void UseIntArray(int* array, int size)
    {
        for (int i = 0; i < size; ++i)
        {
            array[i] = 0; //do something...
        }
    }

    int main()
    {
        int array[10];
        UseIntArray(array, 10);
        return 0;
    }
    C++ actually supports references as a separate concept and those are preferred:

    void UseIntVector(std::vector<int>& vec)
    {
        for (auto& elem : vec)
        {
            //Use elem
        }
    }

    int main()
    {
        std::vector<int> vec;
        //populate vector...
        UseIntVector(vec);
        return 0;
    }

    But you could use pointers if you wanted to:

    void UseIntVector(std::vector<int>* pVec)
    {
        for (auto& elem : *pVec)
        {
            //Use elem
        }
    }

    int main()
    {
        std::vector<int> vec;
        //populate vector...
        UseIntVector(&vec);
        return 0;
    }

    A std::array is somewhat trickier, as its size is part of the template. So you'd either have to write your function to accept only std::arrays of a fixed size, or the function itself has to be a template:

    template <std::size_t size>
    void UseIntArray(std::array<int, size>& array)
    {
        for (auto& elem : array)
        {
            //Use elem...
        }
    }

    int main()
    {
        std::array<int, 100> array;
        UseIntArray(array); //Template deduction passes along size automatically.
        return 0;
    }
    It's not recommended, as it still is not guaranteed to be a contiguous block of memory.
     
    Many of your technical problems are artificial, stemming from poor overall design, so I'd focus my efforts there. For example, your "map" should probably be a class in its own right, not a plain 2D array of Tiles. Then the "map" object manages its implementation details internally and other objects ask the map to perform tasks. This keeps "map"'s implementation details restricted to the "map" class and stops them from being scattered all around the code as they are now.
     
    In simple terms: the "Human" object asks a "Dog" object to walk. The "Human" object doesn't go and manipulate the dog's legs directly.
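    A minimal sketch of what that could look like; the class and member names here are hypothetical, not from the original code:

    ```cpp
    #include <cassert>
    #include <vector>

    struct Tile { int terrain = 0; }; // hypothetical Tile payload

    // The map owns and hides its 2D storage; other code asks the map to do
    // things instead of poking a raw array.
    class Map {
    public:
        Map(int rows, int cols)
            : mRows(rows), mCols(cols), mTiles(rows * cols) {}

        // Row-major indexing into flat storage is an internal detail.
        Tile&       At(int row, int col)       { return mTiles[row * mCols + col]; }
        const Tile& At(int row, int col) const { return mTiles[row * mCols + col]; }

        int Rows() const { return mRows; }
        int Cols() const { return mCols; }

    private:
        int mRows, mCols;
        std::vector<Tile> mTiles; // storage can change later without touching callers
    };

    int main() {
        Map map(4, 8);
        map.At(2, 5).terrain = 7;
        assert(map.At(2, 5).terrain == 7);
        return 0;
    }
    ```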
     
  9. Funny
    Unimportant got a reaction from straight_stewie in First programming launguage   
    No, "Dat Guy".
  10. Like
    Unimportant reacted to Dat Guy in First programming launguage   
    Not important.
  11. Agree
    Unimportant got a reaction from shadow_ray in Several C++ questions. (Pointers & 2D arrays)   
  12. Informative
    Unimportant got a reaction from straight_stewie in Several C++ questions. (Pointers & 2D arrays)   
    You seem to be compiling with no optimizations?
     
    Compiling without optimizations in gcc:

    g++ -std=c++1z ltt_test.cpp

    Indeed, vector is slower, which is expected for un-optimized code:

    Vector: 64576298
    Array: 46003023

    However, once you enable optimizations:

    g++ -std=c++1z -O2 ltt_test.cpp

    We get:

    Vector: 49
    Array: 340

    Godbolt's compiler explorer shows that the compiler simply throws out the loops completely. Either because it is smart enough to pre-compute the result, or because your code invokes undefined behavior (signed int overflow is undefined), so it's a broken program anyway and the compiler can have its way with it.
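    One common (if blunt) way to keep an optimizer from deleting a benchmark loop is to write the result to a volatile object the compiler must treat as observable. A small sketch, using an unsigned accumulator so the overflow is at least well defined (unlike the signed overflow in the original benchmark):

    ```cpp
    #include <cstdint>
    #include <iostream>

    volatile std::uint64_t sink; // volatile: the compiler must assume writes are observable

    // Sum of i*i for i in [0, n). Unsigned arithmetic wraps instead of
    // invoking undefined behavior.
    std::uint64_t sumOfSquares(std::uint64_t n) {
        std::uint64_t sum = 0;
        for (std::uint64_t i = 0; i < n; ++i) {
            sum += i * i;
        }
        return sum;
    }

    int main() {
        // Storing into the volatile sink forces the computation to actually
        // happen, so -O2 cannot throw the whole loop away.
        sink = sumOfSquares(1000000);
        std::cout << sink << '\n';
        return 0;
    }
    ```

    Benchmark libraries offer more precise tools for this, but a volatile sink is enough to make the unoptimized-vs-optimized comparison meaningful.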
  13. Agree
    Unimportant got a reaction from Sauron in Several C++ questions. (Pointers & 2D arrays)   
    @fpo Back behind a real keyboard, here goes...
     
    1: Forward declaring
    First, understand the problem.
    When your project is compiled, only the cpp files are entry points for the compiler. Each cpp file is compiled separately, without knowledge of the other cpp files. The contents of included files are copied verbatim into the position of the include statement.
    Thus, if we have some files:
    Thing.h:

    #ifndef __THING_H__ //These 2 lines are called...
    #define __THING_H__ //...include guard

    #include "Tile.h"

    class Thing
    {
        Tile mTile; //Thing contains a Tile instance.
    };

    #endif //__THING_H__

    Tile.h:

    #ifndef __TILE_H__
    #define __TILE_H__ //Include guard...

    #include "Thing.h"

    class Tile
    {
        Thing mThing; //Tile contains a Thing instance...
    };

    #endif //__TILE_H__

    Main.cpp:

    #include "Thing.h"

    int main()
    {
        Thing aThing; //Create Thing instance...
        return 0;
    }

    The entry point for compilation is Main.cpp, and the preprocessor will expand this into:
    class Tile
    {
        Thing mThing; //Tile contains a Thing instance...
    };

    class Thing
    {
        Tile mTile; //Thing contains a Tile instance.
    };

    int main()
    {
        Thing aThing; //Create Thing instance...
        return 0;
    }

    This is literally what the compiler will see after the preprocessor is done. Note that:
    • If there were no include guards, this would've led to endless recursion: Thing.h includes Tile.h, which includes Thing.h, which includes Tile.h, and so on forever.
    • class Thing can see the definition of class Tile, but class Tile can't see the definition of class Thing. Thus this will fail to compile, as it's impossible for Tile to hold an instance of the unknown class Thing.

    Forward declaring can help solve this problem because it allows introducing a new name for a class to be defined later. However, this means there are limitations to what you can do with a forward declared class:

    class Thing; //Forward declaration...

    class Tile
    {
        Thing mThing;   //Illegal, you can't have an instance of a forward declared class, the compiler can't possibly know the storage requirements.
        Thing* mpThing; //Ok, pointers to forward declared classes are fine. Pointers have a fixed size, so the compiler knows the storage requirements.
        Thing& mrThing; //Ok, references are pointers under the hood.
    };

    Thing SomeFunction(Thing aThing);            //Ok, function declaration which uses the forward declared type as parameter and/or return type.
    Thing AnotherFunction(Thing aThing) {}       //Illegal, function definition with forward declared types as parameter and/or return type not allowed.
    Thing& YetAnotherFunction(Thing* apThing) {} //Ok, function definition with pointers or references to the forward declared type as parameter
                                                 //and/or return type allowed, but without using its members.

    Typically, one would forward declare a required class in the header and include the full definition in the code file, where there are no conflicts:
     
    Thing.h
    #ifndef __THING_H__
    #define __THING_H__

    class Tile; //Forward declare Tile...

    class Thing
    {
    public:
        Thing(Tile* aTile);
    private:
        Tile* mpTile; //Hold a pointer, which is allowed...
    };

    #endif //__THING_H__

    Thing.cpp:

    #include "Thing.h"
    #include "Tile.h" //You can include the full definition of Tile here without problem...

    Thing::Thing(Tile* aTile) : mpTile(aTile)
    {}

    2: Smart Pointers
    The purpose of smart pointers is to act as a resource handle for memory. Whenever memory is allocated dynamically it has to be freed at some point. By tying the allocation and freeing of memory to an automatic object on the stack this is handled automatically. This is an implementation of RAII - Resource Acquisition Is Initialization. 
    //Dumb pointer - bad code, don't do this!
    void DoSomething(Thing* apThing)
    {
        if (/*something*/) { throw SomeException; }
        //use apThing...
    }

    void DoSomethingElse(Thing* apThing)
    {
        //use apThing...
        delete apThing; //Take ownership and delete the memory...
    }

    void SomeFunction()
    {
        Thing* pThing = new Thing; //Manual "old style" allocation.
        DoSomething(pThing);       //If function DoSomething throws an exception, the delete pThing line will never be reached and memory is leaked.
        DoSomethingElse(pThing);   //Code poorly documents itself. Function DoSomethingElse actually takes over ownership of the pointer and deletes it.
        pThing->SomeMember();      //Calling member function on deleted Thing :(
        delete pThing;             //Deleting already deleted memory again is undefined behavior.
    }

    //Smart pointer
    void DoSomething(Thing* apThing)
    {
        if (/*something*/) { throw SomeException; }
        //use apThing...
    }

    void DoSomethingElse(std::unique_ptr<Thing> apThing)
    {
        //use apThing...
        //unique_ptr deletes the managed object automatically when it goes out of scope.
    }

    void SomeFunction()
    {
        auto pThing = std::make_unique<Thing>();
        DoSomething(pThing.get());          //Function DoSomething takes a plain dumb pointer to a Thing. This makes it clear it only wants to use
                                            //the pointer and does not take ownership. If the function throws an exception, pThing will automatically
                                            //be deleted when it goes out of scope.
        DoSomethingElse(pThing);            //Compile error. Calling by value would copy pThing, after which there would be 2 owners for the same
                                            //memory. std::unique_ptr is truly unique and cannot be copied, which is exactly what you want here.
        DoSomethingElse(std::move(pThing)); //Ok, we move pThing, thereby explicitly handing over ownership to function DoSomethingElse.
                                            //No need to manually delete.
    }

    Note that plain dumb pointers are still fine for referring to some object without taking responsibility/ownership. STL containers like std::vector are another example of a resource handle.
     
    3: (2D) arrays
    One of the reasons we don't really like C style arrays in C++ is that they carry too little information by themselves to be really useful, and if one has to manually supply that information, you open the way for errors. For example:

    void PopulateArray(int* array, int size)
    {
        //...
    }

    void PrintArray(int* array, int size)
    {
        //...
    }

    int main()
    {
        auto constexpr size = 100;
        int myArray[size];
        PopulateArray(myArray, size); //Manually passing along size each time is cumbersome...
        PrintArray(myArray, 101);     //Uh oh!
        return 0;
    }

    An array name decays into a pointer that, after being passed to a function, carries no information about the size of the array. We'd have to pass that information along manually, which is cumbersome and error prone. If we'd used a std::vector instead, for example, all such problems go away, because std::vector knows its own size.
     
    2D arrays are even more problematic. Tile** does not mean "pointer to 2D array of Tile" like most beginners seem to think. It actually means "array of pointers, each pointing to an array of Tiles", and no-one is guaranteeing it's even square, or that it's a contiguous block of memory for that matter.
     
    Worse still, if you ever need to pass such a thing to a function that needs to modify the pointer itself you become a 3-star programmer (Tile***). Any serious programmer will simply stop looking at your code at that point.
     
    One of the powers of C++ is that it allows building your own abstractions to hide the complexity. A 2D array can be stored in a plain 1D array of rows * columns size. Write yourself a 2D array class that handles all this complexity for you:
    template <class T, int rows, int cols>
    class Array2D
    {
    public:
        const T& Get(int row, int col) const { return mArray[row * cols + col]; }
        T&       Get(int row, int col)       { return mArray[row * cols + col]; }
    private:
        std::array<T, rows * cols> mArray;
    };

    Which could be used as such:

    Array2D<int, 5, 10> array;    //2D int array, 5 rows, 10 columns
    array.Get(1, 6) = 101;        //Assign 101 to the item at row 1, column 6
    std::cout << array.Get(1, 6); //Print the item at row 1, column 6
  14. Agree
    Unimportant got a reaction from Sauron in VS2019 - Auto convert macro to constexpr results in slower code?   
    What is supposed to be constexpr about this? It's just a plain non-const int called g_lerp_enabled.
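    For contrast, a minimal sketch of what an actual constexpr replacement for a macro looks like (the variable name is taken from the post; the macro name is hypothetical):

    ```cpp
    #include <cassert>

    // #define LERP_ENABLED 1          // old macro style (hypothetical name)
    constexpr int g_lerp_enabled = 1;  // constexpr: implicitly const, usable at compile time

    static_assert(g_lerp_enabled == 1, "value is known at compile time");

    int main() {
        // A plain `int g_lerp_enabled = 1;` (what the auto-convert apparently
        // produced) is a mutable global that the compiler may have to reload
        // from memory on every use; a constexpr constant can be folded away.
        assert(g_lerp_enabled == 1);
        return 0;
    }
    ```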
  15. Agree
    Unimportant got a reaction from DevBlox in VS2019 - Auto convert macro to constexpr results in slower code?   
    What is supposed to be constexpr about this? It's just a plain non-const int called g_lerp_enabled.
  16. Agree
    Unimportant got a reaction from Franck in VS2019 - Auto convert macro to constexpr results in slower code?   
    What is supposed to be constexpr about this? It's just a plain non-const int called g_lerp_enabled.
  17. Agree
    Unimportant got a reaction from Spotty in I cannot figure out how many watts this psu has   
    It simply does not provide enough information on the sticker to tell.
     
    Power equals voltage times current, thus it says:
    79.2W on the 3.3V rail
    110W on the 5V rail
    192W on the 12V I/O rail
    216W on the 12V CPU rail

    However, before you go off thinking this is a 600W PSU: most power supplies don't support maximum loading on all rails at the same time. There's usually some "combined maximum". The sticker does not say anything, so it's anyone's guess.
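    The arithmetic can be sketched as follows. The rail currents below are back-calculated from the wattages above (I = P / V) and are assumptions for illustration; the sticker itself lists volts and amps per rail:

    ```cpp
    #include <iostream>
    #include <vector>

    // P = V * I for a single rail.
    double railWatts(double volts, double amps) { return volts * amps; }

    int main() {
        struct Rail { const char* name; double volts; double amps; };
        const std::vector<Rail> rails = {
            {"3.3V",     3.3, 24.0}, // 79.2 W
            {"5V",       5.0, 22.0}, // 110 W
            {"12V I/O", 12.0, 16.0}, // 192 W
            {"12V CPU", 12.0, 18.0}, // 216 W
        };

        double total = 0.0;
        for (const auto& r : rails) {
            const double w = railWatts(r.volts, r.amps);
            total += w;
            std::cout << r.name << ": " << w << " W\n";
        }
        // The naive sum is ~597 W, but the real combined maximum is almost
        // always lower than the sum of the per-rail maxima.
        std::cout << "Sum of rail maxima: " << total << " W\n";
        return 0;
    }
    ```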
  18. Agree
    Unimportant got a reaction from WillLTT in [C++] write everything that exists in list into a file.   
    You might want to add error handling though; a quick glimpse at the other code shows it does not check whether opening files succeeded before reading/writing, etc...
  19. Like
    Unimportant got a reaction from WillLTT in [C++] write everything that exists in list into a file.   
    Something like this? (enable C++17, std::optional needs it)
    #include <iostream>
    #include <fstream>
    #include <vector>
    #include <string>
    #include <iterator>
    #include <algorithm>
    #include <optional>

    bool SaveStringVectorToFile(const std::vector<std::string>& v, const std::string& fileName)
    {
        auto oFile = std::ofstream(fileName);
        if (!oFile.is_open())
        {
            return false;
        }
        std::copy(v.begin(), v.end(), std::ostream_iterator<std::string>(oFile, "\n"));
        return oFile ? true : false;
    }

    std::optional<std::vector<std::string>> LoadStringVectorFromFile(const std::string& fileName)
    {
        auto iFile = std::ifstream(fileName);
        if (!iFile.is_open())
        {
            return std::nullopt;
        }
        const auto v = std::vector<std::string>(std::istream_iterator<std::string>(iFile), std::istream_iterator<std::string>());
        return iFile.bad() ? std::nullopt : std::make_optional(v);
    }

    int main()
    {
        //Save existing vector to file...
        const auto learned_ = std::vector<std::string>{"Hello", "This", "Is", "A", "List"};
        if (!SaveStringVectorToFile(learned_, "Test.txt"))
        {
            std::cout << "Failed to save to file!\n";
            return 1;
        }

        //Load new vector from file...
        const auto fromFile = LoadStringVectorFromFile("Test.txt");
        if (!fromFile)
        {
            std::cout << "Failed to load from file!\n";
            return 2;
        }

        //Print loaded contents...
        std::cout << "Loaded from file: \n";
        for (const auto& str : *fromFile)
        {
            std::cout << str << '\n';
        }
        return 0;
    }
  20. Like
    Unimportant got a reaction from Lenovich in Question for collectors or owners of an old PC and other tech related things.   
    I'd say it depends on the type of software you're interested in. For example, if you're interested in gaming, possible eras could be:

    • ISA era: you need ISA slots for Sound Blaster AWE/Gravis Ultrasound cards, which are required to get the incredible soundtracks in some games of old.
    • 386+ era: able to address more than 1MB of RAM through protected mode; many DOS games of that era simply don't run on sub-386 systems.
    • 3dfx Glide era: 3dfx had its own API, called Glide. Many games of that era were hardcoded to Glide and don't run on anything else (besides software rendering). So hardware that can support a Voodoo/Voodoo 2 card could be grouped together (PCI 2.1 slots, Windows 95/98 driver support, ...). 
  21. Agree
    Unimportant got a reaction from bob345 in Recommendation for oscilloscope   
    Before you buy anything, understand that non-sinusoidal signals are composed of multiple sinusoidal signals in superposition (harmonics). Harmonics always sit at multiples of the base signal frequency (the fundamental). In short, you cannot accurately view a 10MHz square wave on a 50MHz scope, for example. So, depending on the signals you'll be working with, you might need more bandwidth than you'd naively think.
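    This is easy to see numerically. The sketch below reconstructs one sample of a square wave from its Fourier series, keeping only the harmonics a given bandwidth passes (an idealized brick-wall filter, which a real scope is not):

    ```cpp
    #include <cmath>
    #include <iostream>

    // Fourier series of a unit square wave contains only odd harmonics:
    // s(t) = (4/pi) * sum over odd k of sin(2*pi*k*f0*t) / k.
    // Keep only harmonics at or below the given bandwidth.
    double squareWavePartial(double f0, double t, double bandwidth) {
        const double pi = 3.141592653589793;
        double s = 0.0;
        for (int k = 1; k * f0 <= bandwidth; k += 2) {
            s += std::sin(2.0 * pi * k * f0 * t) / k;
        }
        return 4.0 / pi * s;
    }

    int main() {
        const double f0 = 10e6;             // 10 MHz fundamental
        const double t  = 1.0 / (4.0 * f0); // quarter period; the ideal square wave is +1 here
        // A 50 MHz limit passes only the 10/30/50 MHz components (k = 1, 3, 5):
        std::cout << squareWavePartial(f0, t, 50e6) << '\n'; // ~1.10, visibly distorted
        // With far more bandwidth the sample converges towards the true value 1.0:
        std::cout << squareWavePartial(f0, t, 1e9) << '\n';
        return 0;
    }
    ```

    With only three harmonics the reconstructed edge rounds off and the flat top ripples, which is exactly what an underspecified scope shows you.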
  22. Agree
    Unimportant reacted to hui in I RUINED His Gaming Rig   
    #hireOllysMum
  23. Like
    Unimportant got a reaction from straight_stewie in Can anyone explain this? (why is the i 7 slower)   
    That's the most likely option. The fact it took so much longer on the second system and it had to be aborted supports the view that it missed the 100000 mark. 
  24. Like
    Unimportant reacted to wasab in Can anyone explain this? (why is the i 7 slower)   
    That's because you weren't playing intensive multiplayer games or streaming 4k movies when it happened. 
  25. Agree
    Unimportant reacted to straight_stewie in Can anyone explain this? (why is the i 7 slower)   
    First, try this:
    // change:
    dim i as single
    // to:
    dim i as integer

    as @mariushm pointed out.
     
    Floating Point performance can be drastically different between different models of processors, even when those processors are in the same generation.
     
    Additionally, it's just bad practice to use a floating point number as a sentinel value in a loop. We like to think that floating point numbers are able to represent every real number, but they most certainly are not. As a result, a number like 100,000 (floating point) may actually end up being a number like 100,000.0002 or 99,999.9998, and you could logically miss your comparison against 100,000 (integer). There are also issues with comparing floating point types to integer types in the hardware of the processor itself, which could cause you to miss a comparison between a floating point number and an integer when they should actually be the same. Beyond that, floating point operations are just damned slow: modern consumer-oriented Intel processors only offer 30-40 billion floating point operations per second, and somewhere around 1 trillion non-floating-point operations per second.
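    The missed-sentinel effect is easy to demonstrate. In the sketch below, ten additions of 0.1f do not land exactly on 1.0f, because 0.1 has no exact binary representation:

    ```cpp
    #include <iostream>

    // Accumulate 0.1f ten times; the rounded result is close to,
    // but not exactly, 1.0f.
    float addTenTenths() {
        float f = 0.0f;
        for (int i = 0; i < 10; ++i) {
            f += 0.1f;
        }
        return f;
    }

    int main() {
        const float f = addTenTenths();
        std::cout << (f == 1.0f ? "equal" : "not equal") << '\n';
        // A loop like `for (float f = 0.0f; f != 1.0f; f += 0.1f)` would
        // therefore sail right past its sentinel and keep running.
        return 0;
    }
    ```

    The same effect applies to the VBA `single` counter in the thread: comparing an accumulated float against an integer bound is unreliable.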

    If i were an integer, and you still have the same performance decrease, then I would venture to guess that it's a configuration problem, as single threaded non-floating-point performance hasn't increased as much over the years as people like to think it has:
    • Are both machines running the same version of VBA?
    • Are both machines running the same version of Excel?
    • Are both machines running in the same performance/power mode?
    • Are both machines using the same version of Windows?
    • Do both machines have enough memory to not cause a bottleneck?