
Unimportant

Member
  • Posts: 1,230
  • Joined
  • Last visited

Reputation Activity

  1. Like
    Unimportant got a reaction from PAEz in C++ | Can't understand Polymorphism   
    Forgive me for thinking these were 2 completely different things:
    ((Entity*)&hero)->set_attack(5);
    Entity(hero).set_attack(5);
    I know better now. Although my compiler produces 2 completely different results. I guess I should compile without the -pedantic flag.
  2. Informative
    Unimportant reacted to straight_stewie in RAM speed in programming   
    There are a few problems with the die shot you posted: that was Ivy Bridge and, if you look closely, you can see that much of the area taken up by the L3 cache is not actually memory but memory controllers. That's because L3 cache, at least in Intel chips, usually has a relatively high number of access channels (usually four per section and four sections, so 16 access channels). RAM doesn't dedicate nearly that much space to memory controllers.


     
    This is a Kaby Lake chip. Much of what you see labelled "L3$" is actually called the "Side Cache" and consists of 64 MB of memory. By my math, the "Side Cache" takes roughly 14.8 mm² for 64 MB. That's 222 mm² per gigabyte, which is roughly 1/3 as dense as modern DRAM offerings, which yield roughly 1 gigabyte per 70 mm².

    So I guess the conclusion of my analysis is that, at least in terms of die usage, SRAM is competitive with DRAM. Even more so when you factor in the performance improvement.

    However, I can't find a reasonable cost analysis, because SRAM is in very low demand so its cost is inflated. I'm not an economist, so any price adjustments I could come up with to make an apples to apples comparison on cost would be complete nonsense.
  3. Agree
    Unimportant got a reaction from reniat in RAM speed in programming   
    I'd worry much more about being cache friendly if you need performance.
  4. Informative
    Unimportant got a reaction from Legolessed in How do parallel battery packs not have issues?   
    Rather importantly, yes! When you connect the batteries together to make a pack, they should be balanced. If you put 2 batteries with different charge levels (different voltages) in parallel, the 2 batteries will balance themselves, but with no current limit the process can be violent. Once connected in parallel there is no issue - look at them as communicating vessels.
  5. Agree
    Unimportant got a reaction from Breadpudding in RAM speed in programming   
    I'd worry much more about being cache friendly if you need performance.
  6. Agree
    Unimportant got a reaction from Levent in How do parallel battery packs not have issues?   
    Rather importantly, yes! When you connect the batteries together to make a pack, they should be balanced. If you put 2 batteries with different charge levels (different voltages) in parallel, the 2 batteries will balance themselves, but with no current limit the process can be violent. Once connected in parallel there is no issue - look at them as communicating vessels.
  7. Agree
    Unimportant got a reaction from Hackentosher in How do parallel battery packs not have issues?   
    Rather importantly, yes! When you connect the batteries together to make a pack, they should be balanced. If you put 2 batteries with different charge levels (different voltages) in parallel, the 2 batteries will balance themselves, but with no current limit the process can be violent. Once connected in parallel there is no issue - look at them as communicating vessels.
  8. Funny
    Unimportant got a reaction from Breadpudding in Operating System Creation   
    That's what AT&T syntax will do to a person.
  9. Agree
    Unimportant got a reaction from anselmo in Bootable USB with a picture   
    Should work with Rufus; follow this simple guide: https://www.thomas-krenn.com/en/wiki/Creating_a_Bootable_DOS_USB_Stick
    Make sure all the settings are the same. You need FAT32, for example, as DOS does not understand NTFS.
  10. Informative
    Unimportant got a reaction from wasab in C++ | Can't understand Polymorphism   
    Only the Entity part of hero will be copied to entity. All Hero-specific fields are lost, "sliced off".
  11. Agree
    Unimportant got a reaction from minibois in Gameboy corrosion   
    Use an acid, like vinegar, first to neutralise the alkaline residue. Once it stops fizzing, clean the vinegar off with IPA. Otherwise the corrosion will continue.
  12. Informative
    Unimportant got a reaction from Hi P in C++ | Can't understand Polymorphism   
    The point is that the calling code only needs to know about Entity. In a real game, you're probably going to have to keep your entities in some kind of container, such as a std::vector. Are you going to create a Hero vector, a Monster vector, and god knows how many more vectors for each type of entity in your game? Or just one Entity vector (of pointers or reference_wrappers) that can hold all Entities?
  13. Agree
    Unimportant reacted to mariushm in Awesome reballing Job   
    Some faults are due to bad solder balls.
    However, some faults are actually detached gold bonding wires between the chip and the substrate that holds the copper pads on which the balls are placed. It just happens that when you heat the chip to apply the new solder balls, the internals also heat up enough that those bonding wires sometimes get reattached.
     
    There are also cases like what nVidia experienced, where they made a bad choice when it comes to substrate soldering ... most of their 65nm and 55nm chips were flawed, and reballing would only help temporarily as the flaws manifested as solder fatigue (heating and cooling cycles) - so you reball the chips and they may work for a few weeks, until enough power-on and power-off cycles (heat up, cool down) accumulate and the internal solder fatigues / breaks down again.
    See https://www.theinquirer.net/inquirer/news/1004378/why-nvidia-chips-defective
    or https://techreport.com/news/15720/chip-failures-nvidia-responds-at-last
     
    This is one of the reasons Apple no longer works with nVidia - nVidia refused to help pay for part of the repairs and replacements of the nVidia cards dying in laptops and various Apple products. The Apple management got so pissed off at nVidia that they "blacklisted" them; they're now even refusing to accept new nVidia drivers in their operating systems, so new nVidia cards don't work well on Apple machines. See https://appleinsider.com/articles/19/01/18/apples-management-doesnt-want-nvidia-support-in-macos-and-thats-a-bad-sign-for-the-mac-pro
     
    Onkyo also had a lot of issues with a DSP chip from TI, then with an HDMI mixer chip, and then a network chip ... reflowing or reballing the DSP chip would help for brief periods of time, making the unit work only to have it fail again within a year ... it was an internal flaw of the chip.
    See https://www.avsforum.com/forum/90-receivers-amps-processors/1652514-onkyo-acknowledges-failed-units-extending-warrranties-until-2018-a.html
     
  14. Informative
    Unimportant got a reaction from Faisal A in building my own PSU and don't know what parts to choose   
    With the questions you're asking, you probably should not be doing any live mains projects. Not only because of the obvious dangers involved, but also because you won't learn much when you blow up the prototype with every little mistake you make. With a low voltage project you can use a current limited bench power supply and keep your prototype relatively safe.
     
    I'd agree with @Curious Pineapple and aim for a buck converter. Check out the TL494 PWM controller. It will handle the PWM generation and most of the control loop (might require extra external frequency compensation). You can then design your own synchronous output stage with 2 MOSFETs and a half bridge gate driver. I'm particularly fond of the MIC4605, with automatic dead time control and a low price tag. Read up on MOSFET theory to be able to pick the right parts for your requirements. You can't just haphazardly pick random MOSFETs like you were doing. Other things to look out for are inductor selection (TI has some nice appnotes about this) and proper power supply decoupling. Board layout is absolutely critical or the thing will ring like a bell. Learn what the 2 main current loops in a buck converter are so you can figure out where the large dI/dt's occur and design the PCB accordingly. Don't try to build any of this on breadboard or prototype board; go straight to a manufactured PCB.
     
    Then you'll need equipment like a current limited bench power supply and an oscilloscope (a real one, not a toy ebay kit). Check out Siglent and Rigol for affordable but decent low end DSOs. Or find an older second hand analog scope; there seem to be some nice cheap Hamegs out there lately. If you're not prepared to make this investment then forget about the project - you can't measure and troubleshoot something like this with a plain multimeter, you need to be able to see your PWM signals, ripple, etc. It'll also teach you proper probing techniques, because scoping a buck converter will show the naive technician lots of ghosts.
     
    Relevant reading:
    http://www.ti.com/lit/an/slva001e/slva001e.pdf
    http://www.ti.com/lit/an/slva477b/slva477b.pdf
    http://www.ti.com/lit/an/slyt670/slyt670.pdf
    http://ww1.microchip.com/downloads/en/AppNotes/00799b.pdf
     
  15. Informative
    Unimportant got a reaction from Hi P in Creating GUI - Is it cheating?   
    No, there's nothing wrong with using a visual design tool.
     
    No one is interested in tons of boilerplate code that does nothing but setup a window and some controls. Doing it manually is a pain to create and maintain.
     
  16. Agree
    Unimportant got a reaction from v0nN3umann in C++ | Class Private Method   
    While true, I'd advise against taking this too far. I would not take something that should clearly be a simple struct, make its members private, and add a bunch of getters and setters just in case things change.
     
    There are enough refactoring tools these days to make those kinds of changes easily and quickly. And if a simple public data structure suddenly has to uphold an invariant, there are probably greater changes to worry about.
  17. Informative
    Unimportant got a reaction from Hi P in C++ | Class Private Method   
    Yours is a very simple example and indeed, one can wonder if this should not be a simple struct:
        struct Person
        {
            std::string name;
            int age;
            //etc...
        };
    However, it becomes more complicated if we have certain conditions to uphold, for example if we require a person to have a valid, non-empty name and a sensible age. This is called the class's invariant.
        class Person
        {
        public:
            Person(std::string name, int age) :
                mName(std::move(name)), mAge(age)
            {
                if (!VerifyName(mName) || !VerifyAge(mAge)) {
                    throw std::runtime_error("Tried to construct Person with invalid name or age!");
                }
            }

            int GetAge() const { return mAge; }
            const std::string& GetName() const { return mName; }

        private:
            std::string mName;
            int mAge;
        };
    In this example the Person's constructor requires a name and an age to be given. The constructor checks that both are valid and, if not, throws a runtime_error exception, which aborts the creation of the instance. Thus, it prevents one from creating an invalid person.
     
    The GetAge and GetName functions can only look at the name and age but not modify them (GetAge returns a copy, GetName returns a const reference).
    This, of course, requires name and age to be private. Otherwise anyone would be able to write whatever they wish to them and break our invariant.
  18. Agree
    Unimportant reacted to Dat Guy in Memory Management   
    Java does not have destructors. Finalizers are not exactly the same thing.
  19. Agree
    Unimportant reacted to Dat Guy in Memory Management   
    Solved in C++:
    https://en.cppreference.com/book/intro/smart_pointers
  20. Informative
    Unimportant got a reaction from Hi P in Memory Management   
    Files should be closed when you're done with them. A system can lock up just as hard from running out of file handles as from running out of memory. Mutexes should be released when you're leaving a critical section, etc. Anything that must be acquired/opened/built and then released/closed/destroyed once you're done with it qualifies as a resource. RAII treats all these things the same way; memory is nothing special. I don't see how programmers who depend on a garbage collector like a crutch would somehow suddenly find the discipline to clean up all the other resources that a garbage collector does not handle.
     
     
  21. Informative
    Unimportant got a reaction from Hi P in Memory Management   
    I personally don't feel much for garbage collectors. Memory is just another resource. If you can't be bothered to free the memory you no longer use, why would you be bothered closing your files or closing your sockets?
     
    C++ RAII techniques allow you to effectively manage *all* kinds of resources, not just memory. 
     
    But you're somewhat poking at a holy war here, tread lightly.
  22. Informative
    Unimportant got a reaction from Kamjam66xx in RenderEngine   
    Did a quick review, and there are some problems. Some minor and some very serious.
    Some of your classes that try to manage resources actually don't. Look up the rule of 3/5/0. Here's an example from Skybox:
        //Skybox.h
        private:
            Mesh *skyMesh;
            Shader *skyShader;

        //Skybox.cpp
        SkyBox::SkyBox(std::vector<std::string> faceLocations)
        {
            // shader
            skyShader = new Shader();
    You've got some raw member pointers for which you then allocate memory with new. What happens when a Skybox instance gets copied? The default copy constructor will simply copy those raw pointers, so now you've got 2 skyboxes pointing to the same resources. When one modifies those resources, the changes will be reflected in both, leading to weird bugs. Furthermore, when one skybox is destroyed and frees its memory (you've commented out the deletes in the destructor for some reason? - now you're leaking memory), the other skybox is left pointing to deleted resources. The rule of 3 (pre C++11) used to say that if your class manages resources like this you must write:
    Copy constructor and assignment operators that perform a deep copy. That is, allocate their own memory and copy the contents over so now both instances have their own resources.
    A destructor that frees the resources.
     
    This then later became the rule of 5 in C++11 as move semantics were added. If applicable, you should now also add a move constructor and move assignment operator that can steal the source's resources cheaply.
    The rule of 0, which I subscribe to, states that classes that manage resources, and thus have custom copy/move constructors/assignment should deal exclusively with the ownership and management of their resource. Other classes can then use these RAII objects as members so they themselves don't need to worry about any of it and need 0 custom copy/move operators.
     
    So, in short, you should probably wrap the management of skyMesh and skyShader into another class that's solely responsible for managing it.
     
    One could argue that there only ever needs to be a single Skybox and it won't be copied. In that case you should delete the copy constructor and assignment operator to prevent accidental copies. Even then you should wrap those pointers into a std::unique_ptr that manages their lifetime for you so you can't leak. You should not be using naked new and delete.
     
    You make similar resource management errors throughout your code, here's another example:
    You give a C style string with the file location to the constructor of this class, which then stores it:
        //Texture.h
        private:
            //<snip>
            const char* fileLocation;

        //Texture.cpp
        Texture::Texture(const char* fileLoc)
        {
            //<snip>
            fileLocation = fileLoc;
            //<snip>
        }

        bool Texture::LoadTextureA()
        {
            // loads image. one and done.
            unsigned char *texData = stbi_load(fileLocation, &width, &height, &bitDepth, 0);
    A C style string is nothing but a char pointer pointing to the actual string, which lives somewhere outside the Texture class. Then, possibly at a much later time, you use this pointer in your member functions. What if the string is already gone by then and the pointer is dangling? For small aggregates like this it makes no sense to have the caller worry about the string's lifetime. Simply make fileLocation a std::string so each Texture has its own file location stored safely inside, not depending on the outside world.
    There are lots more places in your code where you use C style strings. You should convert them all to std::string to minimize the chance of similar bugs. If you need a C style char pointer to pass to the GL functions, use std::string::c_str to get one from the std::string at the very last moment.
    Signed/unsigned mixing, for example:
        for (size_t i = 0; i < (MAX_POINT_LIGHTS + MAX_SPOT_LIGHTS); i++)
    MAX_POINT_LIGHTS and MAX_SPOT_LIGHTS are signed int, so the result of adding them is also signed int, which is then compared to a size_t, which is unsigned. Your compiler should emit a warning for this. Mixing signed and unsigned can lead to different results than what you were expecting. <more info>. It's probably not a problem here because the numbers are so small, but it is a general code smell you should watch out for. My way of doing things is to always use signed for everything math related (bit manipulation is something different), even things that can't be negative. It's much better to catch a value that can't be negative being negative in the debugger than to see it as some huge overflown positive value that might escape detection. If you really need the extra bit, use a larger datatype. size_t is an abomination that should never have been unsigned (although understandable - it's a carry-over from C, from decades ago, when that extra bit really was required). When you get a size_t from some function call, convert it to signed asap. I have a custom template function that assigns the value of a size_t to a given signed variable and throws when it won't fit. In this case you can of course just change to "int i = 0".
    There are probably more things to find, but I guess this is enough to keep you busy refactoring for a while.
  23. Informative
    Unimportant got a reaction from Hi P in C++ | Stack and Heap Memory   
    Plain automatic stack based variables should be preferred wherever possible.
      • They are simple to use and understand.
      • You can't leak stack.
      • The stack is hot - it's where all the action is, and the cache controller knows that too. It's hard to get cache misses near where the stack pointer is pointing; heap objects can be anywhere.
      • Allocating from the heap is relatively slow. (Keep that in mind when using memory handles like std::vector. Declaring a std::vector inside a hot loop body might not be the smartest thing.)
      • In modern C++, with move semantics, moving a memory handle like std::vector is nearly as cheap as copying the underlying raw pointer. So you can put the std::vector on the stack, move it around cheaply all you want, and let it handle all the heap stuff.
    That said, some reasons to use the heap:
      • Allocating large amounts of memory. (But use std::vector instead.)
      • Allocating an amount of memory that is unknown at compile time. For example, if you need to load the contents of a user-selected file into memory, you only know how large the file is, and thus how much memory you need, once the user has selected the file at runtime. C++ does not support variable length arrays - the size of all stack based objects must be known at compile time. So allocating a variable amount of memory can only be done from the heap. (But use std::vector instead.)
      • Objects that should not move. Imagine a "Document" class that encapsulates everything that represents a document, and another class called "DocumentViewWindow" that represents a window graphically displaying a document's contents. The view holds a pointer to the document it's currently displaying, so changing document is as simple as changing the pointer. The document currently being viewed should not move, as this would invalidate the view's pointer to it. One way to make absolutely sure of this and prevent bugs is to delete the document's move constructor and move assignment (to prevent accidental moves) and allocate documents dynamically on the heap. That way the document will sit at the same position in memory for its entire lifetime.
      • When certain design patterns are used that simply require it, such as the abstract factory pattern. When you only know at runtime which implementation of an interface to instantiate, you've little choice but to do it dynamically.
      • When ObjectA is a member variable of ObjectB, the full definition of ObjectA must be available for the definition of ObjectB. In larger projects this can lead to circular dependencies. If you instead make an ObjectA (smart) pointer ObjectB's member, you can get away with only a forward declaration. This, however, requires dynamically allocating the ObjectA.
    For clarity: dynamic allocation means heap.
     
    Might be some more reasons but these are the important ones that readily spring to mind.
  24. Informative
    Unimportant got a reaction from Hi P in C++ | Stack and Heap Memory   
    You can imagine the stack like a stack of dishes. You add and take plates from the top, the last plate you put on is the first you take off. You never add or take plates from the bottom. Thus, the stack automatically grows when we add stuff we need on top and automatically shrinks when we remove stuff we no longer need. This serves as a form of automatic memory management. This is why variables on the stack are called "automatic variables" in C and C++.
     
    Take the following code sample:
        void Func1()
        {
            int var3;
            //Do work...
            Func2();
        }

        void Func2()
        {
            int var4;
            //Do work...
        }

        void Func3()
        {
            int var5;
            //Do work...
            Func4();
        }

        void Func4()
        {
            int var6;
            //Do work...
        }

        int main()
        {
            int var1;
            int var2;
            Func1();
            Func3();
            return 0;
        }
    The program starts execution at main. Then 2 automatic variables, var1 and var2, are put on the stack, so the stack looks like this:

     
    Then, function Func1 is called. This function needs to know where it should return to when it is done, so the return address for Func1 is put on the stack as well:

     
    Func1 has its own local variable var3 and will call Func2, which again has an automatic variable, and so in the end things look like this:

     
    This is as deep as this program goes, when Func2 ends and returns, var4 - which is only visible from within Func2 - is no longer required and thus removed from the stack. The return address for Func2 is also removed from the stack and used to be able to return to the right place. The same thing then happens for var3 and Func1 and so on. This is called unwinding the stack.
     
    main will then call Func3 and the stack will be grown again...


    ...and again be unwound when Func4 ends.
     
    You can clearly see that even though there are 6 automatic variables throughout our program, there are never more than 4 on the stack at the same time (ignoring return addresses). That's because, for the code path in the example, there are never more than 4 variables in use at the same time, and the stack automatically manages this.
    This is simple, elegant and effective.
     
    In practice, it's somewhat more complicated as function parameters and such are also pushed onto the stack but I omitted such details for clarity.
     
    You can also see that the lifetime of objects clearly follows program structure. What if we want to manually determine the lifetime of an object? This is where the heap comes in. You can look at the heap simply as a large pool of memory from which you can request a chunk (allocate) and free a chunk whenever you wish. When you allocate some heap memory with "new", the object(s) you put there will exist until you manually free them with "delete". Forgetting to free such a chunk of memory is called a memory leak. (Which is why you should use smart pointers in C++ these days and avoid naked new and delete.)
     
    Another reason to use heap memory rather than the stack, besides object lifetime, is to allocate large amounts of memory. The stack is typically fairly small - a handful of megabytes - while the heap can be pretty much as large as an application can address on a given platform. So if you need many megabytes of RAM to load some wave sound samples, for example, those should go on the heap.
  25. Like
    Unimportant got a reaction from Bitter in Burnt MOFSET - A2726   
    Is the gate shorted to drain or source? If it is, then the gate driver is probably dead as well; those tend to be integrated into the PWM controller IC these days.