Everything posted by Unimportant

  1. Mainly because of government-guaranteed loans. Because of those guarantees, students can borrow vast amounts of money they would not have been able to borrow otherwise. No lender would be crazy enough to lend that much to a young person who hasn't done the first thing toward proving they can ever pay it back, absent government guarantees. Students then turn around and bid the price up using that money. It's a vicious circle. If students weren't able to borrow so much money, many of them could not afford to go to college, and colleges would have to slash prices to keep their seats filled. But because the guarantees have driven prices up so high, it looks to most people as if the guarantees are necessary and a good thing rather than the main cause of the problem. A secondary reason is the current mentality that everyone has to go to college. Standards have been lowered and ridiculous, completely useless courses have been added to let more people make the cut, which adds to demand and drives prices up. But many people just aren't college material, and it's a waste of everyone's time and money to force them through a dumbed-down, useless course anyway.
  2. No, see my post above. Even the committee and Bjarne Stroustrup have repeatedly admitted that the STL got it wrong when it used unsigned types for subscripts and sizes. (http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p1428r0.pdf)
  3. Now put a debugging breakpoint on the statement. (Only on the statement, not the entire if, so the breakpoint only trips when the condition is met.)
  4. It'll come back to bite you, or the one having to maintain your code, in the a**. How easy is it to accidentally misread, or make a modification in the wrong place, without the braces? Everyone who claims "they won't be making such mistakes" is probably the first to do so. Get all the help you can get! Putting everything on a single line also does not allow putting a breakpoint on the conditional expression. No, you declare variables when you need them, in the scope you need them. That way you don't pollute a wider scope than necessary. The variable only lives as long as it needs to and no longer (which becomes critical for more complex types - RAII - so it should become a habit), and you can immediately initialize it with the proper value rather than a dummy value, which in turn allows more variables than you'd think to be const. https://stackoverflow.com/a/3773458/1320881
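A minimal sketch of the declare-at-use style described above (the helper function and all names are made up for illustration):

```cpp
#include <string>
#include <vector>

// Hypothetical helper, for illustration only.
std::string make_label(int id) { return "item-" + std::to_string(id); }

int total_length(const std::vector<int>& ids) {
    int sum = 0;
    for (int id : ids) {
        // Declared at first use, immediately initialized with the proper
        // value, and const: no dummy value, no wider scope than needed.
        const std::string label = make_label(id);
        sum += static_cast<int>(label.size());
    }
    return sum;
}
```

Because `label` is initialized in one step, it can be const, and it is destroyed at the end of each loop iteration rather than lingering for the whole function.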
  5. Why do you declare all your variables at the beginning of the function, as if this were old C? Declare them when you need them, in the scope you need them. This should throw up a bunch of warnings for signed/unsigned comparisons. It's also considered good practice to use braces even if there's only a single statement inside them: if (...) { ++j; }
  6. The Siglent SDS1204X-E comes in at the upper end of your budget (659€ excl. VAT RSP, but you can find deals) and ticks most of your boxes: 7" screen, 4 channels (2x2 shared), 200 MHz, 1 GSa/s, FFT and SPI/I2C decoding. It does not have a function generator, but I'd buy a separate device for that anyway. It also does not have a VGA/HDMI output - I don't think any of the affordable devices do - but it does have a built-in web host that lets you visit the scope's "web page" from any PC connected to the network and control/view it. We've got a couple of those in the lab as daily drivers to free up the expensive stuff, and I can't really fault it for the price.
  7. I've got hundreds of second-hand 18650 cells pulled from laptops, cordless drills, etc. Their wear levels are all over the place, and even though I try to select cells that test at similar capacities to make a battery pack, I find that most commercial balancer solutions simply refuse to charge such dissimilar cells. So I designed this. It charges all the cells in series, and each cell has a bypass transistor parallel to it (on the bottom of the board on a heatsink). The bypass transistor starts conducting at 4.15 V and shunts the charging current around the cell, preventing it from going over 4.15 V while the other cells continue charging. When all cells are bypassed, the battery pack is full. It also does not charge if any cell is below 2 V unloaded (damaged-cell safety) or has reversed polarity.
  8. But not a single PSU has an overcurrent protection that trips at that low a current. Overcurrent protection/fuses are mainly meant to protect the wiring against overcurrent. The safe bet is to use wire that can at least handle the trip current of the PSU.
  9. @Hi P Many clean-code guidelines/techniques can be applied to any language. Of course it depends on the language paradigm: object-oriented techniques obviously can't be applied to a functional-only language (and attempting to mimic them is one source of ugly code). That said, you should probably get books that apply to a language you know pretty well, so you can understand the examples laid out in the book. Also don't overlook the wealth of free information in the form of talks on YouTube. In the C++ world you've got the CppCon channel, "Going Native" and the "Clean Code Talks", for example.
  10. Aggressive lithium-ion series balance charger: charges up to 4 Li-Ion cells in series no matter how unbalanced/different the batteries are. Class-D amplifier power stage prototype, meant to mate with easily swappable PWM stage prototypes for development purposes: this is the third iteration, and I'm pretty stoked that this thing hardly radiates despite 15 ns transition times at 50 V and up to 10 A. All my other PCBs currently on the bench are work related, and I'm not allowed to show them.
  11. Perhaps I expressed myself poorly. What I meant was that I feel it's impossible to have a block of hundreds of lines of code that is clean and elegant. It's the splitting up of things into functions and creating abstractions - naming things in the process - that makes code clean, elegant and readable. Imagine having a function with 25 lines of code that pulls something from a database, then 20 lines that mutate the thing, and then another 20 lines that put it back. Even when written properly and without any clever tricks, that probably still looks pretty cryptic at first glance and might take a bit of time just to figure out you're doing three things. It's when you split things up and name them that it becomes readable: auto employee = RetrieveFromDB(key); employee.SetAddress(newAddress); SaveToDB(employee, key); And in each of these three functions you do the same, until - as you'll often find - the complexity magically goes away. The concepts are inseparable. So to me, there is no choice. It's either "cryptic" or it's easily readable, and the latter implies small and manageable pieces/abstractions. As for "having no time": I've found that to be a bogus argument. Doing it correctly actually saves you time. You create functions and useful abstractions, and after a while you'll find yourself re-using them over and over. For example, while writing code to draw a grid you might create a "StepValue<T>" class that takes a "from", "to" and "stepSize" - which, as can be gathered from the names with no comments required, steps a value from some start value to some destination value. Before you know it you'll find yourself re-using that thing throughout the entire application, because it's something that's done surprisingly often. It also saves on debugging: once the "StepValue<T>" class is tested and bug-free, it stays bug-free no matter how many times you reuse it.
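The "StepValue<T>" idea could be sketched roughly like this; the exact interface (method names, clamping behavior) is my assumption, not the poster's actual code:

```cpp
#include <algorithm>

// Sketch of a StepValue<T>: steps a value from "from" toward "to" by
// "stepSize", clamping at the destination so it never overshoots.
template <typename T>
class StepValue {
public:
    StepValue(T from, T to, T stepSize)
        : current_(from), to_(to), stepSize_(stepSize) {}

    // Advance one step toward the destination and return the new value.
    T Next() {
        if (current_ < to_) {
            current_ = std::min<T>(current_ + stepSize_, to_);
        } else if (current_ > to_) {
            current_ = std::max<T>(current_ - stepSize_, to_);
        }
        return current_;
    }

    bool Done() const { return current_ == to_; }

private:
    T current_, to_, stepSize_;
};
```

Once a small class like this is tested, every grid line, animation or fade that needs a stepped value can reuse it instead of re-deriving the loop logic.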
  12. No, because you'd split up the "hundreds of lines" into more functions still, each of which is small and manageable. And you'll find lots of the complexity magically disappears. Whereas cryptic code will always be cryptic by definition.
  13. It did properly store the integer in the 8-bit variable (assuming it fits). It's just that operator << for streams has overloads for all basic types, and since int8_t is often an alias for a character type, it chose the overload that prints a char. Static-cast it to int to have it call the int overload: int8_t i = 5; std::cout << static_cast<int>(i);
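A compilable sketch of the point above, assuming int8_t is an alias for a character type on your platform (it almost always is):

```cpp
#include <cstdint>
#include <sstream>
#include <string>

// Streams the same 8-bit value twice: once via the character overload
// of operator<<, once cast to int so the integer overload is chosen.
std::string print_both(std::int8_t v) {
    std::ostringstream out;
    out << v << ' ' << static_cast<int>(v);
    return out.str();
}
```

With v = 65 the first insertion prints the character 'A' (ASCII 65), while the cast makes the second insertion print the number 65.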
  14. Fair enough. On systems that small, the application is probably so small anyway that it's still manageable to keep track of everything and avoid the bugs I described in my previous post. Lots of "best practices" go out the window on tiny systems because memory is too precious, but luckily with tiny systems come tiny programs.
  15. Several reasons:

1) The arithmetic conversions make it so that mixing signed and unsigned is the source of a whole class of bugs. Let's take the classic example: unsigned int i = 1; if (i < -1) { std::cout << "Apparently, 1 is smaller than -1"; } Which prints: Apparently, 1 is smaller than -1. Or: unsigned int j = 1; int i = 0; //<-signed! std::cout << i - j; Which prints 4294967295 (on a typical platform with 32-bit int), and does not give warnings on gcc, even with -Wconversion set. Both of these problems are caused by the fact that the signed value is converted to unsigned prior to the operation. Of course, if the signed type is wider, the rules change: unsigned int j = 1; long long i = 0; //<-still signed! std::cout << i - j; Which prints -1, as expected. In small, trivial examples like these it's easy to see what's going on and where the problem is. But in a real application, where the hardcoded numbers might themselves be variables or equations, it's easy to lose track quickly and introduce bugs.

2) Unsigned ints model modular arithmetic, not non-negative integers. When it's not okay to have negative numbers, having wraparound is also not okay in most cases. It's much safer to have underflow at INT_MIN, a value we won't even get close to most of the time, than to have it just below 0, a value we work near all the time. This ties in with the advice to use unsigned types to document that a value can't be negative, which sounds all fine and dandy until you need the difference between two things that can never be negative. Imagine we're writing some code to regulate the speed of an electric motor. Also imagine the motor can only spin one way: it's mechanically impossible to spin the other way, so we don't need negative speed values. Let's document that using unsigned values - you get gems like this: unsigned int targetSpeed = 3000; //RPM unsigned int measuredSpeed = 3005; std::cout << "We need to adjust speed by " << targetSpeed - measuredSpeed << " RPM"; Which prints: We need to adjust speed by 4294967291 RPM (again assuming 32-bit int). Yes, you can code around this by checking which number is bigger first, and then only subtracting the small number from the big number, and then... That does not sound like you're making the code any clearer or more maintainable. There's a reason pointer arithmetic returns a ptrdiff_t, which is a signed type.

3) It's easier to debug. It's much better to catch a value that should not be negative being negative red-handed than to have it wrap around to some huge value and possibly escape detection. Note that using unsigned types to document that a value should not be negative does not enforce anything: void i_only_take_unsigned_values(unsigned int i) { std::cout << "But not really, I just turn them into a really large number! " << i; } int main() { i_only_take_unsigned_values(-1); return 0; } Which prints: But not really, I just turn them into a really large number! 4294967295. So it does not really help much. Worse, you've lost the information you needed to catch the bug, namely that a negative number was passed originally. Why not just accept signed values and actually check? void i_only_take_unsigned_values(int i) { MyAssert(i >= 0); //Throws when condition not met. //... } C++20 will give us some nice new toys in the form of contracts to improve this even further.

4) In most cases, signed arithmetic is faster, because signed overflow is undefined behavior while unsigned overflow is perfectly defined. The fact that signed overflow is undefined opens up a whole range of optimization possibilities for the compiler.

Safely casting an unsigned value to signed makes sense in a whole lot of cases. If I have a std::vector that holds all the open documents for my application, is it any problem to safely cast the vector size to int? Having 2+ billion documents open at a time is ridiculous and won't ever happen. For things that can truly be large there's int64_t, and anything bigger than that is probably something very special that would benefit from a very large number class anyway. Safe casts are available pre-made, for example gsl::narrow<> from the Guideline Support Library. Some people will argue one should just use signed values when required and unsigned values elsewhere, and "simply" not mix them. That's poor advice, because in a real program values and their types propagate throughout the program, and somewhere, someplace, those values will meet in some comparison or equation.
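A hand-rolled checked narrowing cast in the spirit of gsl::narrow<> might look like this; the real library differs in details (it throws gsl::narrowing_error), so treat this as a sketch of the idea only:

```cpp
#include <stdexcept>

// Checked narrowing conversion: casts, then verifies that the value
// round-trips unchanged and that the sign did not flip. Throws if the
// conversion would silently change the value.
template <typename To, typename From>
To narrow(From value) {
    const To result = static_cast<To>(value);
    if (static_cast<From>(result) != value ||
        ((result < To{}) != (value < From{}))) {
        throw std::runtime_error("narrowing changed the value");
    }
    return result;
}
```

The sign check matters because a huge unsigned value can round-trip through a signed type on two's-complement hardware while still meaning something completely different.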
  16. Even if we assume you're monitoring the data correctly, did you expect the car's computer to spit out human-readable ASCII text? A protocol is more than just the electrical connection and the way individual bits are sent; it also includes the way the data is formatted - the "language", if you will. Without exact knowledge of this format and some program code to decode it, the raw data will of course look like nonsense.
  17. Types like short int aren't used much anymore, indeed. If you need a fixed-width integer for some reason, use the C99 fixed-width integers as already mentioned. (But keep in mind these are optional - for example, if CHAR_BIT > 8 there is no (u)int8_t, because it would have to have padding bits, and the fixed-width integer types are defined as having no padding.) However, plain int is often used (as it should be). The reason int has no fixed width (but must be able to hold at least [-32,767, +32,767]) is that it gives implementers the opportunity to give int a width that is handled best by the target platform. For example, if the standard had forced int to be 32 bit, then all 16 bit machines would be inefficient at working with ints. Vice versa, if the standard had forced int to be 16 bit, then any 32 bit platform that cannot handle unaligned access would have to constantly mask values. Instead, compiler implementers simply make int 16 bit on the 16 bit platform and 32 bit on the 32 bit platform. Thus, one should use int wherever possible, when sure int will be large enough to hold all possible values. Many programmers of desktop applications assume int to be at least 32 bit (the Google style guide does too), as it has been 32 bit on PCs for the longest time, and the chance that a modern desktop application would ever have to be ported to some small 16 bit system is unlikely. Only on 8-bit systems, such as tiny microcontrollers, does this not apply, because even the smallest allowed int width is too much for an 8-bit system to handle efficiently. Use signed int wherever you can; you may assume int is at least 32-bit for desktop platforms [-2,147,483,647, +2,147,483,647]. If you get any unsigned values (returned from an API function call or whatever), safely convert them to signed as soon as possible. If you need the extra bit of range, use a larger datatype instead; don't go unsigned. Only use unsigned and fixed-width types when there's a reason - for example, when doing bitwise operations.
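A small sketch of that split: unsigned fixed-width types where exact bit patterns matter, plain int for ordinary arithmetic (the function names are made up):

```cpp
#include <cstdint>

// Bit manipulation on a fixed-width unsigned type: shifts and
// wraparound are well-defined and the layout is the same everywhere.
std::uint32_t set_flag(std::uint32_t reg, int bit) {
    return reg | (std::uint32_t{1} << bit);
}

bool flag_is_set(std::uint32_t reg, int bit) {
    return ((reg >> bit) & 1u) != 0;
}

// Ordinary arithmetic on plain signed int: the natural width for the
// platform, and negative intermediate results behave sanely.
int average(int a, int b) {
    return (a + b) / 2;
}
```

The register word is a bag of bits, so uint32_t fits; the average is a quantity, so int fits, and average(-3, 1) correctly yields -1 instead of wrapping.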
  18. No. Not only are the transistors connected wrong, those large(ish) transistors also don't have enough amplification and are too slow. What do you mean there are no gate drivers? There are loads - the IR2110, for example, is often used by hobbyists.
  19. As already said in an earlier post, this is probably due to the C runtime library you bind to not handling the C99 length modifier "hh" (as in "%hhu") correctly.
  20. uint8_t is one of the fixed width integers that was added in C99. Their very purpose is to have fixed width integer types that behave the same on all platforms. As such, the problem with your test code was probably elsewhere. Perhaps it invoked undefined behavior or behavior which can be confusing to the beginner (such as implicit arithmetic conversions).
  21. You can find some inspiration in the C standard library function "strcat", which performs the same job: char *strcat( char *dest, const char *src ); Source: https://en.cppreference.com/w/cpp/string/byte/strcat The string pointed to by "src" is appended to the string pointed to by "dest". It's up to the caller to make sure "dest" points to a memory block large enough to hold the resulting string plus the null terminator. This removes the immediate need for dynamic memory allocation and shifts the choice to the caller, which is preferable - a plain static array may suffice, and it's not up to the library writer to make/force such choices. You may want to model your own implementation of string concatenation the same way.
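For reference, a strcat-style append following the convention described above (the caller guarantees the destination buffer is big enough) could be sketched like this:

```cpp
// Appends src to dest, strcat-style. The caller must guarantee that
// dest points to a buffer large enough for the combined string + '\0'.
char* my_strcat(char* dest, const char* src) {
    char* p = dest;
    while (*p) ++p;              // walk to the end of dest
    while ((*p++ = *src++)) {}   // copy src, including the terminating 0
    return dest;                 // return dest, like the standard strcat
}
```

Note there is no allocation anywhere in the function: the memory-management decision stays entirely with the caller, exactly as with the standard library version.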
  22. Probably because you forgot to terminate the string with a 0. You need to allocate 1 byte more than the string length and put a 0 at the end. The 0 marks the end, so functions like printf know where to stop (the integer value 0, not the character '0').
  23. Undefined behavior. You allocate a single byte and then proceed to write additional bytes into the memory following the allocated byte - memory that does not belong to you. The C standard leaves this situation undefined. You should not do it, but anything can happen if you do, including the program working normally. By leaving such situations undefined, compiler implementers don't have to add various checks and tests to your code, which is the main reason C and C++ can produce such fast code.
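Putting the two posts above together: a correct allocation reserves strlen(s) + 1 bytes so the terminating 0 fits inside the block you own. A sketch (the function name is made up):

```cpp
#include <cstdlib>
#include <cstring>

// Copies a string into freshly allocated memory, sized so the
// terminating 0 fits. The caller owns the result and must free() it.
char* duplicate(const char* s) {
    const std::size_t len = std::strlen(s);
    char* copy = static_cast<char*>(std::malloc(len + 1)); // +1 for '\0'
    if (copy) {
        std::memcpy(copy, s, len + 1); // copies the terminator too
    }
    return copy;
}
```

Every byte written, including the terminator, lands inside the len + 1 bytes that malloc handed out, so there is no write past the allocation.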
  24. Very - there's often more planning than code-writing going on. More precisely, the objects (classes) and their layout/relation to each other should be worked out before you start coding. You can't just go at it, start writing a bunch of classes haphazardly, and only then consider how everything is going to fit together. Yes, classes should be lightly coupled, and for some classes that can be done. But in a large project there will be some tight coupling going on whether you like it or not, and you'd better have planned for it. The actual pitch level isn't of much importance - rather the fact that there is an elevator, that its pitch can be changed, what elements are affected by changes in the elevator, where the elevator fits in, who's responsible for it, etc. Planning is a team effort over here, so when you're brought into a new project there hasn't yet been any planning - the team does the planning first before commencing. I've never been brought into an existing project where everyone was working without a plan; I don't think such projects would be long-lived anyway.
  25. By setting examples by spilling blood - as socialism eventually always does.