DevBlox

Member
  • Posts

    251
  • Joined

  • Last visited

Everything posted by DevBlox

  1. Yeah, it's a good starting point. Game modding in general, too. That is something I tried to do but failed miserably: I tried to mod the STALKER games. A C++ engine with an embedded Lua engine for the logic; I couldn't figure things out to save my life back then
  2. DevBlox

    Next???

    I assume only browser stuff so far? Then start on NodeJS and TypeScript, backend applications basically. You'll learn how to make a full web stack. If you want something more low-level, see the comment above ^; my personal 2 cents would be C.
  3. Hello, usually a silent fly on the wall here. I just got a little bit nostalgic and decided it might be a good idea to ask the community what your roots in programming are. Motivations? The spark? How did you learn? When did you learn? What did you learn? Do you still program nowadays? Maybe someone starting out can take a bit of inspiration, and I get to read other people's experiences. Win-win. So, my short story here.

The background. Since a young age I've been exposed to computers. It was mainly my older brother (11 years older) who taught me the basics and taught me to play (first game: Half-Life, at age 4). He was never knowledgeable in programming, though. But what he would always do (and taught me to do) was pirate games and software. Buying all of that was not an option in my family; by my country's standards we weren't poor, but when you translated software prices from dollars to our currency the figures used to be very large and, needless to say, were not granted by our parents. Some pirated things are easy to set up, but some are a bit involved, and I always wondered how things work under there.

So, the origins: 14 years old, summer vacation, bored out of my mind (I think every friend was out of the city), I pick up an old, old laptop that dad had brought home from an old office. It ran Win 98 (this thing was ancient in 2009; now it would be a relic), had a 3D Rage GPU and a Pentium (I don't remember any more details, unfortunately). So I did the next logical step and just installed the original Doom and binged it. I remember stumbling upon some scripts and wondering how they worked. I said something to my bro, so he gave me a programming book (Turbo Pascal 7.0), and I just kind of started learning. Oh, and that laptop became my 'learning to program' machine, since I could keep it.

So: Turbo Pascal 7.0, with DOS-like console interfaces, Turbo Debugger. Over time I started using the family PC (which was old too: AMD Athlon XP 2800+ (K7!) CPU, NVIDIA GeForce FX5800). I started using the Lazarus IDE (which is still around, and I remember it being pretty good); I could write OS applications with a GUI! Having learned that much, it was time to approach the IT teacher at school. There I learned that in 10th grade you could choose to learn C++, or you could do crappy web pages. In my mind I had already picked C++, and I went straight home to learn it. I forgot all about Pascal after that, rightfully so. I even learned OOP with C++ to some extent and delved into the complex topics, but I wasn't really using them well; the quality of my code was crap for a long time still.

A year passes and it's time to learn C++ in class. At that point I knew more C++ than my teacher (she was a teacher who learned to program, not the other way around), so I ended up doing my assignments quickly and then helping others. Simultaneously, I discovered that a uni in a nearby city was doing remote 'lessons' for my age group. They were basically: you get an assignment, you solve it in a week, and a uni student gives feedback on your work. That program was awesome (for me at least). Some uni students wouldn't bother too much and would just not put any effort into teaching. But the one who looked at my solutions saw the spark in me, hammered down on my code quality and structure, and always encouraged me to find more than one way to a solution. This was all in text, basically. The next year I applied again, but that student wasn't really into teaching much and the complexity remained the same, so I did not learn much at that point.

But that year was the year I got the Raspberry Pi, the first model. This would be a moment you would call career-changing, but it was not a career at that point. That thing changed the way I looked at hardware. I researched so much that at one point I helped a guy on a used-to-be-popular forum (in my country at least) write 3D printer software in C++ on the RPi; we could work without meeting, since I could test my code myself on a screen. The code still wasn't really impressive, though, and I ended up not finishing the thing, unfortunately. This basically sums up my final year in high school.

I lived in a small town at that point, which meant I needed to move out for uni. By then I was also done with studying and wanted to work. So I found one internship (disguised as a company project for uni students); they were impressed with my skill, but I did not get hired. That whole after-project process was a bit lengthy, but I figured they just didn't like that I was that young (almost 19 at that point). That finished, uni starts, first year, and I'm just not ecstatic to be there; I want to do my own thing, d^#% it! It's so bad that after half a year of just uni I'm starting to fail, because this is not what I want. The upside: I made friends who are still friends to this day.

And suddenly, into my uni email pops an email about a 3-month internship (also disguised as a project for students, and it was paid, wow!). I won't really go into details about who this company is, but they were and still are amazing. They actually had multiple projects; I picked a web front-end one (I wanted to switch things up). I went to an interview (the first was an HR interview); she noticed my skills and immediately asked if I wanted to join another team, because there were stronger members there and I would fit right in. I said yes. That team was in charge of making a high-load weather-lookup-by-IP program. By itself it wasn't difficult. The catch was the high load, and that was some epic foreshadowing for me. Oh, and the tech was .NET C#, so I learned that.

So we made our application in those months and managed to crank about 7k requests per second out of a crappy laptop. We probably could have done more, but we weren't that experienced anyway. We were overseen by an actual dev team, which was awesome by itself, because we learned a lot. The other thing is that the reason the project was high-load is that the team had a high-load project of their own: talking about 1M requests per second, and the processing was not a walk in the park either. Responses also had to be sent off within 80 ms. The tech, of course, was C#.

So yeah, I got hired at age 19 into a company that tries its best to be on the edge of tech, whose projects are exciting as hell, and whose people are really nice. I ended up working there for 3 years. Halfway through, I switched to being a DevOps engineer, because I was good at Linux apparently (my team was all Windows-only) and was invited to do so by my manager, to which I said yes. I've since left the company. I felt (did not know, just felt) that my DevOps team (made up of me and another IT admin guy) was about to be disbanded (a global company reorganization was starting to happen), and even though I basically had the freedom to do what I wanted, I wanted to explore the waters (as this was the only company I had worked for yet), raise my paycheck, and kick myself in the butt a little bit. The next company is quite a bit more modest (the projects are PHP), but the people are really nice.

In the meantime, on the uni side, I got to dabble with synthetic biology and made some software for that in Go. That project was really interesting, and I got to travel to the US because of it. I'm probably going to get back to it at some point. My relationship with uni is turbulent, a mix between 'I need to get a degree' and 'I want to work and do exciting things'; I've skipped (postponed, kind of) some years. A lot to tell there, but I won't go into detail, this is already a lengthy one.

Back on the non-uni side: I worked for that company for half a year before deciding to travel with my gf, and the company offered to let me work remotely, which I'm grateful for. I quit uni and decided screw the degree, I'll get one when I want one (or desperately need one :D). So I'm still doing DevOps, for a company on the other side of the globe, travelling and working (I'm actually living in a van right now, working off of solar power lol). I'm still learning new things and trying to create something new (currently going at Go pretty hard). Still excited to create and to learn.

So that's it. tl;dr: I've programmed for 10 years; it has been my hobby and is now a career. I'm interested in what other people here have experienced. Tell me your story!
  4. My two cents would be to not plunge into C++; there's just too much to learn language-wise, the newer standards even more so. Being new, you won't utilise even 10% of the language and its features, so I would advise against confusing yourself: pick a language and learn it to the fullest. C is a good option for starting out with a bit lower-level (than is common these days) programming. To create an operating system from scratch you will need to learn Assembly; not hard-core learn everything, but know how things work, be able to read and write it, and most importantly, research. The bootstrap of an operating system is always a little bit of Assembly. You can put a lot of (non-interpreted) languages directly on top of that. C is the logical step, but you CAN do something else. When you're advanced, you can try writing an OS in Rust (the language). Heck, I'd like to try that myself. Anyway, whatever path you choose, I wish you a lot of persistence. Even if a project fails (and a lot of them will), or you lose momentum (or interest), value the knowledge you take with you; it will go into your next project. I remember I started programming around 14 (8th grade as well); now, 10 years later, it's my career and I still love it, and I still want to write OSes and game engines (it's still hard to do). And if by any chance you get to university and you hate studying computer science, don't worry, I did too (maybe my uni just sucks, I dunno, but at that point I did not really want to study anymore; I wanted to work in the field, badly, so I did). Just some encouraging words; you can skip the top part of the post, whatever, I just got a bit nostalgic and remembered how I started out.
  5. I have an old friend who was building a basic UNIX-like OS for a university course. You could use it as an example: https://github.com/haliucinas/Marox . I do not remember how he would run fast tests on it, though. Maybe QEMU? I don't really remember :'( . It's definitely an interesting project, building an OS that is, so I hope you have fun with it
  6. Decompiling might be of some use if that application is written for .NET, in a language like C# or VB, and that is assuming it is not purposely obfuscated with special tools that make decompiling difficult or impossible. EXE Explorer has some of that capability, but to really browse code you can use Visual Studio + ReSharper with its decompiler; that's your best bet. Otherwise, if a program is compiled to machine code, it's Assembly, no other way around. Decompilers for C and C++ produce much more crooked results than the ones mentioned above; they're not really that useful. EXE Explorer might still give you an idea of what resources are present in the executable, but the code is all up to you to read and interpret. Your best bet is to attach a debugger and go through the program step by step, tediously checking what each part is doing. You will encounter a lot of external/system calls, and you will need to recognize which is which; best have some table/list of them so you can do that. With enough time you can definitely reverse engineer important portions of it. That's how people who pirate software get past security measures, and how some modders have to mod their games (that is very hard to do for big games, though). You will learn a lot; if you're willing, go for it!
  7. If you're looking for log n, binary search is a good idea, and since you have prefixes for your numbers it may be a good idea to partition your data into multiple tables identified by the prefix. Depending on the variety of prefixes, do a lookup on the prefix first, then perform binary search on the specific table. It will work well if data across partitions is reasonably even; otherwise you may have a penalty on partitions that are larger than others. A solution to that is partitioning your data using hashes, but you can potentially get away with the numbers anyway. That's how some databases do it for data balancing between servers and fast (<1ms) access, and it adapts well to large amounts of data. Implementing this in-memory should give you the performance you want. This could also lend itself to some parallelism, though I don't know how well it would lend itself to just sequential reads.
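A minimal sketch of that partition-then-binary-search idea, in Python. The two-character prefix length and the sample numbers are assumptions for illustration only; tune the prefix length to however your real data is keyed:

```python
import bisect

PREFIX_LEN = 2  # assumption: the first two digits identify the partition

def build_partitions(numbers):
    """Group the data by prefix once at load time; keep each partition sorted."""
    parts = {}
    for n in numbers:
        parts.setdefault(n[:PREFIX_LEN], []).append(n)
    for p in parts.values():
        p.sort()
    return parts

def contains(parts, number):
    """O(1) prefix lookup, then O(log n) binary search inside that partition."""
    part = parts.get(number[:PREFIX_LEN])
    if not part:
        return False
    i = bisect.bisect_left(part, number)
    return i < len(part) and part[i] == number

numbers = ["3751112", "3759999", "8612345", "8600001"]
parts = build_partitions(numbers)
print(contains(parts, "3759999"))  # True
print(contains(parts, "8699999"))  # False
```

If the partitions come out badly skewed, swapping the prefix key for a hash of the number evens them out, at the cost of losing prefix locality.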
  8. Yes, simply use a comfortable client and you're all set. Another option is Postman; it started out as a Chrome extension and can be used standalone.
  9. I would consider that a functional requirement. If consistency fails, it will impact function: you will confuse the people that use it (and your own application), and that is quite undesirable.
  10. More on this topic if you're passing by and are curious why this happens. There is a nice video for it.
  11. The client always makes the call for data. But how fast should the change/notification be? Seconds? Milliseconds? That determines which method to use and how carefully you should code your backend. If it's seconds to tens of seconds, you're fine with simple repetitive calls that query for changes, though this has drawbacks. If it's milliseconds, you can use a method similar to what mobile phones use for push notifications; that is harder to implement, but it is a much more elegant and robust solution IMO.
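The "simple repetitive calls" option can be sketched like this in Python; `fetch_changes` stands in for whatever HTTP call your client would make (the function names and the fake event data are invented for the sketch):

```python
import time

def poll(fetch_changes, handle, interval_s=1.0, rounds=5):
    """Repeatedly ask the backend for anything newer than the last seen version.

    Fine for seconds-level latency; wasteful at millisecond scale, where a
    push-style channel is the better fit.
    """
    version = 0
    for _ in range(rounds):              # a real client would loop until stopped
        items, version = fetch_changes(version)
        for item in items:
            handle(item)                 # react only when something changed
        time.sleep(interval_s)

# Fake backend: three events become visible, one per poll.
events = ["temp=20", "temp=21", "temp=19"]

def fake_fetch(since):
    new = events[since:since + 1]
    return new, since + len(new)

seen = []
poll(fake_fetch, seen.append, interval_s=0.0, rounds=4)
print(seen)  # ['temp=20', 'temp=21', 'temp=19']
```

Note the version cursor: without it, every poll would re-deliver everything, which is one of the drawbacks mentioned above.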
  12. A website has as much potential downtime as it is complicated. A simple website can easily survive even a hosting service restart, because it takes less than a second, while a bigger project might be more dangerous to restart, causing all sorts of business losses along the way. If a system is complicated, there may be more ways than one to experience downtime. What I'm trying to say is that only a part of the system may experience downtime at one point while, let's say, the front-facing system is fine (ever seen a downtime message on some big site that still seems to function? Chances are they are doing maintenance in their backend, or something not nice is going on there; some features may not work while others work just fine). It's not just the database that can mean downtime; in some systems the data travels through many servers from backend to frontend, with all sorts of processing happening along the way, some of which has nothing to do with the database at all. It all has to do with the architecture of the system. With such large systems there are also cunning ways of avoiding downtime, dealing with large load and so on. Usually that means more than just pushing to master and immediately testing it on the site. I understand that some things may be harder to comprehend without witnessing them; it takes some time to learn :). That's a giant topic (or topics) that would take hours to explain in depth; seriously, you could write a whole book about that stuff. I bet people have. I did not read them, though; I learned all of that in the field, so I don't know where to point you :).
  13. It's definitely not "just as easily" with HttpListener; it will need some manual header fetching (and possibly parsing) to achieve this, but past that you should be able to find an easy way. SignalR might be a bit faster to deploy, because its API is mostly the same as ASP.NET's, if not identical, though I can't tell you for sure; I've never done anything authentication-related with it, so it might take some playing around.
  14. Get rid of standard MVC controllers and write an HttpListener server instead, or at least use SignalR; standard MVC sucks for performance. Otherwise, make sure your allocations are at a minimum, not just for the little performance gain in your case, but to avoid GC as much as possible, that kind of stuff. You get the idea; I don't think that's an issue for you.
  15. There is a video about it. It is 90% accurate; the remaining 10% was just not mentioned. It takes a huge amount of time to move software as huge as game engines to a new fundamental standard, so no wonder everything still sucks. Remember those first demos that were built around it? They rocked.
  16. The software/hardware relationship is pretty crap; what you're saying is partly true. I think currently it's more a case of "the more computing power we have, the more slack we can cut with efficiency and optimization, so it's faster to code", because bloatware is at its finest these days. If we kept old development standards we would have better software, but less of it, because it would be time-consuming to make.
  17. You can run it as a Windows service. You can write a Windows service installer in C++ (that's basically some boilerplate code) into your application, or you can use NSSM. EDIT: This will require Administrator account access, unless you have permissions to manage services. Another option is, instead of a console application, to make a Windows application that creates no windows; you could make it in a way that it will not appear at all.
  18. You'll stop fantasizing about how cool it is real fast once you dig into it. Romanticizing hackers is what breeds script kiddies. You're trying to start off on the wrong foot, and that will not be successful.
  19. Because every program needs to hunt for vampire numbers, right? Yes, on some workloads you can easily kill the JVM, for example. But I dare you to write huge business software in C and then write it in Java, C# or whatever; you'll notice how much of a win those languages are for these kinds of things. The time you save is ridiculous. The difference in performance is basically at the micro-optimization level for software like that (the JVM and .NET do micro-optimization at runtime, by the way; programs get more optimal as they run, as long as you didn't screw up something major, like your whole algorithm), not to mention you can definitely write great-performing software in those languages. And if the gap in performance is bigger and you need more heavy metal, business will buy some. Hardware is cheap; good engineers are not. EDIT: Not that I actually meant to call you out. But people should know this. There are objective reasons why kernels are written in C and most other software is not.
  20. Python for learning, definitely. Java is great to know if you're going to make somewhat more complex systems; it has historically been aimed more at enterprise stuff (and Android apps these days), and it's what I would use if I were to make a web API or some backend application in general. Not that Python can't do that, but I'd trust Java more. You would also be hurting yourself if you did not use a proper IDE with Java; I'd most likely avoid touching it without one, while you can easily use any text editor of your choice with Python, which is also better for the learning experience. If I were you, I'd ignore people trying to talk you out of high-level or interpreted languages. The thing is, not everything is worth the time of being written in C/C++/whatever-the-fetish. That's not to say those languages are not worth learning; do learn at least one if you're willing.
  21. The only "be warned" message I have is that if you're going to work with .NET, then ideally you want to pick a PC with Windows. There are ways around it, though; I use a Windows Server VM with Visual Studio and that's it, it's pretty fast, though there are some things to be wary of. Otherwise, these days it does not matter at all; just pick the one you're most comfortable with. MacBook Pros are great, though I do not fancy macOS too much; I am a Linux user at heart anyway. I found that business-class laptops, like ThinkPads, are great too. They failed with some things on the previous generation (biggest complaint: a mushy touchpad with buttons in the corners that I still hate and can't get used to after having it for a year), but they fixed them on the newest ones. Overall they are pretty awesome. I use mine with 2 docks, one at work and one at home, which is so neat that I don't want a laptop without this feature anymore. They also have great Linux compatibility; everything works out of the box (on Fedora at least). As for operating systems, whatever your "religion" is, just use that; obviously you are more efficient on it. That does not mean you shouldn't learn to use a new one, though. You should! I'm thinking of giving BSD a try; I have an old laptop that I can conveniently put it on.
  22. I personally find it way easier to call an API. My statement is based on some assumptions, though, mainly that universities/colleges may very well buy printers in bulk, so they may be the same model. But even if they are of different makes and models, having to tailor a ton of regexes is not that different from listing a ton of endpoints for all the models anyway. IMO, when using an API, the script is more likely to survive a firmware update of the printers than a regex check. It could also protect against a more horrible administration problem: when only part of the printers have their firmware updated, one part may have an updated GUI while another part does not. It becomes a less brittle solution, and I bet @GFC_ would like to spend as little time as possible maintaining that thing. @GFC_, you need to look at how the HTML/JS works for those pages and figure out which is the better solution for them; scraping is simpler, but more annoying in the long run. Another possible solution would be to use CUPS to maintain a list of those printers and do the interfacing with all those brands and makes for you, then base your script on its API (I bet you can retrieve all those metrics from CUPS). This would minimize your script to just calling a single API and making the report, with no multiple-model handling necessary.
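To make the API-vs-scraping tradeoff concrete, here is a tiny Python comparison. Both payloads are invented stand-ins (the "tonerLevel" field name and the HTML snippet are not from any real printer firmware):

```python
import json
import re

# Hypothetical responses from the same printer, for illustration only.
api_response = '{"model": "X-1000", "tonerLevel": 42}'
admin_page = '<tr><td>Toner level</td><td><b>42%</b></td></tr>'

# API route: one structured lookup; survives cosmetic GUI changes.
toner_from_api = json.loads(api_response)["tonerLevel"]

# Scraping route: a regex tied to the exact markup; breaks if a firmware
# update renames the label or restyles the table.
m = re.search(r"Toner level</td><td><b>(\d+)%", admin_page)
toner_from_html = int(m.group(1)) if m else None

print(toner_from_api, toner_from_html)  # 42 42
```

The one-liner versus the markup-coupled regex is the whole argument: the scraper works today, but every firmware GUI change is a potential breakage, multiplied by however many models are in the fleet.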
  23. You could analyze what REST calls the web page is making, if any. That would be even simpler and more robust than crawling over the whole page
  24. Create implicit conversion operator overloads in your Expression class:

```csharp
public static implicit operator string(Expression e)
{
    return e.GetExpression();
}

public static implicit operator Expression(string e)
{
    return Expression.CreateFromString(e);
}
```

Note that Expression creation is done by a static method, which I quite like in some cases; you can also just create an appropriate constructor and use new Expression() there instead. These changes should allow you to do this:

```csharp
Expression exp = "expression goes here";
string str = exp;
```

Note also that I wrote this from memory; Google your way out using this if in doubt. Generally I would not recommend doing this, as it makes code harder to read; you could use that static method I really like instead and you would be golden. But I trust you know the design issues and have thought about it, so go for it!
  25. Yeah, C++ is hard AF. It has its place in software development, a really strong place. It's a nice language to try and master, but as a first language it sucks balls.