As a software developer, I've come across things people say that annoy me, because they often don't match reality:
Software development is "easy"
As with any other skill, it looks easy not because it actually is, but because practitioners have built up the experience and skills needed to make it look effortless. If it were truly easy, you, as a layperson, would be able to do it just as well.
Software is built from start to finish in one go, e.g.: "Day one patches are dumb"
In software development land, this is known as the "waterfall model." Commercially developed software almost never follows this process end to end. It might describe the coding phase itself, once the work is in the hands of the people actually churning out code, but the project as a whole uses other models. After all, if you're making a game and all the concept art and storyboarding is done, those creative people likely aren't the same people coding the game. That might be the case in a smaller studio, but not at a AAA studio.
Typically what's done is some variation of the incremental or iterative build model, which usually ends up as Agile software development.
This is why games have things like day-one patches or DLC. Before a game can be formally released, it has to go through a validation process. Instead of having the people working on the game sit on their thumbs during that time, why not have them keep working in the meantime and release the results later? And since patches and DLC often go through a less stringent validation process, they can ship much faster.
Software is released the moment the last line of code is written and the application is built
Any developer worth their salt has a process in place once the final build is made. That process involves testing the heck out of the application to make sure it works, that everything it needs to do is done, and that it doesn't break other things in horrible ways. Only once the final build has passed all these checks can it be released.
Granted, it may not feel like this in certain cases, but it would be silly to release the final build without doing some sort of testing.
"But that problem should've been seen a mile away!"
Have you ever proofread a paper you wrote for the hundredth time and still somehow missed a simple spelling or grammar error? The same principle applies here.
This is on top of some codebases being huge: hundreds of thousands to millions of lines of code. Chances are you're not touching every bit of it, but laser-focused on only some parts. Or you're so focused on solving one problem that you don't see there's a problem in another area.
It's basically a similar thing to what this video attempts to point out:
"How can X have so many problems?"
This is sort of an umbrella. An example I can think of is Windows Update. Yes, it's become a meme that Windows updates are unreliable and can break your system, but at the same time, Microsoft has to deal with updating hundreds of millions of instances of Windows, likely across millions of different configurations, not just of hardware but of software as well. Expecting a 100% success rate is absurd. Also, given the install base of, say, Windows 10, if we take Microsoft's figure of 900 million "Windows devices" (https://news.microsoft.com/bythenumbers/en/windowsdevices), even a million people affected by a problem is less than 1% of users. A million people is a lot. Less than 1% of the user base isn't.
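A quick back-of-the-envelope check of that percentage, using the 900 million figure from the article and a hypothetical one million affected users:

```python
affected = 1_000_000        # hypothetical number of users hit by a bad update
install_base = 900_000_000  # Microsoft's "Windows devices" figure

pct = affected / install_base * 100
print(f"{pct:.2f}% of users affected")  # 0.11% of users affected
```

So even a headline-grabbing million affected users works out to roughly a tenth of a percent of the install base.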
Basically, the pool of users Microsoft has to deal with is so large, and each device is used so uniquely, that the sample size alone makes the probability of any given problem showing up somewhere effectively 100%.
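That "effectively 100%" claim can be sketched with basic probability. The per-device failure rate below is a made-up illustrative number, not a real statistic; the point is that even a one-in-a-million bug is all but guaranteed to surface across 900 million devices:

```python
n = 900_000_000  # devices, per Microsoft's figure
p = 1e-6         # hypothetical: bug triggers on 1 in a million devices

expected_hits = n * p            # expected number of affected devices
p_at_least_one = 1 - (1 - p)**n  # probability the bug appears at least once

print(expected_hits)    # 900.0
print(p_at_least_one)   # so close to 1.0 it rounds to exactly 1.0
```

With those numbers, the chance of the bug *not* appearing anywhere is about e^-900, which is indistinguishable from zero, while around 900 real people still get bitten.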
You try making software that works on nearly a billion devices, in countless configurations, each used in its own unique way, without a single problem.