C Programming

Br3tt96

According to the dictionary, that thing which defines (not undefines) terms, C can't be "low-level". This is because (as I've already quoted) it's defined in the context of an "abstract machine"; it can't be "close or native to a piece of physical hardware", by definition! It is, by definition, portable, until you break the rules (e.g. invoke undefined behaviour), and then you're not programming in C, but a subtly differe... I feel like I'm repeating myself! Ugh!

To be clear, as I've already stated, your compiler translates C source code to machine code. It's the machine code that has performance characteristics on your system. The C source code, however... want to see how it performs when it's not being translated to a different language? Want to see how it performs IN THE ABSTRACT MACHINE?

Get an interpreter... and you'll see, "fast" is not a part of the definition of C.

 

Quote

High-level languages, erected on a machine-language substrate, hide concerns about the representation of data as collections of bits and the representation of programs as sequences of primitive instructions. These languages have means of combination and abstraction, such as procedure definition, that are appropriate to the larger-scale organization of systems.


Combination, such as complex expressions involving an arbitrary number of subexpressions, and abstraction, such as fopen/fread/etc., which in languages prior to C had no standardised interface? According to SICP, a classic academic textbook written by two highly reputable professors of computer science, C is a high-level language!


What part of "The semantic descriptions in this International Standard describe the behavior of an abstract machine in which issues of optimization are irrelevant." do you not understand? Please pass the shroom juice!


44 minutes ago, Sebivor said:

What part of the rationale states definitively that the only reason or rationale for "undefined behaviour" is performance? Where did you get this from? Because literally none of my documents have it.

Plain common sense? I just gave you the clearest example one can possibly give.

 

Language A defines out-of-bounds array access. Language C says out-of-bounds array access is undefined.

Every implementation of language A will have to test each and every array access for a valid index; there is no other way to adhere to a standard that tries to define out-of-bounds access.

Language C need include no such tests, because its standard allows for such undefined behavior.

 

Ergo: any implementation of language A will be slower than it could have been when accessing arrays.

 

Almost everything that has to do with undefined behavior boils down to this: the compiler not having to add loads of checks and tests behind your back in order to adhere to a standard that tries to define everything. How can you define a divide-by-zero error? By checking for zero before the divide; all that defined behavior is never free!
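To make that concrete, here's a minimal sketch (the function names are hypothetical, and the 0 fallback merely stands in for whatever such a standard might mandate):

/* What plain C emits: just the divide. b == 0 is undefined behaviour,
   so no test is required. */
int udb_div(int a, int b)
{
    return a / b;
}

/* What a standard that DEFINES division by zero forces onto every call: */
int checked_div(int a, int b)
{
    if (b == 0)
    {
        return 0;   /* hypothetical mandated fallback */
    }
    return a / b;
}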


4 minutes ago, Unimportant said:

Plain common sense? I just gave you the clearest example one can possibly give.

 

Language A defines out-of-bounds array access. Language C says out-of-bounds array access is undefined.

Every implementation of language A will have to test each and every array access for a valid index; there is no other way to adhere to a standard that tries to define out-of-bounds access.

Language C need include no such tests, because its standard allows for such undefined behavior.

Ergo: any implementation of language A will be slower than it could have been when accessing arrays.

 

... unless the implementation can deduce that the bounds checking isn't necessary at compile time. As a trivial example, given `fubar`, an array of 2 of whatever element type you like, and `x`, which is rand() at any time, `fubar[x % 2]` doesn't need bounds checking, because a compiler could theoretically infer that the `% 2` reduces `x` to an index that's always in bounds anyway.
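In code, that looks something like this (a minimal sketch using the hypothetical names from above):

#include <stdlib.h>

int fubar[2];

int read_random_element(void)
{
    int x = rand();      /* rand() returns a non-negative int */

    /* x % 2 is provably 0 or 1, so even a bounds-checking
       implementation may elide its test here. */
    return fubar[x % 2];
}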

Nonetheless, this is an issue of optimization, and the section of the C standard which I quoted already states that those are irrelevant. Indeed, the behaviour is undefined, so bounds checking isn't required there. That doesn't prevent an actual implementation from implementing bounds checking, though!

... and such an implementation of C (with bounds checking) would be equally fast as one which doesn't, right?

No. You're confusing the implementation with the specification. Stop that. The courts don't care about implementations. If you don't read (and trust) the authoritative manuals that define the technologies you use, you might end up in trouble for it...


16 minutes ago, Sebivor said:

... unless the implementation can deduce that the bounds checking isn't necessary at compile time. As a trivial example, given `fubar`, an array of 2 of whatever element type you like, and `x`, which is rand() at any time, `fubar[x % 2]` doesn't need bounds checking, because a compiler could theoretically infer that the `% 2` reduces `x` to an index that's always in bounds anyway.

That argument quickly falls to pieces given more complex code. You cherry-pick the simplest of examples and run with it. If every index were derived that simply, programming would be simple. Try 'fubar[ComplexIndexCalculationDependingOnRuntimeData()]'. And again, array access is just one example; what about the divide by 0? The divisor is certainly a runtime variable; you would not hardcode a 0, right? What about null checking when dereferencing a pointer?
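For instance (a sketch; ComplexIndexCalculationDependingOnRuntimeData is the hypothetical function named above):

extern int table[100];
extern int ComplexIndexCalculationDependingOnRuntimeData(void);

int lookup(void)
{
    int i = ComplexIndexCalculationDependingOnRuntimeData();

    /* A language that defines out-of-bounds access must test i here on
       every call; C may simply emit the load, and the access is
       undefined whenever i falls outside [0, 99]. */
    return table[i];
}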

 

16 minutes ago, Sebivor said:

Nonetheless, this is an issue of optimization, and the section of the C standard which I quoted already states that those are irrelevant. Indeed, the behaviour is undefined, so bounds checking isn't required there. That doesn't prevent an actual implementation from implementing bounds checking, though!

You seem to have things reversed.

All these examples are undefined on the real machine. In order to define them, extra tests and checks must be performed. Undefined behavior allows the implementer not to include such checks. And indeed, every C/C++ compiler I'm aware of makes use of this and never includes such checks.

16 minutes ago, Sebivor said:

... and such an implementation of C (with bounds checking) would be equally fast as one which doesn't, right?

Of course it wouldn't. Are you trying to claim both these code snippets are equally fast?

extern int someArray[];
extern int runTimeVar;

int Accu = 0;
for (int i = 0; i < runTimeVar; ++i)
{
	Accu += someArray[i];
}

	//OR...

extern int someArray[ARRAY_LENGTH];	//hypothetical length constant a checking implementation must know
extern int runTimeVar;

int Accu = 0;
for (int i = 0; i < runTimeVar; ++i)
{
	//!!!TEST IMPLICITLY ADDED BY COMPILER IN ORDER TO
	//ADHERE TO A STANDARD THAT DEFINES OUT-OF-BOUNDS
	//ACCESS!!!
	if (i < ARRAY_LENGTH)
	{
		Accu += someArray[i];
	}
	else
	{
		//Do whatever the standard says should happen on out-of-bounds access...
	}
}

 

16 minutes ago, Sebivor said:

No. You're confusing the implementation with the specification. Stop that. The courts don't care about implementations. If you don't read (and trust) the authoritative manuals that define the technologies you use, you might end up in trouble for it...

No, you don't seem to understand that implementing a specification which defines things that are, by their very nature, undefined requires adding overhead.


By binding speed to the language, you're saying all existing C implementations are equally fast (they're not, and this is provable).

You're also stating that because most implementations don't perform bounds checking, all implementations must not perform bounds checking...

That or you're stating that the bounds checking has no overhead, because all implementations of C are equally fast, apparently...

Don't you see the flaw in your logic? The perceived speed is bound to your system, to your implementation, but isn't required by others.

Similarly, some implementations have bounds checking, and are permitted to do so because... undefined behaviour... the behaviour is undefined... they're free to do that...

... and so undefined behaviour doesn't necessarily make a language fast!

Imagine if someone asked you which is faster: French or English... It depends on who is speaking/writing/hearing/reading it, right?

Speed is bound to the implementation, NOT the language!


Quote

No, you don't seem to understand that to implement a specification that defines things that are undefined by their very nature requires adding overhead.

An implementation can't define the undefined. The undefined is, by definition, undefined. It's undefined by the C standard, and no single implementation can represent the standard.

 

It should be noted that some architectures execute JVM bytecode natively... as machine code... and so it should come as no surprise to you that Java is faster on those systems than on systems that don't have such native support! It might even be faster than C on those systems, since C would need to run within the JVM anyway...

 

Ohh, and that's also permitted. There are C compilers which target JVM bytecode (a small part of Java), raising the question: which is faster, C running on top of JVM bytecode, or... just... native Java? Hmmmm! Dilemmas!

Perhaps you can see now why I suggest that it's invalid (not incorrect, but invalid; the two terms are subtly different) to say that C is faster or slower than other languages. The assertion is invalid precisely because it's possible for it to be incorrect.


30 minutes ago, Sebivor said:

Speed is bound to the implementation, NOT the language!

I'll try to make it clear with the simplest example I can come up with, and that will be the last of it:

 

- The language says that accessing an array out of bounds must result in an IndexError being thrown (as per Python)...

- Accessing an array out of bounds is undefined on a real computer. No one knows what happens if you do an out-of-bounds access on the real computer. You might overwrite another variable of your own program, you might trash the stack, you could access memory that does not belong to your process, resulting in a segfault. No one knows beforehand, partly because it depends on the runtime environment.

- In order for the implementer to implement this language on a real computer, he must add checks to catch any out-of-bounds access before it happens. If he does not, then what happens on the real computer is undefined, as we just saw. Thus, because the language defines what must happen in a situation that is undefined on the real computer, the implementer must add these checks to prevent the situation from occurring.


I'll try to make it clear with the simplest example I can come up with, and that will be the last of it for me, too:

Python says that accessing an array out of bounds must result in an IndexError being thrown.

C has no such requirement. An implementation may implement a similar bounds checking interface, but the behaviour is undefined, so you shouldn't rely upon it.

Is Python faster or slower than a C implementation that has a similar bounds checking interface?

By the way, common implementations of C can be told to check bounds explicitly!

 

Here's gcc's manual. Search for "-fsanitize=address", and you'll find:
 

Quote

Memory access instructions are instrumented to detect out-of-bounds and use-after-free bugs.



llvm/clang has a similar feature.
 

Quote

If a bug is detected, the program will print an error message to stderr and exit with a non-zero exit code. AddressSanitizer exits on the first detected error. This is by design:

  • This approach allows AddressSanitizer to produce faster and smaller generated code (both by ~5%).
  • Fixing bugs becomes unavoidable. AddressSanitizer does not produce false alarms. Once a memory corruption occurs, the program is in an inconsistent state, which could lead to confusing results and potentially misleading subsequent reports.
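As a concrete illustration (a hypothetical file; the compile line combines the real -g and -fsanitize=address flags discussed above):

/* asan-demo.c: compile with
       gcc -g -fsanitize=address asan-demo.c
   and AddressSanitizer reports the heap-buffer-overflow below. */
#include <stdlib.h>

int main(void)
{
    int *p = malloc(10 * sizeof *p);
    int v = p[10];   /* one past the end: undefined behaviour in C */
    free(p);
    return v;
}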


This, yet again, raises the question:

Is C code compiled with bounds checking the same speed as C code compiled without? If you answer "no" to this, then you can't say that C has a definitive speed, because you've just admitted that the speed varies.

Is C code executed with an interpreter such as Ch the same speed as C code compiled to machine code by an optimising compiler and then executed natively? Again, if you answered "no" to this, then you've just admitted that "speed" is an attribute of the "implementation", and not of the "language".

 

Quote

he must add checks to catch any out-of-bounds access before it happens. If he does not, then what happens on the real computer is undefined, as we just saw.


You can't read, can you? It's not undefined, as Python has no undefined behaviour. Such an implementation would be just... not... Python...


@Sebivor  You can't convince a true believer.  If they can't see the inherent contradiction in saying Turing complete languages operate at different speeds when the fundamental core of Turing completeness is recursive equivalency...


6 minutes ago, Yamoto42 said:

@Sebivor  You can't convince a true believer.  If they can't see the inherent contradiction in saying Turing complete languages operate at different speeds when the fundamental core of Turing completeness is recursive equivalency...

Who needs standards and Turing completeness? That's all boring! Let's go shopping and find something to drink!

No, not by using our reading comprehension skills! I'd rather guess whether it's safe to drink than inspect the bottle for warning labels!


4 minutes ago, Sebivor said:

Is C code compiled with bounds checking the same speed as C code compiled without? If you answer "no" to this, then you can't say that C has a definitive speed, because you've just admitted that the speed varies.

I think we have a grave misunderstanding here.

The only point I'm trying to make is this:

The Python language requires this bounds check to always be there. The C language does not. If you enable the bounds check in C, it might be just as slow as Python, or even slower, who knows. The point is that the C standard allows for it to be turned off (by default).

 

Nothing more, nothing less.


Quote

The Python language requires this bounds check to always be there. The C language does not.


... and, back to the original topic, this is exactly why I recommend languages that don't have undefined behaviour to students who are new to programming; this is why I recommend JavaScript and Python. Undefined behaviour is often hard to come to grips with, even for advanced programmers.

Often, to complete beginners, speed is not so important. Perhaps more important is marketability. I'd suggest that JS is far more marketable than C...


32 minutes ago, Unimportant said:

The C language does not.

 

32 minutes ago, Unimportant said:

The point is that the C standard allows for it to be turned off (by default).

The key word there is ALLOWS.  It is undefined behavior by the standard.  As soon as you use the word "default", you have defined it.  Undefined by the standard is absence of evidence, not evidence of absence.

 

26 minutes ago, Sebivor said:

Often, to complete beginners, speed is not so important. Perhaps more important is marketability. I'd suggest that JS is far more marketable than C.

Ultimately, there is no one tool for the job.  The three properties of a product are fast, cheap, and good.  Realistically you'll at best get to choose 2... typically 1 and a half...

C does teach you a lot about how a computer works.  Definitely more so than any other high-level language.  Unfortunately, to use your own example, that includes bounds checking arrays.  I wouldn't even pay minimum wage for a C developer that skipped their own bounds check in a non-trivial situation... and because we need to prevent such undefined behavior, that leads to a lot of what is essentially redundant boilerplate code.  It leads to frequently re-inventing the wheel.

While I would highly advocate learning C (if you're a serious developer), I would recommend C++, Java, C#, or Python first, and THEN learning C.  You don't have to be able to build a transmission to be an excellent driver... but if you later (after you've gotten your feet a little wet just driving around town) want to, I highly recommend at least learning how transmissions and differentials work, or even what they do.

You don't learn English by translating the complete works of Shakespeare from their original Klingon.  That comes later.  At least learn to say "Hello World" first.

Similarly, knowing and understanding something does not in any way imply that the solution will be correct.  The Heartbleed bug is a good (potentially catastrophic) example of this.

But I digress... C does teach you a lot of things, and knowing what's going on under the hood can be extremely helpful for debugging and finding optimizations in other languages... and even for that alone I would recommend learning it.  But even then, I can't say I would use it for a new project if C++ were available.


2 hours ago, Yamoto42 said:

The key word there is ALLOWS.  It is undefined behavior by the standard.  As soon as you use the word "default", you have defined it.  Undefined by the standard is absence of evidence, not evidence of absence.

The actual problem is that certain operations have a hard-to-predict, circumstantial result when performed on the real hardware. I'm talking assembly/machine level here; no higher-level programming language is involved yet.

 

Humor me and imagine you write an assembly program with a bug in it that writes to a "wrong" memory address (a memory address you did not intend to write to). Whatever is going to happen is hard to predict. It depends on what address you overwrote, what you overwrote it with, and whatever the memory layout happens to be at that moment. It could be another variable of your own program, causing a bug somewhere else in your program. It could be the stack, probably causing the next 'return' instruction to return to a completely wrong address. It could be memory belonging to another process; whatever happens then depends on how the hardware/OS handles illegal memory access... you could say the result is undefined. (Or at least so hard to pin down, and so dependent on lots of other things, that it's as good as undefined.)

 

I hope this makes it very clear that "undefined" exists at the lowest level of the machine, at the assembly level. The simple machine operation of writing some value to some wrong memory address can result in loads of hard-to-predict things. This is what I meant by "undefined by its very nature".

 

If you now lay out the rules for a higher-level programming language, you can do 2 things:

  • Not define what must happen in this situation, and just live with the near-unpredictable behavior that is already there. This allows any implementer to simply compile your code as-is and not include any extra checks. Notice I said allows. An implementer could choose to add checks anyway, but they're allowed not to, and as a result most of them don't by default.
  • Define the situation, and thus mandate that all implementers add checking code to catch any such events before they can happen and perform the defined behavior instead. They must do this in order to adhere to the standard; they're not allowed to skip the checks. There is no other way to make this situation, which is near-unpredictable at the very machine level, predictable than by adding checking code that tests for such events before they happen.

 

As a result, the first language has the potential to be faster because it is allowed to skip the checks.
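To make the second option concrete, here's a minimal sketch for the null-pointer case (hypothetical function names; the -1 stands in for whatever such a standard might mandate):

#include <stddef.h>

/* What a language that defines the null-dereference case forces into
   every dereference the compiler cannot prove safe: */
int deref_defined(const int *p)
{
    if (p == NULL)
    {
        return -1;   /* hypothetical mandated fallback */
    }
    return *p;
}

/* What C is allowed to emit: just the load. Dereferencing a null
   pointer is undefined, so no test is required. */
int deref_undefined(const int *p)
{
    return *p;
}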

 

Please read this post properly; I'm trying to be as clear as I can possibly be.

That is all I meant - I don't know where all the confusion is coming from :(

 

And, if you won't believe me, here's a blog post from the LLVM compiler toolchain project: http://blog.llvm.org/2011/05/what-every-c-programmer-should-know.html

Read "Advantages of Undefined Behavior in C, with Examples" specifically.

I guess the compiler makers are a credible source, no?

 

 




Quote

Humor me and imagine you write an assembly program with a bug in it that writes to a "wrong" memory address

This is what happens when an assembly programmer assumes similarities between C and assembly...

Enjoy!


55 minutes ago, Sebivor said:



This is what happens when an assembly programmer assumes similarities between C and assembly...

Enjoy!

 

I think you misunderstood. With "wrong address" I literally meant a wrong address, such as writing to address X + 100 instead of X + 10 because you made a typo. I did not mean an illegal address per se. (But it could be one; who knows what is sitting at that address. That's the point.)


2 hours ago, Unimportant said:

Please read this post properly; I'm trying to be as clear as I can possibly be.

That is all I meant - I don't know where all the confusion is coming from :(

I do understand what you are saying.  And no reasonable implementer of the C language would perform a bounds check on the array access... or, for that matter, differentiate it from pointer arithmetic... much like no reasonable C programmer would fail to take the time (read: development cost) to write the check themselves before making the access in any non-trivial situation.

But a rule of thumb is not necessarily technically correct.

2 hours ago, Unimportant said:

As a result, the first language has the potential to be faster because it is allowed to skip the checks.

The thing is this:  Speed is in no way an attribute of the language.  It is entirely an attribute of implementation.

I keep bringing up Turing completeness because its core concept creates a contradiction when speed is considered an attribute of the language.

If a language, let's call it A (C wat I did thar?), is Turing complete, then by the definition of Turing completeness I can write an interpreter for A in the A language itself... That quality is exactly what lets you write a C compiler in C.

 

Now, if you write a program in A, compile it with Richard M. Stalin's latest gaa compiler, and also run the source through my A interpreter, then only two results are possible:

They finish at the exact same time, or one finishes before the other (it doesn't matter which one is first).

Now, since my A interpreter is effectively doing run-time compilation AND execution, it is performing more computation than the output of gaa.

From this: if the programs complete at the same time, then since the interpreter's work includes the full execution of the program plus compilation overhead, that extra work must have taken zero time.

Both execute your program A, but one has run-time compilation overhead.

A * n = A * (n+x) -> A = 0... so I feel comfortable throwing that case out and saying they won't finish simultaneously.

 

But what happens if they don't finish simultaneously?  Well, if speed is an attribute of the language and not the implementation, you are effectively saying A != A.

//========

 

In practice, however, the latter will tend to be true.  They will not finish simultaneously.  However, it is not a contradiction, because the difference comes from the native-code vs. interpreted IMPLEMENTATIONS.

In fact, it's largely a moot point, because my computer doesn't speak C, Java, or Python.  It only speaks x86... if anything, we're debating compilers more than languages...

But as a rule of thumb, I fully agree that a program written in C will halt faster than a program written to perform the same task in Java or Python... but the key phrase is "rule of thumb".


21 minutes ago, Yamoto42 said:

But what happens if they don't finish simultaneously?  Well, if speed is an attribute of the language and not the implementation, you are effectively saying A != A.

I think I understand what you mean. You're basically saying that if a program, written in some language, is implemented in 2 different ways (for example, one interpreted and one compiled), and both are not equally fast, then the speed difference has to be in the implementation, not the language. That is, of course, true.

 

What I'm saying is: if you implement a program twice, in exactly the same way but with 1 single difference, one version with extra safety checks and one without, then the version with the extra safety checks is guaranteed to be slower. Now, if those safety checks are mandated by the language, then it follows that the language could have been faster without them, for every implementation. So the fastest implementation could still have been faster without them.

 

So yes, it comes down to the implementation, but only because the language demands the behavior from the implementer, which has been my point all along.


@Unimportant And I do agree, as a rule of thumb.

J = C+x && J = C <-> x = 0

But the C STANDARD does not guarantee x = 0, or that x > 0, or x < 0.  Undefined literally means we know nothing about x.  It could be negative, making C slower, for all the language should assume.

And you'd have to be a certain kind of stupid to use an implementation of C where x <= 0... which is why, while it's not TECHNICALLY correct, I do agree in general.


39 minutes ago, Yamoto42 said:

Undefined literally means we know nothing about x.

Just a nitpick, but undefined behavior is only applicable when your program does certain things it should not be doing in the first place. A correctly written program never invokes undefined behavior, so everything is defined.


On 9/13/2017 at 5:38 PM, Erik Sieghart said:

...but really, if it takes your program 1/50th of a second or 1/5000th of a second to execute, does it really matter?...

Not intending to jump in to a discussion that I'd much rather read from the sidelines, but I think Grace Hopper would have something to say to you about that ;)

 


Quote

And no reasonable implementer of the C language would perform a bounds check on the array access...

What do you think of this llvm/clang manual regarding AddressSanitizer, and this gcc manual regarding the (equivalent/virtually identical) -fsanitize=address? Note that you should probably also use the -g flag, to ensure debugging symbols are added, to make maximum use of this. The reason for such an implementation ought to enlighten some: it's so you can determine, during testing, whether your programs invoke undefined behaviour.


 

Quote

if you implement a program twice, in exactly the same way but with 1 single difference, one version with extra safety checks and one without, then the version with the extra safety checks is guaranteed to be slower.

Yes. If you cut corners, you can write marginally faster software. The flip side of that is, if you cut corners and get it wrong, you can't really boast about the marginal performance of something that doesn't work correctly... right?

So you intend to answer the age-old question: "Which do I want? A heartbleed which could be minutely faster, or a non-heartbleed?"... Good luck with that!
 

 

 

Quote

the language demands the behavior from the implementer

There are occasionally such strict demands in some languages, but more likely, the language requires that the observable behaviour produced be the same as it would be in the abstract machine (which is much more like an interpreter). Obviously, optimisations are permitted, so long as the observable behaviour of compliant programs remains valid.

 

If a bounds-checking compiler can deduce that no overflow is possible, then the bounds-checking code can be omitted as an automatic optimisation. In such circumstances, such a compiler should produce machine code that's identical, right? To get an idea of what our common compilers can optimise nowadays, here's a (very outdated) exercise for you... Keep in mind, that was written in 2010. It's no doubt due for an update if anything has changed, though I'd expect compilers to get even more sophisticated over time, as opposed to regressing.

In a similar situation, some implementations of Java (a "managed language") don't perform garbage collection, for example, even though the language "demands" it... well, that's because an Oracle manual says "The Java programming language does not specify how soon a finalizer will be invoked, except to say that it will happen before the storage for the object is reused." Thus, Java might be just as fast as C on some systems because aside from JVM bytecode being native to the CPU, it's also permitted to indefinitely stall on cleaning up memory!

I'm glad to see you're coming to terms with this... :)


Wow this got heated.

 

Though I did see the whole "C teaches you how computers work" point. But I fail to see how this is important from a general software development standpoint. Other than a super-high-level explanation of "a computer is going to do exactly what you tell it" (which actually isn't even the case anymore), there's not much else you really need to know about how a computer works. If your problem requires extremely high performance, then sure, you can start poking into how your computer architecture of choice works. But if all you want to do is write bash scripts, then does it matter if the computer is RISC or CISC? Von Neumann or Harvard? Whether you're speaking to Intel or ARM?

 

All in all, software development is solving a problem. Design your software soundly first before worrying about the implementation.


4 hours ago, M.Yurizaki said:

does it matter if the computer is RISC or CISC? Von Neumann or Harvard? Whether you're speaking to Intel or ARM?

Design your software soundly first before worrying about the implementation.

There is, of course, more than your typical PC platform. You'll often find a Harvard architecture on microcontrollers, simply because those platforms tend to have different memories for program and data anyway: some form of permanent storage, such as flash, for program memory, and SRAM for data.

 

On some of these platforms, such as Atmel AVR, this fact is not abstracted away for you by the available compilers. Reading data from program memory (a string literal, for example) requires explicitly coding a program-memory access; otherwise you'll be accessing the RAM address the pointer happens to hold. It's up to the programmer to keep track and access the right memory.
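For example (a minimal sketch assuming avr-gcc/avr-libc, where PROGMEM and pgm_read_byte come from <avr/pgmspace.h>):

#include <avr/pgmspace.h>

/* The string lives in flash; it is NOT copied into SRAM at startup. */
static const char greeting[] PROGMEM = "Hello";

char first_char(void)
{
    /* WRONG (classic AVR bug): greeting[0] would read SRAM at the
       flash address, returning whatever happens to sit there. */
    /* return greeting[0]; */

    /* RIGHT: explicitly read from program memory. */
    return pgm_read_byte(&greeting[0]);
}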

 

Even on your typical PC platform there are issues you'll run into with any language, simply because, no matter the amount of abstraction, eventually the program has to run on a real machine:

  • Why is my multi-threaded code slower than a single-threaded solution, even though the problem is perfectly divisible and there is nothing wrong with my code? (false sharing; see the sketch below)
  • Why is my linked-list solution slower than a simple array, even though the array solution has to move a million elements when inserting in the middle? And why does this behavior totally reverse on a machine with a different architecture? (linked-list traversal cache-miss frenzy vs. a cacheless machine)

...and many others.
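The false-sharing sketch promised above (a minimal illustration, assuming a 64-byte cache line; the struct names are hypothetical):

/* Two counters written by two different threads. */
struct counters_shared {
    long a;   /* thread 1 increments this...                        */
    long b;   /* ...thread 2 increments this; both fields typically
                 share one cache line, so every write by one core
                 invalidates the other core's copy                  */
};

struct counters_padded {
    long a;
    char pad[64 - sizeof(long)];   /* push b onto its own cache line */
    long b;
};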

 

One could, of course, simply accept the existence of such issues, and their solutions, without ever knowing the how or why, but I've found that simply doing things without understanding why is not a recipe for success. Some such issues have to be considered from the initial design, because not everything can be simply refactored (read: at low cost) when the behavior of the real machine clashes with the theoretical model. False sharing, for example, can require a serious redesign of how the data is laid out (and, by extension, possible redesign of a large section of the class hierarchy).

 

I also find that, if you're one of the people who have learned things the hard way, by muddling through the assembler and/or some low-level languages, it's a bit too simplistic to look back and say: "What a mistake that was; this whole ordeal would've been so much simpler if only I had chosen one of these new, simple languages". Perhaps the fact that you climbed that mountain by hand and are now standing on the top is what gives you this clear view in the first place?

 

I learned everything the hard way; there simply wasn't anything else back then. You had terribly slow BASIC or assembler on your Commodore, and compilers were an unaffordable commercial product. And given the choice, I'd do it all again. It's the journey that shapes you, not the result (a pile of mostly useless programs, written in order to learn, that probably could've been written much faster with one of these newer languages).

 

5 hours ago, Sebivor said:

I'm glad to see you're coming to terms with this... :)

 

I've not come to terms with anything. I simply gave up debating the matter.

