
Is hardware becoming too complex for the average programmer to take advantage of?

With the advent of the modern multi-core, multi-threaded CPU, GPU integration, and so on, has our computing hardware outstripped our capacity to reasonably take advantage of it?

 

We already know that very few programs out there are designed to address more than one core, or even one thread, at a time, and all these extra streams we have for processing data are, for the most part, treated as completely separate resources (as if each process were running on its own little computer).  Sure, queueing helps a little, but can you really call that using the technology the way it was meant to be used?  And even when programs are "designed for multi-threading", they're often simply a bunch of smaller programs sandwiched into one interface, each doing its own thing while giving the illusion of integration.

 

When, if it's even possible, will we catch up to the hardware we've created, and what will it take to do so?  Perhaps it's time to focus on creating a "super-language" in which we tell the computer what we want it to do and let it decide how to translate that directly into machine code, using whatever instruction set the processor requires.


Short answer: I think tools will eventually fall into place that make working with new features easier. The long answer continues below.

 

 

The trend I've noticed in studying computer architecture and computer history is that hardware usually outstrips the pace at which we can use it easily. When computers first existed we programmed them in binary, then assembly, then progressively higher-level languages such as Fortran, C, C++, Python and so on. Each language adds more layers and hides things underneath that we don't have to worry about as much.

 

In a way, hardware follows the same pattern. With a bucket of NAND gates you can build adders, logic units, memory units, and other supporting hardware until you get a computer. Once you have a working computer, you are essentially modifying things to make better use of the resources it has and to run at a higher clock speed, for example by implementing pipelining or hyper-threading. These add-ons increase the effectiveness of the computer in exchange for complexity in design and complexity in use. It's like how you could probably memorize the ARM instruction set, but the x86-64 instruction set fills multiple thick volumes. I doubt anyone has the Intel instruction set memorized, and I doubt many people without a professional need ever look at the manuals.
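
Just to make the "bucket of NAND gates" point concrete, here is a rough C sketch of a 1-bit full adder built from nothing but a NAND primitive (the function names are mine, purely illustrative):

#include <stdio.h>

/* Everything below is built from this single primitive. */
static int nand(int a, int b) { return !(a && b); }

/* Derived gates, each expressed only in terms of nand(). */
static int not_(int a)        { return nand(a, a); }
static int and_(int a, int b) { return not_(nand(a, b)); }
static int or_(int a, int b)  { return nand(not_(a), not_(b)); }
static int xor_(int a, int b) { return and_(or_(a, b), nand(a, b)); }

/* 1-bit full adder: sum and carry-out from a, b and carry-in. */
static void full_adder(int a, int b, int cin, int *sum, int *cout)
{
    int p = xor_(a, b);
    *sum  = xor_(p, cin);
    *cout = or_(and_(a, b), and_(p, cin));
}

int main(void)
{
    int sum, cout;
    full_adder(1, 1, 1, &sum, &cout);   /* 1 + 1 + 1 = 3 -> sum 1, carry 1 */
    printf("sum=%d cout=%d\n", sum, cout);
    return 0;
}

Chain enough of those together and you're on your way to an ALU; everything above that is the stack of conveniences described here.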

 

It is my understanding that higher-level development systems, or at least new tools that tackle the problem differently, help alleviate this issue. You want loops in your program? Have a compiler write the assembly for you. You want to do floating-point division and only have an adder? Let the compiler figure it out. Want to write multi-threaded code? Switch to a language that makes it more straightforward (I've been meaning to do this for a while). And if you need to use a GPU for computations, there are developments coming down the pipe that will hopefully make working with them as simple as writing whatever language you already work with.

 

And this isn't limited to just computational code. I'm sure the number of things that can be interfaced with through Python code would seem staggering to someone decades ago who made their living working with bitstreams and logic operations in C or 'worse'.



We (consumers) are already taking advantage of multiple cores, in a way. Distinct programs can now genuinely run concurrently, instead of being interleaved sequentially in a way that merely gives the illusion of running at the same time. And, if the program can afford it (if there are no shared variables whose access must be protected), running different processes and making them communicate (via pipes, shared memory, system queues or even sockets) is also acceptable and still takes advantage of the multiple cores/threads.
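
As a minimal sketch of that "separate processes talking to each other" approach, here's a POSIX fork-plus-pipe example in C (error handling mostly omitted, and the message is just a placeholder):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    pipe(fd);                       /* fd[0] = read end, fd[1] = write end */

    pid_t pid = fork();
    if (pid == 0) {                 /* child: does some "work" and reports back */
        close(fd[0]);
        const char *msg = "result from child";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        _exit(0);
    }

    /* parent: free to run on another core while the child works */
    close(fd[1]);
    char buf[64];
    read(fd[0], buf, sizeof buf);
    close(fd[0]);
    waitpid(pid, NULL, 0);

    printf("parent got: %s\n", buf);
    return 0;
}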

 

But what the consumer uses is not always the state of the art (especially in software terms). We can see an example of this in IPv6. The first RFC (request for comments, a document "describing methods, behaviors, research, or innovations applicable to the working of the Internet and Internet-connected systems" (from Wikipedia) that is sometimes made into a standard) describing IPv6 was published in 1995 (if Wikipedia is not mistaken), but only now are we seeing it applied, and the process will take some years to complete.

 

In the same way, there are many technologies to help programmers execute their programs on many cores/threads, and they too have been evolving. First we started with process forking and threading, aided by semaphores or mutexes to provide mutual exclusion. Now we have technologies like OpenMP that do all of the dirty work of creating threads and controlling access to shared variables; the programmer just annotates the code with pragmas (available, in this particular case, in C, C++ and Fortran). These are aided by more recent technologies (previously available in database systems) such as Transactional Memory, which takes away the low-level details of worrying about where to put locks to avoid deadlocks while achieving high concurrency: the programmer just specifies that a certain area of code is atomic and lets the TM worry about the rest.
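
For instance, with OpenMP the "just annotate the code with pragmas" part can be as small as this (a sketch; compile with something like gcc -fopenmp, and the arrays are placeholders):

#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void)
{
    static double a[N], b[N], c[N];
    double sum = 0.0;

    /* Each iteration is independent, so OpenMP can split the loop
       across however many threads/cores are available. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        c[i] = a[i] + b[i];
        sum += c[i];
    }

    printf("threads available: %d, sum = %f\n", omp_get_max_threads(), sum);
    return 0;
}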

 

However, most of this is still just research (my master's thesis is actually about Software Transactional Memory without read sets), but some of it is already tested and ready to be used in real-world applications. The problem remains on the consumer end: changing what already exists to use the full potential of multiple cores is not easy, and training software engineers who don't have experience with concurrent systems is a lengthy and expensive task for companies (hence the need for easier interfaces, compiler support and so on). And, truth be told, even though TMs are a good idea, the interfaces available are still not suitable for naive users. There are some doctoral theses and research into TM systems that are integrated into the Java runtime and compiler, for instance, but most of them are separate libraries. And then there are other technologies, such as Thread Level Speculation, that let the compiler do things like divide the iterations of a for loop among several threads in order to speed up the process, but that is still being researched.
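
To make the "mark a region atomic and let the TM worry about it" idea concrete, one interface that already ships is GCC's experimental software TM extension (enabled with -fgnu-tm). A toy sketch, with an example entirely of my own invention rather than from any of the systems mentioned above, might look like this:

/* Compile with: gcc -fgnu-tm tm_demo.c */
#include <stdio.h>

static long balance = 1000;

void transfer(long amount)
{
    /* No explicit locks: the TM runtime makes this region atomic
       with respect to other __transaction_atomic regions. */
    __transaction_atomic {
        balance -= amount;
        balance += amount / 2;   /* arbitrary "work" on shared state */
    }
}

int main(void)
{
    transfer(100);
    printf("balance = %ld\n", balance);
    return 0;
}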

 

And, like many other features developed in software, TMs too are moving from being software based to hardware based. Haswell, for instance, has instructions that allow the use of a Hardware Transactional Memory system. All you need is for the compiler to generate the necessary code based on the annotations in the source code (although I think Intel's is more suited for short OS transactions than for general use). There has always been this cycle: first there is a software solution for something, then a hardware chip is added to the motherboard to help the CPU with those particular operations, then the CPU integrates that circuitry and makes it more performance-optimized, which leaves space for the next development. In the case of multi-core, processes and threads were already being used by the operating system; it was the increasing difficulty of putting more transistors into a chip and raising clock speeds that ultimately led to the inclusion of more than one core in a CPU.
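
On the hardware side, Haswell's RTM instructions are exposed through compiler intrinsics. A heavily simplified sketch (my own example, built with gcc -mrtm, and it only actually executes on a CPU with TSX enabled; a real implementation also needs a proper lock-based fallback) looks something like this:

#include <stdio.h>
#include <immintrin.h>

static long counter = 0;

void increment(void)
{
    unsigned status = _xbegin();
    if (status == _XBEGIN_STARTED) {
        counter++;          /* runs as a hardware transaction */
        _xend();
    } else {
        /* Transaction aborted (conflict, capacity, ...): fall back.
           A real fallback would take a conventional lock; simplified here. */
        counter++;
    }
}

int main(void)
{
    increment();
    printf("counter = %ld\n", counter);
    return 0;
}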

 

So hardware and software have stayed more or less at the same level. Until recently, though, software developers relied on hardware manufacturers increasing the speed of CPUs, which automatically led to increased performance of their applications; now they are expected to optimize their code to take advantage of multiple cores. That's why the improvements are not yet apparent.


As mentioned before, it is always the same: we get new technology, then we come up with new tools and more abstraction levels. But parallelization will still be one more aspect we have to worry about when creating software. Automatic parallelization is a field of research, but it won't save us from thinking about parallel algorithms.

In general, it is always getting more complicated to create software at a commercial level (from a quality/demand point of view). Just think about the history of video games, operating systems, etc. In the beginning it was possible to create a worldwide successful program with two or three geeks. Nowadays you need hundreds of highly educated software engineers and programmers to create something like Windows, which is then laughed at by every 13-year-old.
 

I guess it will become necessary to specialize more and more, depending on the architecture you're writing for and the requirements of the software. Just think about security-critical applications: it is so much harder to prove a parallel program correct than a purely sequential one. Another example is real-time systems: how can you prove how long a certain part of a program will take at most when multiple threads can block each other over resources?

 

With all that in mind, I personally think it will take some time (or should I say even more time, considering how long we have already had multi-core/multi-processor systems) until (nearly) all applications are written to be optimized for parallel architectures.


Sorry if this repeats some of the above posters' views, but I'm just going to add my 2 cents. I'll try to keep it short and give a few examples of why.

 

Multicore issues:

When programming, certain problems don't lend themselves to being multithreaded. Imagine the Fibonacci sequence, F(n) = F(n-1) + F(n-2). To calculate the 10th term you need to calculate the previous ones. It is very much a sequential calculation (I know you can reduce the Fibonacci number to a closed-form equation, but for this example I'll use the recurrence). So in this sense it is much like a bus driving along an empty freeway: it is no better off than on a single-lane road.
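
In code the dependency is easy to see: every step needs the result of the step before it, so there is nothing to hand to a second core. A minimal sketch:

#include <stdio.h>

/* Iterative Fibonacci: each step depends on the two previous steps,
   so the loop is inherently sequential and can't be split across cores. */
unsigned long long fib(int n)
{
    unsigned long long prev = 0, curr = 1;
    for (int i = 2; i <= n; i++) {
        unsigned long long next = prev + curr;  /* needs both earlier results */
        prev = curr;
        curr = next;
    }
    return n == 0 ? 0 : curr;
}

int main(void)
{
    printf("fib(10) = %llu\n", fib(10));   /* 55 */
    return 0;
}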

 

The other major issue is that threading can have significant overhead. The classic example is two threads that need access to one variable:

var myvar = 10;
function abc() { myvar++; }
function def() { myvar--; }

To run abc and def concurrently you need a mutex to ensure proper access, but the cost of creating and locking that mutex just isn't worth it: you will actually slow things down by threading abc and def versus simply running abc and then def.
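
A C/pthreads version of that example (my own naming, built with cc -pthread) shows where the cost comes from: every increment or decrement now pays for a lock and an unlock, which can easily cost more than the work itself.

#include <pthread.h>
#include <stdio.h>

static int myvar = 10;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *abc(void *arg)   /* increments myvar */
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);    /* overhead on every single iteration */
        myvar++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

static void *def(void *arg)   /* decrements myvar */
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);
        myvar--;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, abc, NULL);
    pthread_create(&t2, NULL, def, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("myvar = %d\n", myvar);   /* 10, but slower than running abc() then def() */
    return 0;
}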

 

The problem also happens with RAM: two threads that both access RAM heavily will be bottlenecked by the RAM. (Not often, but I have hit a RAM bottleneck once; it was more efficient to use a single thread than multiple threads because the RAM access benefited from predictable reads/writes.)

 

The last issue with multi-threading is that a lot of programs don't need to be heavily threaded. Most programs that I use don't max out one core, and produce their results in such a small time that threading would be imperceptible to us. So the amount of work involved in splitting a single thread into multiple threads is not worth the extra complexity.



I think developers are usually either too lazy or not knowledgeable enough to utilize the hardware to its fullest potential. For example, look at the PS3. It's, what, 7 years old? And developers are only now figuring out how to use it to its fullest potential. At the same time, you also have developers still creating poorly made games. As for a "super-language", I don't think that's ever going to happen. It just seems like it'd be more work than it's worth. People can be taught how to utilize the hardware correctly far more easily than we can create a program that will flawlessly generate code; it seems quite pointless. Have you seen existing code generators? They don't usually work well. And to generate straight-up binary (that's what you're getting at, right?) would just be hell.



As for a "super-language", I don't think that's ever going to happen.

Even a "super-language" is really just a stepping stone.  I've long been bothered by the lack of integration between the various methods used to interface with computers and technology in general.  Whether it's a keyboard, mouse, touch, gesture, or speech none really acknowledges the existence of the other.  It's bad enough being forced to program in "languages" with fewer words than a kindergartener (or is it pre-1Aer these days?) knows, but using our interfaces often feels like having all one's senses but only being able to use one or two at a time.

 

Ultimately, for technology to be truly integrated into our lives, we need a system we can interact with as naturally and fluidly as we interact with each other.  When we deal with someone we don't just talk; we gesture, we touch, and we use tools for fine motor control, expressing ourselves in whatever way seems most convenient and effective for the situation.  Our technology needs to process all of it.

 

Artificial touch and gesture systems are just a new version of Palm script and just as stilted.  "Pinch-to-zoom"?!?  If I want to look at something more closely, I should be able to point and say "make that bigger".  If I want to go back a page, I don't want to remember how many fingers in what orientation to swipe; I want to say "go back" or, if I'm feeling lazy, wave my hand.  If I want to write, give me a pen or have my computer take dictation.

 

Sorry... Just my rant waiting for the future.


Even a "super-language" is really just a stepping stone.  I've long been bothered by the lack of integration between the various methods used to interface with computers and technology in general.  Whether it's a keyboard, mouse, touch, gesture, or speech none really acknowledges the existence of the other.  It's bad enough being forced to program in "languages" with fewer words than a kindergartener (or is it pre-1Aer these days?) knows, but using our interfaces often feels like having all one's senses but only being able to use one or two at a time.

 

Ultimately, for technology to be truly integrated into our lives we need a system we can interact with as naturally and fluidly as we do with each other.  When we deal with someone we don't just talk, we gesture and we touch and we use tools for fine motor control to express ourselves in the way it seems most convenient and effective to do so depending upon the situation and our technology needs to process all of it.

 

Artificial touch and gesture systems are just a new version of Palm script and just as stilted.  "Pinch-to-Zoom"?!?!? If I want to look at something more closely I should be able to point and say make that bigger.  If I want to go back a page I don't want to remember how many fingers in what orientation to swipe.  I want to say go back or, if I'm feeling lazy, wave my hand.  If I write I want a pen or have my computer take dictation.

 

Sorry... Just my rant waiting for the future.

I think you might be a bit friendly with all the touching.. :s

 

Anyway, while it is an amazing idea to remove the interface and have the computer simply do what you want, as if it understood exactly what you wanted, it's near impossible with the technology we have today. As I said before, people are either too lazy to do what they can, or they don't know how to do it. And not only is that a valid argument, there are also other factors to consider. To start, it would be hard as hell to create. Plus, people who don't understand it would hate it; they'd find it useless even though it'd be the most useful computer device ever created. What I'm saying is you would have to find a way to do it without a simple interface, but in a way people like. It's not all about whether it's more useful or whether it's easy to make. The biggest factor is community reaction.

 

That last paragraph is easily possible to create today. We have motion detection. We have voice command. Didn't Kindle release an e-reader that tracked your eyes to know when to flip the page for you? But, nonetheless, there's still the issue of finding a way to utilise these features in a way that consumers will like.



Meh, with programming languages getting higher and higher level, I personally don't think it is. For example, look at Visual Basic and how stupidly easy it is to use: you can pretty much just pick it up, watch a few tutorials on YouTube to learn the basic syntax, and as soon as you start typing the IDE will predict your code anyway, so you barely even need to know anything ._. I bet hardly anyone in the programming community (out of everyone, maybe 10-15%) knows how to do a basic "Hello World!" in ASM (assembly).



I bet hardly anyone in the programming community (out of everyone, maybe 10-15%) knows how to do a basic "Hello World!" in ASM (assembly).

That illustrates my earlier point well. Go back 50 years and many computers didn't have displays or even proper printers, just lights or other simple output that you had to interpret yourself. Honestly, writing a proper hello world in assembly would take so much code on most platforms these days that it's just silly, depending on how serious you were about it. I could write the text "Hello World." into memory, but if I had to find my way to the video buffer through assembly, I probably wouldn't want to bother if I didn't have to.

 

The point being that we have developed tools to do things that aren't worth thinking about anymore. My belief is that some time in the future we won't have to worry about threads, processes, and GPUs because our tools will abstract that for us. And most likely there will be other things that come along that we'll need new tools for, e.g. a quantum co-processor card :P .



That last paragraph is easily possible to create today. We have motion detection. We have voice command. Didn't Kindle release an e-reader that tracked your eyes to know when to flip the page for you? But, nonetheless, there's still the issue of finding a way to utilise these features in a way that consumers will like.

Alright, so why isn't it being done?  The first company to make an interface that truly integrates the latest in touch, voice recognition/command, OCR, and motion detection, along with facial recognition and adaptive learning software for personalized use, can't help but make a mint.


Ultimately, for technology to be truly integrated into our lives, we need a system we can interact with as naturally and fluidly as we interact with each other.

 

Well, it appears you share the views of Mark Weiser, the father of ubiquitous computing. The problem with achieving what you just said is not one of interface or underlying technology, but a more general one.

 

He wrote an article about this (well, many more, actually!); I found something similar: http://www.ubiq.com/hypertext/weiser/SciAmDraft3.html

It is quite long and not very technical, and it was written in 1991. But this is what computing is converging toward, however slowly.


The point being that we have developed tools to do things that aren't worth thinking about anymore. My belief is that some time in the future we won't have to worry about threads, processes, and GPUs because our tools will abstract that for us.

With higher-level languages like BASIC you won't have to worry; the compiler will just be adjusted.



 

With higher-level languages like BASIC you won't have to worry; the compiler will just be adjusted.

 

Sadly it won't work like that. The compiler can't do all the hard work for you. It is extremely important to use and develop algorithms that can be parallelized efficiently; if you choose an algorithm that is inappropriate for parallelization, the compiler can't do anything about it. Furthermore, it is a very hard problem to automatically find data dependencies and therefore to automatically parallelize a program.

It might be easier with other programming paradigms like functional programming, but currently I can't see us moving away from imperative programming (for some good reasons, I guess).
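
A small C illustration of that "the algorithm has to allow it" point: the first loop below has no dependencies between iterations and is trivially parallelizable (by OpenMP, an auto-parallelizing compiler, whatever), while the second carries a dependency from one iteration to the next and can't naively be split up. The arrays are just placeholders.

#include <stdio.h>

#define N 1000000
static double a[N], b[N], c[N];

void independent(void)
{
    /* No iteration reads anything another iteration writes:
       a compiler or OpenMP can split this across cores. */
    for (int i = 0; i < N; i++)
        c[i] = 2.0 * a[i] + b[i];
}

void dependent(void)
{
    /* Loop-carried dependency: iteration i needs c[i-1],
       so the iterations cannot simply be run in parallel. */
    c[0] = a[0];
    for (int i = 1; i < N; i++)
        c[i] = c[i - 1] + a[i];
}

int main(void)
{
    independent();
    dependent();
    printf("c[N-1] = %f\n", c[N - 1]);
    return 0;
}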

 

 

 


 

 

 

 

 

Sadly it won't work like that. The compiler can't do all the hard work for you. [...] Furthermore, it is a very hard problem to automatically find data dependencies and therefore to automatically parallelize a program.

 

 

Actually, this brings up a point I forgot to mention.  If you compare multiplayer games with single-player games, single-player games are a lot easier to multithread than multiplayer ones.  Much like what Frodo said, it can all depend on data dependencies.

 

Consider a multiplayer game: for most games now, battles are calculated on the user's end rather than the server's end.  The problem is you could make the physics fully multithreaded, utilizing all the cores, but then different threads might run slightly faster on one computer than on another; this will cause different collision detections, which could cause drastically different physics.  While in single-player you don't have to worry about syncing, in multiplayer you do.  So a single-player game can use the multiple cores for some physics (though it won't always be repeatable, which I like in a game), while a multiplayer game needs to worry about syncing and doing calculations in the correct order (or have characters potentially "pop" around).



 

 

 

 

 

It might be easier with other programming paradigms like functional programming, but currently I can't see us moving away from imperative programming (for some good reasons, I guess).

 

I know how it works, you know, but I was indeed talking about functional programming; if the functions are adjusted, it wouldn't be that much different.



 

Sadly it won't work like that. The compiler can't do all the hard work for you. It is extremely important to use and develop algorithms that can be parallelized efficiently.

 

 

Fortunately, that is exactly what compilers do for us. I understand that nine women can't make a baby in a month, but modern compilers can lay out a lot of optimizations for you. And many times they handle simple code more easily than "creative" code.
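
As a tiny example of "simpler code is easier for the compiler": a plain loop like the first function below is something gcc or clang at -O2/-O3 can typically unroll and auto-vectorize, while the "creative" version of the same sum tends to get in the optimizer's way for no benefit (exact results depend on the compiler, of course; this is just an illustration with made-up names):

#include <stdio.h>
#include <stddef.h>

/* Straightforward: easy for the compiler to unroll and vectorize. */
long sum_simple(const int *x, size_t n)
{
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += x[i];
    return s;
}

/* Same answer, but the backwards pointer walk and the extra branch
   make the optimizer's dependence analysis harder for no benefit. */
long sum_creative(const int *x, size_t n)
{
    long s = 0;
    const int *p = x + n;
    while (p != x) {
        p--;
        if (*p != 0)
            s += *p;
    }
    return s;
}

int main(void)
{
    int data[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    printf("%ld %ld\n", sum_simple(data, 8), sum_creative(data, 8));
    return 0;
}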

 

There are very smart people out there writing the tools we use, so fortunately the most common algorithms can be recognized by the compiler and optimized for a more powerful architecture. It is already happening and will continue to happen. And I would argue that future hardware will not get proper, widespread use until tools exist that don't require extensive study to use.

 

As a side example, 10 years ago if you wanted to do GPU compute you needed to know how to manipulate graphics data in a way that computed your answer. A little more than 5 years ago, Nvidia released CUDA for their 8800 series of cards, allowing developers to write something more closely resembling C. And in the works right now from the Khronos Group is SPIR, which can act as an intermediate layer between OpenCL and the language of your choice. In other words, you will soon be able to code in any language you please to take advantage of co-processors.

 

Lastly, there is definitely the possibility that a paradigm shift is necessary to bring us a language that makes thinking about the proper algorithms easier, or maybe the takeover of a language that already exists. I can admit that writing code is a two-way street between yourself and the syntax in front of you.



Fortunately, that is exactly what compilers do for us. [...] So fortunately the most common algorithms can be recognized by the compiler and optimized for a more powerful architecture. It is already happening and will continue to happen.

 

I'm not saying that compilers can't do a lot of optimization; of course they can, and they can also parallelize calculations to a certain extent. And yes, we will get better high-level languages for (GP)GPU programming that will make things easier.

 

My point is that programmers still have to take care of the algorithmic, more basic design of the program to make it suitable for these kinds of optimizations. If you make bad decisions at this stage, there is nothing a tool can do about it. Therefore it isn't a very good idea to completely forget about parallelization and let the compiler do the work for you. I found a great article that explains my point better than I can: http://www.ncsa.illinois.edu/extremeideas/site/on_the_limits_of_automatic_parallelization

 

 

When people find out what I do for a living, most eventually ask when the compiler or the computer will be able to figure out how to parallelize their work automatically.  Everyone would like to take whatever task they have, put it on a supercomputer, and make it run fast, without having to re-invent the program to run in the parallel environment.

My usual answer is that this will happen as soon as compilers are smart enough to have insight into whatever the questioner does for a living.  When I’ve tried to parallelize a code without knowing anything about the application domain, there isn’t much I can do.  There are a few patterns I can look for, and there’s often something that I can do – but my options are very limited. I can’t, for instance, suggest an alternative algorithm, or analyze the tradeoff between quality and speed in a way that makes sense for their particular discipline.


programmers still have to take care of the algorithmic, more basic design of the program to make it suitable for these kinds of optimizations. If you make bad decisions at this stage there is nothing a tool can do about it. 

 

I'm mostly quoting here for context. I primarily wanted to make clear that compilers do a lot of work for us and will continue to do more. There will always be things they don't do so well; how else will developers keep their job security :P ? Even today, poorly written code can resist any help an optimizing compiler can give it.

 

It will most likely become a trade-off between capable tools and knowledgeable developers, ranging anywhere from awesome tools plus well-thought-out code the compilers can work with, down to the other end of the spectrum, which is probably less than we have today.

 

Though this is a very important question in computing today.


